Sample records for maximum margin criterion

  1. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm of statistically uncorrelated optimal discriminant vectors and a new algorithm of orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
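
    A minimal numpy sketch of the core MMC computation (projecting onto the leading eigenvectors of Sb - Sw) is given below for readers who want to experiment. It sketches the basic criterion only, not the paper's statistically uncorrelated (SUMMC) or orthogonal variants, and the function name is illustrative.

    ```python
    import numpy as np

    def mmc_projection(X, y, n_components):
        """Basic maximum margin criterion (MMC) feature extraction:
        project onto the leading eigenvectors of Sb - Sw."""
        mean = X.mean(axis=0)
        d = X.shape[1]
        Sb = np.zeros((d, d))  # between-class scatter
        Sw = np.zeros((d, d))  # within-class scatter
        for c in np.unique(y):
            Xc = X[y == c]
            diff = (Xc.mean(axis=0) - mean)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
            Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
        # MMC maximizes tr(W^T (Sb - Sw) W); no scatter matrix is inverted,
        # so the small-sample singularity problem of Fisher LDA is avoided.
        vals, vecs = np.linalg.eigh(Sb - Sw)
        W = vecs[:, np.argsort(vals)[::-1][:n_components]]
        return X @ W, W
    ```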

  2. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
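
    The code-theoretic selection rule is easy to score for small codes. The toy sketch below enumerates constant-weight codewords and computes the max/min Hamming-distance ratio; the 4-codeword candidate subsets are invented for illustration, and crossbar-scale codes would be searched more systematically.

    ```python
    from itertools import combinations

    def constant_weight_codewords(n, w):
        """All binary words of length n and Hamming weight w,
        each represented by the set of positions of its 1-bits."""
        return [frozenset(c) for c in combinations(range(n), w)]

    def distance_ratio(code):
        """Ratio of maximum to minimum pairwise Hamming distance;
        the selection criterion is to minimize this ratio."""
        d = [len(a ^ b) for a, b in combinations(code, 2)]
        return max(d) / min(d)

    words = constant_weight_codewords(6, 3)
    print(distance_ratio(words[:4]))        # 1.0: equidistant, preferred
    print(distance_ratio(words[::5][:4]))   # 2.0: worse voltage margin expected
    ```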

  3. Core-log integration for rock mechanics using borehole breakouts and rock strength experiments: Recent results from plate subduction margins

    NASA Astrophysics Data System (ADS)

    Saito, S.; Lin, W.

    2014-12-01

    Core-log integration has been applied to rock mechanics studies in scientific ocean drilling since 2007 at plate subduction margins such as the Nankai Trough, the Costa Rica margin, and the Japan Trench. The state of stress in a subduction wedge is essential for controlling the dynamics of the plate boundary fault. One common method of estimating the stress state is analysis of borehole breakouts (drilling-induced borehole wall compressive failures) recorded in borehole image logs to determine the maximum horizontal principal stress orientation. Borehole breakouts can also yield a possible range of stress magnitudes based on a rock compressive strength criterion. In this study, we constrained the stress magnitudes based on two different rock failure criteria, the Mohr-Coulomb (MC) criterion and the modified Wiebols-Cook (mWC) criterion. Because at the borehole wall the MC criterion reduces to the unconfined compression state, only one rock parameter, the unconfined compressive strength (UCS), is needed to constrain stress magnitudes. The mWC criterion needs the UCS, Poisson's ratio, and the internal frictional coefficient determined by triaxial compression experiments, in order to take the effects of the intermediate principal stress on rock strength into consideration. We conducted various strength experiments on samples taken during IODP Expeditions 334/344 (Costa Rica Seismogenesis Project) to evaluate a reliable method for estimating stress magnitudes. Our results show that the effects of the intermediate principal stress on rock compressive failure occurring on a borehole wall are not negligible.

  4. A criterion for maximum resin flow in composite materials curing process

    NASA Astrophysics Data System (ADS)

    Lee, Woo I.; Um, Moon-Kwang

    1993-06-01

    On the basis of Springer's resin flow model, a criterion for maximum resin flow in autoclave curing is proposed. The validity of the criterion was verified for two resin systems (Fiberite 976 and Hercules 3501-6 epoxy resins). The parameter required for the criterion can be easily estimated from measured resin viscosity data. The proposed criterion can be used to establish a proper cure cycle that ensures maximum resin flow and, thus, maximum compaction.

  5. A multiloop generalization of the circle criterion for stability margin analysis

    NASA Technical Reports Server (NTRS)

    Safonov, M. G.; Athans, M.

    1979-01-01

    In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.

  6. Relations between the efficiency, power and dissipation for linear irreversible heat engine at maximum trade-off figure of merit

    NASA Astrophysics Data System (ADS)

    Iyyappan, I.; Ponmurugan, M.

    2018-03-01

    A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When a heat engine is working at the maximum of the Ω̇ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive general relations between the power, the efficiency at the maximum Ω̇ criterion, and the minimum dissipation for a linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound …

  7. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods for multilayer perceptron (MLP) binary classifiers, two of them based on the backpropagation approach and a third based on information theory. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity of solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden-layer output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
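
    As a rough illustration of the maximal-margin principle (not the paper's exact MMGDX or Lp-norm objectives), the sketch below backpropagates a hinge-style margin surrogate through both layers of a tiny one-hidden-layer MLP; the data and hyperparameters are invented for the demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))               # toy inputs
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])         # toy labels in {-1, +1}

    W1 = rng.normal(scale=0.5, size=(10, 16)); b1 = np.zeros(16)
    w2 = rng.normal(scale=0.5, size=16);       b2 = 0.0
    lr, lam = 0.05, 1e-3

    for _ in range(500):
        H = np.tanh(X @ W1 + b1)                 # hidden-layer output
        f = H @ w2 + b2                          # output hyperplane score
        df = np.where(y * f < 1.0, -y, 0.0) / len(y)   # hinge gradient wrt f
        dH = np.outer(df, w2) * (1.0 - H**2)     # backprop into the hidden layer
        # Both layers move: the hidden space is reshaped so the output-layer
        # hyperplane can attain a larger margin (the MM idea).
        w2 -= lr * (H.T @ df + 2 * lam * w2)     # lam*||w2||^2 widens the margin
        b2 -= lr * df.sum()
        W1 -= lr * (X.T @ dH)
        b1 -= lr * dH.sum(axis=0)
    ```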

  8. Support Vector Feature Selection for Early Detection of Anastomosis Leakage From Bag-of-Words in Electronic Health Records.

    PubMed

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-09-01

    The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested in feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC), using free text extracted from EHRs. We use a bag-of-words model to investigate the potential of feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust linear maximum margin classifier of the support vector machine, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) a computation-intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision-making phase.
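
    A minimal sketch of the second (bootstrap-resampling) strategy follows, assuming scikit-learn's LinearSVC as the linear maximum margin classifier; the stability score is an illustrative choice, not necessarily the statistic used in the paper.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.utils import resample

    def bootstrap_svm_feature_scores(X, y, n_boot=200, C=1.0, seed=0):
        """Rank bag-of-words features by the stability of their linear
        SVM weights across bootstrap resamples."""
        rng = np.random.default_rng(seed)
        W = np.empty((n_boot, X.shape[1]))
        for b in range(n_boot):
            Xb, yb = resample(X, y, random_state=int(rng.integers(1e6)))
            W[b] = LinearSVC(C=C, dual=False).fit(Xb, yb).coef_.ravel()
        # A feature is kept when its weight is consistently nonzero:
        # large |mean weight| relative to its bootstrap standard deviation.
        return np.abs(W.mean(axis=0)) / (W.std(axis=0) + 1e-12)
    ```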

  9. Thermoelectric energy converters under a trade-off figure of merit with broken time-reversal symmetry

    NASA Astrophysics Data System (ADS)

    Iyyappan, I.; Ponmurugan, M.

    2017-09-01

    We study the performance of a three-terminal thermoelectric device, such as a heat engine or a refrigerator, with broken time-reversal symmetry by applying the unified trade-off figure of merit (the Ω̇ criterion), which accounts for both useful energy and losses. For the heat engine, we find that a thermoelectric device working under the maximum Ω̇ criterion gives a significantly better performance than a device working at maximum power output. Within the framework of linear irreversible thermodynamics such a direct comparison is not possible for refrigerators; however, our study indicates that, for a refrigerator, the maximum cooling load gives a better performance than the maximum Ω̇ criterion for larger asymmetry. Our results can be useful for choosing a suitable optimization criterion for operating a real thermoelectric device with broken time-reversal symmetry.

  10. Comment and some questions on "Puzzles and the maximum effective moment (MEM) criterion in structural Geology"

    NASA Astrophysics Data System (ADS)

    Tong, Hengmao

    2012-03-01

    Zheng et al. (Zheng and Wang, 2004; Zheng et al., 2011) proposed a new mechanism for the formation of ductile deformation zones which is related to the effective moment instead of the shear stress, with the deformation zone developing along the plane of maximum effective moment. The mathematical expression of the maximum effective moment (the criterion of maximum effective moment, abbreviated as the MEM criterion; Zheng and Wang, 2004; Zheng et al., 2011) is Meff = 0.5(σ1 − σ3)L sin2α sinα, where σ1 − σ3 is the yield strength of a material or rock, L is the unit length (of cleavage) in the σ1 direction, and α is the angle between σ1 and a given plane. The effective moment reaches its maximum value when α is ±54.7°, and deformation zones tend to appear in pairs with a conjugate angle of 2α, 109.4°, facing σ1. There is no remarkable drop of Meff from the maximum value within the range 54.7° ± 10°, which is favorable for the formation of ductile deformation zones. As a result, the origin of low-angle normal faults, high-angle reverse faults, and certain types of conjugate strike-slip faults, which are incompatible with the Mohr-Coulomb criterion, can be reasonably explained with the MEM criterion (Zheng et al., 2011). Furthermore, many natural and experimental cases have been found or collected to support the criterion.
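
    The ±54.7° value follows directly from maximizing the angular factor of Meff:

    ```latex
    \frac{d}{d\alpha}\bigl(\sin 2\alpha \,\sin\alpha\bigr)
      = 2\cos 2\alpha\,\sin\alpha + \sin 2\alpha\,\cos\alpha
      = 2\sin\alpha\,\bigl(3\cos^{2}\alpha - 1\bigr) = 0
    \;\Longrightarrow\; \cos\alpha = \tfrac{1}{\sqrt{3}},
    \qquad \alpha \approx 54.7^{\circ}.
    ```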

  11. 12 CFR 221.7 - Supplement: Maximum loan value of margin stock and other collateral.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... value of margin stock and other collateral. (a) Maximum loan value of margin stock. The maximum loan... nonmargin stock and all other collateral. The maximum loan value of nonmargin stock and all other collateral... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Supplement: Maximum loan value of margin stock...

  12. In vitro marginal fit of three all-ceramic crown systems.

    PubMed

    Yeo, In-Sung; Yang, Jae-Ho; Lee, Jai-Bong

    2003-11-01

    Studies on marginal discrepancies of single restorations using various systems and materials have resulted in statistical inferences that are ambiguous because of small sample sizes and limited numbers of measurements per specimen. The purpose of this study was to compare the marginal adaptation of single anterior restorations made using different systems. The in vitro marginal discrepancies of 3 different all-ceramic crown systems (Celay In-Ceram, conventional In-Ceram, and the IPS Empress 2 layering technique) and a control group of metal ceramic restorations were evaluated and compared by measuring the gap dimension between the crowns and the prepared tooth at the marginal opening. The crowns were made for 1 extracted maxillary central incisor prepared with a 1-mm shoulder margin and 6-degree tapered walls by milling. Thirty crowns per system were fabricated. Crown measurements were recorded with an optical microscope, with an accuracy of ±0.1 μm, at 50 points spaced approximately 400 μm apart along the circumferential margin. The criterion of 120 μm was used as the maximum clinically acceptable marginal gap. Mean gap dimensions and standard deviations were calculated for the marginal opening. The data were analyzed with a 1-way analysis of variance (alpha=.05). Mean gap dimensions and standard deviations at the marginal opening for the incisor crowns were 87 ± 34 μm for the control, 83 ± 33 μm for Celay In-Ceram, 112 ± 55 μm for conventional In-Ceram, and 46 ± 16 μm for the IPS Empress 2 layering technique. Significant differences were found among the crown groups (P<.05). Compared with the control group, the IPS Empress 2 group had significantly smaller marginal discrepancies (P<.05), and the conventional In-Ceram group exhibited significantly greater marginal discrepancies (P<.05). There was no significant difference between the Celay In-Ceram and the control group. Within the limitations of this study, the marginal discrepancies were all within the clinically acceptable standard set at 120 μm. However, the IPS Empress 2 system showed the smallest and most homogeneous gap dimension, whereas the conventional In-Ceram system presented the largest and most variable gap dimension compared with the metal ceramic (control) restoration.

  13. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
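
    In epigraph form the minimax problem reads: minimize t subject to MSE_k(p) <= t and 0 <= p_k <= P_max. The toy sketch below uses an invented interference-coupled MSE model (not the paper's MC-CDMA expressions) and reproduces the qualitative observation that not all users transmit at full power.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    g, sigma2, c, Pmax = np.array([1.0, 0.5, 0.2]), 0.1, 0.05, 1.0

    def mse(p):
        # Toy per-user MSE: own power helps, other users' power interferes.
        interf = c * (g @ p - g * p)
        return 1.0 / (1.0 + g * p / (sigma2 + interf))

    # Epigraph form of the minimax problem: min t  s.t.  MSE_k(p) <= t.
    x0 = np.r_[np.full(3, 0.5), 1.0]          # variables: [p_1, p_2, p_3, t]
    res = minimize(lambda x: x[-1], x0, method="SLSQP",
                   bounds=[(0, Pmax)] * 3 + [(0, None)],
                   constraints={"type": "ineq",
                                "fun": lambda x: x[-1] - mse(x[:-1])})
    p_opt = res.x[:-1]
    print(p_opt, mse(p_opt))   # some users end up below full power
    ```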

  14. Characterizing entanglement with global and marginal entropic measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Illuminati, Fabrizio; De Siena, Silvio

    2003-12-01

    We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.

  15. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  16. INTEGRATION OF RELIABILITY WITH MECHANISTIC THERMALHYDRAULICS: REPORT ON APPROACH AND TEST PROBLEM RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. Schroeder; R. W. Youngblood

    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200°F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
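
    The load-versus-capacity picture can be illustrated with a few lines of Monte Carlo sampling; the lognormal spectra and their parameters below are notional stand-ins, not outputs of the RISMC tools.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Notional probabilistic "load" and "capacity" spectra for one failure
    # mode; parameters are invented for illustration.
    load = rng.lognormal(mean=7.0, sigma=0.10, size=1_000_000)
    capacity = rng.lognormal(mean=7.2, sigma=0.05, size=1_000_000)
    # Independent sampling here; as the report notes, real analyses may
    # need correlated load and capacity.
    p_fail = np.mean(load > capacity)   # probability that load exceeds capacity
    print(f"P(load > capacity) ~ {p_fail:.2e}")
    ```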

  17. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.

  18. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
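
    The stated convergence criterion is straightforward to evaluate on a recorded error trace; in the sketch below the smoothing window is an assumption of this illustration, not a parameter from the paper.

    ```python
    import numpy as np

    def convergence_time(err, steady_start=50_000, tol=0.05, win=500):
        """First symbol index at which the windowed average output error
        falls within `tol` (5%) of the average error measured after
        `steady_start` symbols, per the criterion described above."""
        steady = np.mean(err[steady_start:])
        avg = np.convolve(err, np.ones(win) / win, mode="valid")
        hits = np.flatnonzero(np.abs(avg - steady) <= tol * steady)
        return int(hits[0]) if hits.size else None
    ```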

  19. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    PubMed

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since the conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data, our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  20. High-speed all-optical logic inverter based on stimulated Raman scattering in silicon nanocrystal.

    PubMed

    Sen, Mrinal; Das, Mukul K

    2015-11-01

    In this paper, we propose a new device architecture for an all-optical logic inverter (NOT gate) that is cascadable with a similar device. The inverter is based on stimulated Raman scattering in silicon nanocrystal waveguides, which are embedded in a silicon photonic crystal structure. The Raman response function of the silicon nanocrystal is evaluated to explore the transfer characteristic of the inverter. A maximum product criterion for the noise margin is adopted to analyze the cascadability of the inverter. The time domain response of the inverter, which demonstrates successful inversion operation at 100 Gb/s, is analyzed. The propagation delay of the inverter is on the order of 5 ps, which is less than the delay in most of today's electronic logic families. The overall dimension of the device is around 755 μm × 15 μm, which ensures integration compatibility with the mature silicon industry.

  1. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Muscle Fiber Orientation Angle Dependence of the Tensile Fracture Behavior of Frozen Fish Muscle

    NASA Astrophysics Data System (ADS)

    Hagura, Yoshio; Okamoto, Kiyoshi; Suzuki, Kanichi; Kubota, Kiyoshi

    We have proposed a new method for cutting frozen fish, named "cryo-cutting". This method applies a tensile or bending fracture force to the frozen fish at appropriately low temperatures. In this paper, to clarify the cryo-cutting mechanism, we analyzed the tensile fracture behavior of frozen fish muscle. In the analysis, the frozen fish muscle was treated as a unidirectionally fiber-reinforced composite material consisting of fibers (muscle fibers) and a matrix (connective tissue). Fracture criteria for unidirectionally fiber-reinforced composite materials (the maximum stress criterion and the Tsai-Hill criterion) were used. The following results were obtained: (1) by using the Tsai-Hill criterion, the muscle fiber orientation angle dependence of the tensile fracture stress could be calculated; (2) by using the maximum stress criterion jointly with the Tsai-Hill criterion, the muscle fiber orientation angle dependence of the fracture mode of the frozen fish muscle could be estimated.
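
    For reference, the off-axis tensile fracture stress under the Tsai-Hill criterion has a closed form; the sketch below uses placeholder strength values rather than the measured parameters of the study.

    ```python
    import numpy as np

    def tsai_hill_strength(theta_deg, X, Y, S):
        """Off-axis tensile fracture stress of a unidirectional composite
        under the Tsai-Hill criterion. X, Y: longitudinal and transverse
        strengths; S: in-plane shear strength; theta: fiber angle."""
        t = np.radians(theta_deg)
        c2, s2 = np.cos(t) ** 2, np.sin(t) ** 2
        inv_sq = c2**2 / X**2 + (1 / S**2 - 1 / X**2) * s2 * c2 + s2**2 / Y**2
        return 1.0 / np.sqrt(inv_sq)

    # Placeholder strengths (MPa); the strength-vs-angle trend is the point.
    print(tsai_hill_strength(np.array([0, 15, 45, 90]), X=12.0, Y=3.0, S=4.0))
    ```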

  3. Fatigue and Fracture-Toughness Characterization of SAW and SMA A537 Class I Ship-Steel Weldments.

    DTIC Science & Technology

    1981-12-01

    Charpy criterion and proposed NDT-DT criterion of Rolfe. Recommendations are made and further research is suggested to help clarify the assessment of... acceptable performance at -60°F. Likewise, at -60°F the NDT and DT data for these weldments marginally exceed the criteria proposed by Rolfe when the... exceed the CVN values equivalent to the 5/8 DT values required by Rolfe. The 5/8-inch dynamic-tear specimen is not recommended as a quality-control test

  4. Occurrence and distribution of fecal indicator bacteria, and physical and chemical indicators of water quality in streams receiving discharge from Dallas/Fort Worth International Airport and vicinity, North-Central Texas, 2008

    USGS Publications Warehouse

    Harwell, Glenn R.; Mobley, Craig A.

    2009-01-01

    This report, done by the U.S. Geological Survey in cooperation with Dallas/Fort Worth International (DFW) Airport in 2008, describes the occurrence and distribution of fecal indicator bacteria (fecal coliform and Escherichia [E.] coli), and the physical and chemical indicators of water quality (relative to Texas Surface Water Quality Standards), in streams receiving discharge from DFW Airport and vicinity. At sampling sites in the lower West Fork Trinity River watershed during low-flow conditions, geometric mean E. coli counts for five of the eight West Fork Trinity River watershed sampling sites exceeded the Texas Commission on Environmental Quality E. coli criterion, thus not fully supporting contact recreation. Two of the five sites with geometric means that exceeded the contact recreation criterion are airport discharge sites, which here means that the major fraction of discharge at those sites is from DFW Airport. At sampling sites in the Elm Fork Trinity River watershed during low-flow conditions, geometric mean E. coli counts exceeded the geometric mean contact recreation criterion for seven (four airport, three non-airport) of 13 sampling sites. Under low-flow conditions in the lower West Fork Trinity River watershed, E. coli counts for airport discharge sites were significantly different from (lower than) E. coli counts for non-airport sites. Under low-flow conditions in the Elm Fork Trinity River watershed, there was no significant difference between E. coli counts for airport sites and non-airport sites. During stormflow conditions, fecal indicator bacteria counts at the most downstream (integrator) sites in each watershed were considerably higher than counts at those two sites during low-flow conditions. When stormflow sample counts are included with low-flow sample counts to compute a geometric mean for each site, classification changes from fully supporting to not fully supporting contact recreation on the basis of the geometric mean contact recreation criterion. All water temperature measurements at sampling sites in the lower West Fork Trinity River watershed were less than the maximum criterion for water temperature for the lower West Fork Trinity segment. Of the measurements at sampling sites in the Elm Fork Trinity River watershed, 95 percent were less than the maximum criterion for water temperature for the Elm Fork Trinity River segment. All dissolved oxygen concentrations were greater than the minimum criterion for stream segments classified as exceptional aquatic life use. Nearly all pH measurements were within the pH criterion range for the classified segments in both watersheds, except for those at one airport site. For sampling sites in the lower West Fork Trinity River watershed, all annual average dissolved solids concentrations were less than the maximum criterion for the lower West Fork Trinity segment. For sampling sites in the Elm Fork Trinity River, nine of the 13 sites (six airport, three non-airport) had annual averages that exceeded the maximum criterion for that segment. For ammonia, 23 samples from 12 different sites had concentrations that exceeded the screening level for ammonia. Of these 12 sites, only one non-airport site had more than the required number of exceedances to indicate a screening level concern. Stormflow total suspended solids concentrations were significantly higher than low-flow concentrations at the two integrator sites. 
For sampling sites in the lower West Fork Trinity River watershed, all annual average chloride concentrations were less than the maximum annual average chloride concentration criterion for that segment. For the 13 sampling sites in the Elm Fork Trinity River watershed, one non-airport site had an annual average concentration that exceeded the maximum annual average chloride concentration criterion for that segment.

  5. Study on the criterion to determine the bottom deployment modes of a coilable mast

    NASA Astrophysics Data System (ADS)

    Ma, Haibo; Huang, Hai; Han, Jianbin; Zhang, Wei; Wang, Xinsheng

    2017-12-01

    A practical design criterion that allows the bottom of a coilable mast to deploy in local coil mode is proposed. The criterion is defined in terms of the initial bottom helical angle and was obtained from bottom deformation analyses. Discretizing the longerons into short rods, analyses were conducted based on the cylinder assumption and Kirchhoff's kinetic analogy theory. Then, iterative calculations for the bottom four rods were carried out. A critical bottom helical angle was obtained when the rate of change of the angle equaled zero. This critical value was defined as the criterion for judging the bottom deployment mode. Subsequently, micro-gravity deployment tests were carried out and bottom deployment simulations based on the finite element method were developed. Through comparisons of bottom helical angles in the critical state, the proposed criterion was evaluated and modified; that is, an initial bottom helical angle less than the critical value, with a design margin of -13.7%, can ensure that the mast bottom deploys in local coil mode and thereby determine a successful local coil deployment of the entire coilable mast.

  6. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    NASA Astrophysics Data System (ADS)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.

  7. Relative source allocation of TDI to drinking water for derivation of a criterion for chloroform: a Monte-Carlo and multi-exposure assessment.

    PubMed

    Niizuma, Shun; Matsui, Yoshihiko; Ohno, Koichi; Itoh, Sadahiko; Matsushita, Taku; Shirasaki, Nobutaka

    2013-10-01

    Drinking water quality standard (DWQS) criteria for chemicals for which there is a threshold for toxicity are derived by allocating a fraction of the tolerable daily intake (TDI) to exposure from drinking water. We conducted physiologically based pharmacokinetic model simulations for chloroform and have proposed an equation for the total oral-equivalent potential intake via three routes (oral ingestion, inhalation, and dermal exposure), the biologically effective doses of which were converted to oral-equivalent potential intakes. The probability distributions of total oral-equivalent potential intake in Japanese people were estimated by Monte Carlo simulations. Even when the chloroform concentration in drinking water equaled the current DWQS criterion, there was sufficient margin between the intake and the TDI: the probability that the intake exceeded the TDI was below 0.1%. If a criterion that the 95th percentile estimate equals the TDI is regarded as both providing protection to highly exposed persons and leaving a reasonable margin of exposure relative to the TDI, then the chloroform drinking water criterion could be a concentration of 0.11 mg/L. This implies a daily intake equal to 34% of the TDI allocated to the oral intake (2 L/d) of drinking water for typical adults. For the highly exposed persons, inhalation exposure via evaporation from water contributed 53% of the total intake, whereas dermal absorption contributed only 3%. Copyright © 2013 Elsevier Inc. All rights reserved.
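
    The reported allocation can be checked by back-calculation from the figures above; the 50 kg reference body weight used for the per-kilogram step is an assumption of this illustration.

    ```latex
    0.11\ \mathrm{mg/L} \times 2\ \mathrm{L/d} = 0.22\ \mathrm{mg/d},
    \qquad
    \mathrm{TDI} \approx \frac{0.22\ \mathrm{mg/d}}{0.34} \approx 0.65\ \mathrm{mg/d}
    \;\;(\approx 13\ \mu\mathrm{g/kg/d}\ \text{at}\ 50\ \mathrm{kg}).
    ```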

  8. A New Multiaxial High-Cycle Fatigue Criterion Based on the Critical Plane for Ductile and Brittle Materials

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei

    2015-02-01

    An improved high-cycle multiaxial fatigue criterion based on the critical plane is proposed in this paper. The critical plane is defined as the plane of maximum shear stress (MSS) in the proposed multiaxial fatigue criterion, which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion is extended to a fatigue life prediction model that is applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated with experimental results from tests of 7075-T651 aluminum alloy and from the literature.

  9. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis.

    PubMed

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt's psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.

  10. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, the existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption on the noise distribution. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike the existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noise. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
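
    The contrast with MSE is easiest to see in a linear-in-the-basis fit. Below is a minimal sketch of MCC estimation by gradient ascent, where `Phi` would hold the sampled PSWF basis (its construction is outside this sketch) and the kernel width `sigma` is a tuning choice.

    ```python
    import numpy as np

    def mcc_linear_fit(Phi, y, sigma=1.0, lr=0.1, n_iter=2000):
        """Fit y ~ Phi @ w by maximizing the mean correntropy
        (Gaussian-kernel similarity) between y and Phi @ w. Large,
        impulsive residuals are exponentially down-weighted, unlike
        under MSE, where they dominate the fit."""
        w = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            e = y - Phi @ w
            kern = np.exp(-e**2 / (2 * sigma**2))  # per-sample weights
            # gradient ascent on the mean correntropy objective
            w += lr * Phi.T @ (kern * e) / (len(y) * sigma**2)
        return w
    ```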

  11. Cold formability prediction by the modified maximum force criterion with a non-associated Hill48 model accounting for anisotropic hardening

    NASA Astrophysics Data System (ADS)

    Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.

    2016-08-01

    Experimental and numerical investigations on the characterisation and prediction of the cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for plasticity characterisation and forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model achieves a predictive capability for stress and r-value directionality similar to that of advanced non-quadratic associated models. To accurately characterise the evolution of anisotropy during hardening, anisotropic hardening is also calibrated and implemented into the model for the prediction of formability.

  12. Comparing hierarchical models via the marginalized deviance information criterion.

    PubMed

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC), which incorporates the latent variables in the focus of the analysis, and the marginalized DIC (mDIC), which integrates them out. Despite the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
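
    A sketch of the replicate-sampling idea follows; the interfaces (`latent_sampler`, `loglik`) are placeholders invented for this illustration, not an existing package API.

    ```python
    import numpy as np

    def marginal_loglik(y, theta, latent_sampler, loglik, n_rep=500, rng=None):
        """Monte Carlo estimate of log p(y|theta) with the latent b
        integrated out: p(y|theta) = E_{b ~ p(b|theta)}[p(y|b,theta)].
        Uses log-sum-exp for numerical stability."""
        rng = rng or np.random.default_rng()
        ll = np.array([loglik(y, latent_sampler(theta, rng), theta)
                       for _ in range(n_rep)])
        m = ll.max()
        return m + np.log(np.mean(np.exp(ll - m)))

    def mDIC(y, theta_draws, latent_sampler, loglik):
        """Marginalized DIC from MCMC draws of theta:
        mDIC = -2 log p(y|theta_bar) + 2 pD, with
        pD = 2 [log p(y|theta_bar) - mean_t log p(y|theta_t)]."""
        ll_draws = np.array([marginal_loglik(y, th, latent_sampler, loglik)
                             for th in theta_draws])
        ll_bar = marginal_loglik(y, np.mean(theta_draws, axis=0),
                                 latent_sampler, loglik)
        pD = 2.0 * (ll_bar - ll_draws.mean())
        return -2.0 * ll_bar + 2.0 * pD
    ```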

  13. Dynamical criterion for a marginally unstable, quasi-linear behavior in a two-layer model

    NASA Technical Reports Server (NTRS)

    Ebisuzaki, W.

    1988-01-01

    A two-layer quasi-geostrophic flow forced by meridional variations in heating can be in regimes ranging from radiative equilibrium to forced geostrophic turbulence. Between these extremes is a regime where the time-mean (zonal) flow is marginally unstable. Using scaling arguments, it is concluded that such a marginally unstable state should occur when a certain parameter, measuring the strength of wave-wave interactions relative to the beta effect and advection by the thermal wind, is small. Numerical simulations support this proposal. A transition from the marginally unstable regime to a more nonlinear regime is then examined through numerical simulations with different radiative forcings. It is found that the transition is not caused by secondary instability of waves in the marginally unstable regime. Instead, the time-mean flow can support a number of marginally unstable normal modes. These normal modes interact with each other, and if they are of sufficient amplitude, the flow enters a more nonlinear regime.

  14. Tuition Discounting without Tears

    ERIC Educational Resources Information Center

    Martin, Robert E.

    2004-01-01

    This paper contains a policy model for tuition discounting that avoids the major financial pitfalls encountered in the administration of institutional scholarships. The dual objective of maximizing the funded scholarship discount rate and minimizing the unfunded discount rate is explained. Marginal cost pricing is the accepted criterion for…

  15. Cellular and dendritic growth in a binary melt - A marginal stability approach

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1986-01-01

    A simple model for the constrained growth of an array of cells or dendrites in a binary alloy in the presence of an imposed positive temperature gradient in the liquid is proposed, with the dendritic or cell tip radius calculated using the marginal stability criterion of Langer and Muller-Krumbhaar (1977). This approach, an approach adopting the ad hoc assumption of minimum undercooling at the cell or dendrite tip, and an approach based on the stability criterion of Trivedi (1980) all predict tip radii to within 30 percent of each other, and yield a simple relationship between the tip radius and the growth conditions. Good agreement is found between predictions and data obtained in a succinonitrile-acetone system, and under the present experimental conditions, the dendritic tip stability parameter value is found to be twice that obtained previously, possibly due to a transition in morphology from a cellular structure with just a few side branches, to a more fully developed dendritic structure.

  16. Continuum Mechanics at the Atomic Scale.

    DTIC Science & Technology

    1977-01-01

    an infinite hoop stress at the tip of the crack (Figure 9). Because of this singularity a perfectly good criterion of brittle fracture, the maximum... for brittle fracture, we will arrive at the Griffith criterion with the extra benefit that the Griffith constant is now fully determined. As a result... crack tip. From (5.9) it now follows equation (5.10). Alas, this is the Griffith fracture criterion for brittle fracture with

  17. Clinical evaluation of two packable posterior composites: 2-year follow-up.

    PubMed

    Fagundes, T C; Barata, T J E; Bresciani, E; Cefaly, D F G; Jorge, M F F; Navarro, M F L

    2006-09-01

    The clinical performance of two packable posterior composites, Alert (A)-Jeneric/Pentron and SureFil (S)-Dentsply, was evaluated in 33 patients. Each patient received one A and one S restoration, resulting in a total of 66 restorations. The restorations were placed by one operator according to the manufacturers' specifications and were finished and polished after 1 week. Photographs were taken at baseline and after 2 years. Two independent evaluators conducted the clinical evaluation using modified United States Public Health Service criteria. After 2 years, 60 restorations (30 A and 30 S), 27 class I (16 A and 11 S) and 33 class II (14 A and 19 S), were evaluated in 30 patients. Criterion A for recurrent caries, vitality, and retention was applicable to all 60 restorations. Criterion B was distributed among 40 restorations as follows: surface texture (15 A; 2 S), color (5 A; 6 S), postoperative sensitivity (1 S), marginal discoloration (8 A), marginal adaptation (3 A), and wear resistance (2 A). Data were analyzed using the Fisher exact and McNemar tests. After 2 years, S showed a significantly better performance than A with respect to surface texture and marginal discoloration. The clinical performance of both materials was considered acceptable over the 2-year period. Further evaluations are necessary for a more in-depth analysis.

  18. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  19. The free growth criterion for grain initiation in TiB2 inoculated γ-titanium aluminide based alloys

    NASA Astrophysics Data System (ADS)

    Gosslar, D.; Günther, R.

    2014-02-01

    γ-titanium aluminide (γ-TiAl) based alloys enable the design of lightweight and high-temperature-resistant engine components. This work centers on a numerical study of the condition for grain initiation during solidification of TiB2-inoculated γ-TiAl based alloys. Grain initiation is treated according to the so-called free growth criterion; that is, the free growth barrier for grain initiation is determined by the maximum interfacial mean curvature between a nucleus and the melt. The strategy presented in this paper relies on iteratively increasing the volume of a nucleus that partially wets a hexagonal TiB2 crystal, minimizing the interfacial energy, and calculating the corresponding interfacial curvature. The maximum curvature thus obtained yields a scaling relation between the size of TiB2 crystals and the free growth barrier. Comparison with a prototypical TiB2 crystal in an as-cast γ-TiAl based alloy then allowed prediction of the free growth barrier prevailing under experimental conditions. The validity of the free growth criterion is discussed via an interfacial energy criterion.
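
    For the idealized case of a disc-shaped nucleant face of diameter d, the free growth criterion takes the closed form popularized by Greer and co-workers; the paper's contribution is to compute the analogous maximum curvature numerically for faceted hexagonal TiB2.

    ```latex
    \Delta T_{\mathrm{fg}} = \frac{4\,\gamma_{\mathrm{SL}}}{\Delta S_{V}\, d},
    ```

    where γ_SL is the solid-liquid interfacial energy and ΔS_V the entropy of fusion per unit volume, so larger particles initiate grains at smaller undercooling.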

  20. Three-Dimensional Dynamic Rupture in Brittle Solids and the Volumetric Strain Criterion

    NASA Astrophysics Data System (ADS)

    Uenishi, K.; Yamachi, H.

    2017-12-01

    As pointed out by Uenishi (2016 AGU Fall Meeting), the source dynamics of ordinary earthquakes is often studied in the framework of 3D rupture in brittle solids, but our knowledge of the mechanics of actual 3D rupture is limited. Typically, criteria derived from 1D frictional observations of sliding materials or from the post-failure behavior of solids are applied in seismic simulations, and although mode-I cracks are frequently encountered in earthquake-induced ground failures, rupture in tension is in most cases ignored. Even when it is included in analyses, the classical maximum principal tensile stress criterion is repeatedly used. Our recent basic experiments on the dynamic rupture of spherical or cylindrical monolithic brittle solids, using high-voltage electric discharge impulses or impact loads, have indicated the generation of surprisingly simple and often flat rupture surfaces in 3D specimens, even without pre-existing planes of weakness. At the same time, however, snapshots taken by a high-speed digital video camera have shown rather complicated histories of rupture development in these 3D solid materials, which seem difficult to explain by, for example, the maximum principal stress criterion. Instead, a (tensile) volumetric strain criterion, in which the volumetric strain (dilatation, or the first invariant of the strain tensor) is the decisive parameter for rupture, seems more effective in computationally reproducing the multi-directionally propagating waves and rupture. In this study, we show the connection between this volumetric strain criterion and other classical rupture criteria or physical parameters employed in continuum mechanics, and indicate that the criterion has, to some degree, a physical meaning. First, we mathematically illustrate that the criterion is equivalent to a criterion based on the mean normal stress, a crucial parameter in plasticity. Then, we mention the relation between the volumetric strain criterion and the failure envelope of the Mohr-Coulomb criterion, which describes shear-related rupture. The critical value of the volumetric strain for rupture may be controlled by the apparent cohesion and the apparent angle of internal friction of the Mohr-Coulomb criterion.
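
    The equivalence to a mean-normal-stress criterion mentioned above is immediate in linear isotropic elasticity, where the two quantities are proportional:

    ```latex
    \varepsilon_{V} = \varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}
                    = \operatorname{tr}\boldsymbol{\varepsilon},
    \qquad
    \sigma_{m} = \frac{\sigma_{1}+\sigma_{2}+\sigma_{3}}{3} = K\,\varepsilon_{V},
    \qquad
    K = \frac{E}{3(1-2\nu)},
    ```

    so a critical tensile volumetric strain corresponds one-to-one to a critical mean normal stress.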

  2. Destructive examination of shipping package 9975-02644

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daugherty, W. L.

    Destructive and non-destructive examinations have been performed on the components of shipping package 9975-02644 as part of a comprehensive SRS surveillance program for plutonium material stored in the K-Area Complex (KAC). During the field surveillance inspection of this package in KAC, three non-conforming conditions were noted: the axial gap of 1.389 inch exceeded the 1 inch maximum criterion, the exposed height of the lead shield was greater than the 4.65 inch maximum criterion, and the difference between the upper assembly inside height and the exposed height of the lead shield was less than the 0.425 inch minimum criterion. All three of these observations relate to axial shrinkage of the lower fiberboard assembly. In addition, liquid water (condensation) was observed on the interior of the drum lid, the thermal blanket and the air shield.

  3. [Acoustic conditions in open plan offices - Pilot test results].

    PubMed

    Mikulski, Witold

    The main source of noise in open plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptation. This article presents the results of pilot tests and the acoustic evaluation of open plan rooms. Acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterion values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33–0.55 s (maximum permissible value in offices: 0.6 s; the criterion was met), the sound absorption coefficient in relation to 1 m2 of the room's plan was 0.77–1.58 m2 (minimum permissible value: 1.1 m2; 2 out of 6 rooms met the criterion), the distraction distance was 8.5–14 m (maximum permissible value: 5 m; none of the rooms met the criterion), the A-weighted sound pressure level of speech at a distance of 4 m was 43.8–54.7 dB (maximum permissible value: 48 dB; 2 out of 6 rooms met the criterion), and the spatial decay rate of speech was 1.8–6.3 dB (minimum permissible value: 7 dB; none of the rooms met the criterion). Standard acoustic treatment, comprising a sound-absorbing suspended ceiling, sound-absorbing materials on the walls, carpet flooring and sound-absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  4. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  5. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C1 plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented in a general-purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
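
    For illustration, a ply-level maximum strain check of the kind applied at each load step in such an analysis might look as follows; the function and the strain allowables are hypothetical placeholders, not values from the paper:

    ```python
    def max_strain_failure(eps, allow):
        """Return (failed, mode) for one ply under the maximum strain criterion."""
        ratios = {
            "fiber":  eps["e1"] / (allow["e1t"] if eps["e1"] >= 0 else -allow["e1c"]),
            "matrix": eps["e2"] / (allow["e2t"] if eps["e2"] >= 0 else -allow["e2c"]),
            "shear":  abs(eps["g12"]) / allow["g12"],
        }
        mode = max(ratios, key=ratios.get)
        return ratios[mode] >= 1.0, mode

    # Illustrative strain allowables, not material data from the study
    allow = {"e1t": 0.011, "e1c": 0.009, "e2t": 0.005, "e2c": 0.014, "g12": 0.015}
    print(max_strain_failure({"e1": 0.012, "e2": 0.001, "g12": 0.004}, allow))
    # -> (True, 'fiber'): fiber-direction tensile strain exceeds its allowable
    ```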

  6. Fretting Fatigue with Cylindrical-On-Flat Contact: Crack Nucleation, Crack Path and Fatigue Life

    PubMed Central

    Noraphaiphipaksa, Nitikorn; Manonukul, Anchalee; Kanchanomai, Chaosuan

    2017-01-01

    Fretting fatigue experiments and finite element analysis were carried out to investigate the influence of cylindrical-on-flat contact on crack nucleation, crack path and fatigue life of medium-carbon steel. The location of crack nucleation was predicted using the maximum shear stress range criterion and the maximum relative slip amplitude criterion. The prediction using the maximum relative slip amplitude criterion gave better agreement with the experimental results and should be used for predicting the location of crack nucleation. Crack openings under compressive bulk stresses, i.e., fretting-contact-induced crack openings, were found in the fretting fatigue tests with flat-on-flat and cylindrical-on-flat contacts. The crack opening stress of the specimen with flat-on-flat contact was lower than those of the specimens with cylindrical-on-flat contacts, while that of the specimen with the 60-mm radius contact pad was lower than that of the specimen with the 15-mm radius contact pad. The fretting fatigue lives were estimated by integrating the fatigue crack growth curve from an initial propagating crack length to a critical crack length. The predictions of fretting fatigue life with consideration of crack opening were in good agreement with the experimental results. PMID:28772522
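
    The life-estimation step described, integrating a crack growth law between an initial and a critical crack length, can be sketched as below; the Paris constants, stress range and geometry factor are invented stand-ins for the paper's measured growth curve and crack-opening correction:

    ```python
    import numpy as np

    C, m = 1.0e-11, 3.0          # hypothetical Paris-law constants (da/dN in m/cycle)
    dsig = 120e6                 # effective stress range, Pa (illustrative)
    Y = 1.12                     # geometry factor, taken constant for the sketch
    a0, ac = 0.05e-3, 5.0e-3     # initial and critical crack lengths, m

    a = np.linspace(a0, ac, 20000)
    dK = Y * dsig * np.sqrt(np.pi * a)        # stress-intensity factor range
    f = 1.0 / (C * dK ** m)                   # dN/da
    N = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)))  # trapezoidal integral
    print(f"estimated life: {N:.3e} cycles")
    ```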

  7. Entropic criterion for model selection

    NASA Astrophysics Data System (ADS)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
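
    As a toy illustration of the ranking idea (not the inductive-inference derivation itself), candidate models can be ordered by their relative entropy to an empirical distribution; all numbers are invented:

    ```python
    import numpy as np

    def relative_entropy(p, q, eps=1e-12):
        """D(p || q) for discrete distributions, with a small floor for stability."""
        p = np.asarray(p, float) + eps
        q = np.asarray(q, float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    empirical = [0.10, 0.40, 0.35, 0.15]              # observed histogram (made up)
    models = {"uniform": [0.25, 0.25, 0.25, 0.25],
              "fitted":  [0.12, 0.38, 0.33, 0.17]}
    ranking = sorted(models, key=lambda k: relative_entropy(empirical, models[k]))
    print(ranking)   # the model with the smallest relative entropy is preferred
    ```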

  8. [On the problems of the evolutionary optimization of life history. II. To justification of optimization criterion for nonlinear Leslie model].

    PubMed

    Pasekov, V P

    2013-03-01

    The paper considers the problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of an age-structured population. The possibility of predicting adaptation results as the values of an organism's traits (properties) that provide for the maximum of a certain function of traits (an optimization criterion) is studied. An ideal criterion of this type is Darwinian fitness as a characteristic of the success of an individual's life history. Criticism of the optimization approach is associated with the fact that it does not take into account the changes in environmental conditions (in a broad sense) caused by evolution, thereby leading to losses in the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is not usually rigorous. It has been suggested that these objections can be overcome in terms of adaptive dynamics theory using the concept of invasion fitness. Reasons are given that favor the use of the average number of offspring of an individual, R(L), as an optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum R(L) corresponds to evolutionary stability and, in certain cases, convergent stability of the trait values. The search for evolutionarily stable values against the background of limited resources for reproduction is a problem of linear programming.
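
    In the linear (density-independent) limit, R(L) is just the expected lifetime offspring number computed from the survivorship and fertility schedules; a sketch with invented vital rates (in the nonlinear Leslie model these rates would depend on population state):

    ```python
    import numpy as np

    p = np.array([0.8, 0.7, 0.5])        # survival from age class x to x+1 (hypothetical)
    f = np.array([0.0, 1.2, 1.8, 0.9])   # fertilities of age classes 0..3 (hypothetical)

    l = np.concatenate(([1.0], np.cumprod(p)))   # survivorship l_x
    R_L = float(np.sum(f * l))                   # average offspring per individual
    print(f"R(L) = {R_L:.3f}")                   # 2.220 with these rates
    ```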

  9. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  10. Corrosion Fatigue Crack Growth Behavior at Notched Hole in 7075-T6 Under Biaxial and Uniaxial Fatigue with Different Phases

    DTIC Science & Technology

    2015-09-17

    defined by the angle θ = θ* at which the stress component σθθ(θ) takes its maximum value, according to Erdogan and Sih [10] (Figure 2.7). Thus, to find the crack propagation direction according to the Erdogan and Sih criterion, the following equation is used: dσθθ/dθ |θ=θ* = 0 (2.27). For small K_II/K_I this becomes, for the Erdogan and Sih criterion: θ* = -2(K_II/K_I) + 4.6667(K_II/K_I)^3 + ⋯ (2.36), and for the Sih-Cha criterion: θ* = -2(K_II/K_I) + 8.7271(K_II/K_I)^3 + ⋯
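
    The quoted series are small-mixity expansions of the maximum tangential stress solution, which also has a closed form; a sketch comparing the two (the loading values are arbitrary, and sign conventions vary between references):

    ```python
    import numpy as np

    def mts_kink_angle(KI, KII):
        """Kink angle theta* (rad) maximizing the tangential stress, i.e. the root
        of KI*sin(theta) + KII*(3*cos(theta) - 1) = 0."""
        if KII == 0.0:
            return 0.0
        return 2.0 * np.arctan((KI - np.sqrt(KI ** 2 + 8.0 * KII ** 2)) / (4.0 * KII))

    KI, KII = 1.0, 0.1
    exact = mts_kink_angle(KI, KII)
    series = -2.0 * (KII / KI) + 4.6667 * (KII / KI) ** 3   # series quoted in the text
    print(exact, series)   # both about -0.195 rad for this small K_II/K_I ratio
    ```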

  11. Maximum projection designs for computer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan; Gul, Evren; Ba, Shan

    Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than a design criterion that ignores projection properties.

  12. Maximum projection designs for computer experiments

    DOE PAGES

    Joseph, V. Roshan; Gul, Evren; Ba, Shan

    2015-03-18

    Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than a design criterion that ignores projection properties.

  13. A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie

    Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate the parameters of the multivariable copula. The optimal copula model is chosen based on the Bayesian information criterion (BIC) from each copula family. Finally, the best conditional cp-WPRF model is determined by predictive interval (PI)-based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
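
    A condensed sketch of the modeling chain described (GMM marginals, normal scores, copula parameter fitted to them); a Gaussian copula stands in for whichever family the BIC step would actually select, and the ramp data are synthetic:

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    latent = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
    ramp = np.column_stack([np.exp(latent[:, 0]), 2 + latent[:, 1]])  # (magnitude, duration)

    def gmm_cdf(gmm, x):
        """CDF of a fitted 1-D Gaussian mixture evaluated at the points x."""
        w = gmm.weights_
        mu = gmm.means_.ravel()
        sd = np.sqrt(gmm.covariances_).ravel()
        return np.sum(w * stats.norm.cdf((x[:, None] - mu) / sd), axis=1)

    gmms = [GaussianMixture(n_components=2, random_state=0).fit(ramp[:, [j]])
            for j in range(2)]
    U = np.column_stack([gmm_cdf(gmms[j], ramp[:, j]) for j in range(2)])
    Z = stats.norm.ppf(np.clip(U, 1e-6, 1 - 1e-6))   # normal scores
    rho = np.corrcoef(Z.T)[0, 1]                     # Gaussian-copula parameter

    # Conditional quantiles of feature 2 given feature 1 at its median (score 0):
    q = stats.norm.ppf([0.05, 0.50, 0.95])
    print("conditional normal-score quantiles:", rho * 0.0 + np.sqrt(1 - rho ** 2) * q)
    ```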

  14. FDI World Dental Federation - clinical criteria for the evaluation of direct and indirect restorations. Update and clinical examples.

    PubMed

    Hickel, Reinhard; Peschke, Arnd; Tyas, Martin; Mjör, Ivar; Bayne, Stephen; Peters, Mathilde; Hiller, Karl-Anton; Randall, Ross; Vanherle, Guido; Heintze, Siegward D

    2010-08-01

    In 2007, new clinical criteria were approved by the FDI World Dental Federation and simultaneously published in three dental journals. The criteria were categorized into three groups: esthetic parameters (four criteria), functional parameters (six criteria), and biological parameters (six criteria). Each criterion can be expressed with five scores, three for acceptable and two for non-acceptable (one for reparable and one for replacement). The criteria have been used in several clinical studies since 2007, and the resulting experience in their application has led to a requirement to modify some of the criteria and scores. The two major alterations involve staining and approximal contacts. As staining of the margins and staining of the surface have different causes, the two phenomena do not necessarily appear simultaneously. Thus, staining has been differentiated into marginal staining and surface staining. The approximal contact now appears under the name "approximal anatomic form", as the approximal contour is a specific, often non-esthetic issue that cannot be integrated into the criterion "esthetic anatomical form". In 2008, a web-based training and calibration tool called e-calib (www.e-calib.info) was made available. Clinical investigators and other research workers can train and calibrate themselves interactively by assessing clinical cases of posterior restorations, which are presented as high-quality pictures. Currently, about 300 clinical cases are included in the database, which is regularly updated. Training for 8 of the 16 clinical criteria is available in the program: "Surface luster"; "Staining (surface, margins)"; "Color match and translucency"; "Esthetic anatomical form"; "Fracture of material and retention"; "Marginal adaptation"; "Recurrence of caries, erosion, abfraction"; and "Tooth integrity (enamel cracks, tooth fractures)". Typical clinical cases are presented for each of these eight criteria and their corresponding five scores.

  15. Career Performance of Marginally Scholastic Graduates of the Air Force Institute of Technology’s Resident Master’s Degree Programs

    DTIC Science & Technology

    1984-09-01

    between graduate grade point average (GGPA) and various measures of career performance. Most of the research has dealt with graduates of business ... schools and the most frequently measured criterion of career performance is compensation in the form of earnings and salary. Some researchers have found

  16. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  17. Robustness analysis of multirate and periodically time varying systems

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1991-01-01

    A new method for analyzing the stability and robustness of multirate and periodically time varying systems is presented. It is shown that a multirate or periodically time varying system can be transformed into an equivalent time invariant system. For a SISO system, traditional gain and phase margins can be found by direct application of the Nyquist criterion to this equivalent time invariant system. For a MIMO system, structured and unstructured singular values can be used to determine the system's robustness. The limitations and implications of utilizing this equivalent time invariant system for calculating gain and phase margins, and for estimating robustness via singular value analysis are discussed.
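
    Once the multirate loop has been lifted to its time-invariant equivalent, the SISO margins follow from standard machinery; a sketch assuming the python-control package is available, with a placeholder transfer function standing in for the lifted system:

    ```python
    import control  # python-control package (assumed installed)

    # Placeholder loop transfer function: 2 / (s (s + 1) (s + 2))
    L = control.tf([2.0], [1.0, 3.0, 2.0, 0.0])

    gm, pm, wcg, wcp = control.margin(L)
    print(f"gain margin = {gm:.2f} (expected 3.00), phase margin = {pm:.1f} deg (~32.6)")
    ```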

  18. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  19. Optical ages indicate the southwestern margin of the Green Bay Lobe in Wisconsin, USA, was at its maximum extent until about 18,500 years ago

    USGS Publications Warehouse

    Attig, J.W.; Hanson, P.R.; Rawling, J.E.; Young, A.R.; Carson, E.C.

    2011-01-01

    Samples for optical dating were collected to estimate the time of sediment deposition in small ice-marginal lakes in the Baraboo Hills of Wisconsin. These lakes formed high in the Baraboo Hills when drainage was blocked by the Green Bay Lobe when it was at or very near its maximum extent. Therefore, these optical ages provide control for the timing of the thinning and recession of the Green Bay Lobe from its maximum position. Sediment that accumulated in four small ice-marginal lakes was sampled and dated. Difficulties with field sampling and estimating dose rates made the interpretation of optical ages derived from samples from two of the lake basins problematic. Samples from the other two lake basins (South Bluff and Feltz basins) responded well during laboratory analysis and showed reasonably good agreement between the multiple ages produced at each site. These ages averaged 18.2 ka (n = 6) and 18.6 ka (n = 6), respectively. The optical ages from these two lake basins, where we could carefully select sediment samples, provide firm evidence that the Green Bay Lobe stood at or very near its maximum extent until about 18.5 ka. The persistence of ice-marginal lakes in these basins high in the Baraboo Hills indicates that the ice of the Green Bay Lobe had not experienced significant thinning near its margin prior to about 18.5 ka. These ages are the first to directly constrain the timing of the maximum extent of the Green Bay Lobe and the onset of deglaciation in the area for which the Wisconsin Glaciation was named. © 2011 Elsevier B.V.

  20. Greenland ice sheet retreat since the Little Ice Age

    NASA Astrophysics Data System (ADS)

    Beitch, Marci J.

    Late 20th century and 21st century satellite imagery of the perimeter of the Greenland Ice Sheet (GrIS) provides high resolution observations of the ice sheet margins. Examining changes in ice margin positions over time yields measurements of GrIS area change and rates of margin retreat. However, longer records of ice sheet margin change are needed to establish more accurate predictions of the ice sheet's future response to global conditions. In this study, the trimzone, the area of deglaciated terrain along the ice sheet edge that lacks mature vegetation cover, is used as a marker of the maximum extent of the ice from its most recent major advance during the Little Ice Age. We compile recently acquired Landsat ETM+ scenes covering the perimeter of the GrIS, on which we map area loss on land-, lake-, and marine-terminating margins. We measure an area loss of 13,327 +/- 830 km2, which corresponds to 0.8% shrinkage of the ice sheet. This equates to an averaged horizontal retreat of 363 +/- 69 m across the entire GrIS margin. Mapping the areas exposed since the Little Ice Age maximum, circa 1900 C.E., yields a century-scale rate of change. On average the ice sheet lost an area of 120 +/- 16 km2/yr, or retreated at a rate of 3.3 +/- 0.7 m/yr, since the LIA maximum.

  1. Statistical Validation of Surrogate Endpoints: Another Look at the Prentice Criterion and Other Criteria.

    PubMed

    Saraf, Sanatan; Mathew, Thomas; Roy, Anindya

    2015-01-01

    For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
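
    The equivalence formulation amounts to two one-sided tests, which a bootstrap interval can implement; a schematic with synthetic data and an invented margin δ (in practice the margin is the regulatory choice discussed above):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    s = rng.normal(size=200)                         # surrogate endpoint (synthetic)
    t = 0.02 * s + rng.normal(size=200)              # true endpoint, slope near zero

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]                # regression parameter of interest

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, len(s), len(s))        # resample (s, t) pairs
        boots.append(slope(s[idx], t[idx]))
    lo, hi = np.percentile(boots, [5, 95])           # 90% CI <-> level-0.05 TOST

    delta = 0.1                                      # equivalence margin (invented)
    print("validated" if -delta < lo and hi < delta else "not validated")
    ```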

  2. Incremental Criterion Validity of the WJ-III COG Clinical Clusters: Marginal Predictive Effects beyond the General Factor

    ERIC Educational Resources Information Center

    McGill, Ryan J.

    2015-01-01

    The current study examined the incremental validity of the clinical clusters from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-III COG) for predicting scores on the Woodcock-Johnson III Tests of Achievement (WJ-III ACH). All participants were children and adolescents (N = 4,722) drawn from the nationally representative WJ-III…

  3. Varying the valuating function and the presentable bank in computerized adaptive testing.

    PubMed

    Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio

    2011-05-01

    In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
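
    A minimal sketch contrasting the two valuating functions for a 2PL bank (for the 2PL, item information peaks at θ = b, so matching reduces to minimizing |θ̂ - b|); the item parameters are random stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.uniform(0.8, 2.0, 300)    # discriminations (hypothetical bank)
    b = rng.normal(0.0, 1.0, 300)     # difficulties

    def next_item(theta_hat, administered, rule):
        available = np.ones(a.size, bool)
        available[administered] = False
        if rule == "fisher":              # 2PL information: a^2 * P * (1 - P)
            P = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
            score = a ** 2 * P * (1 - P)
        else:                             # matching criterion
            score = -np.abs(theta_hat - b)
        score[~available] = -np.inf
        return int(np.argmax(score))

    print(next_item(0.3, [], "fisher"), next_item(0.3, [], "matching"))
    ```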

  4. Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.

    NASA Astrophysics Data System (ADS)

    Velichkin, Vladimir A.; Zavyalov, Vladimir A.

    2018-03-01

    This article presents the results of an analysis of the control of thermal objects (heat exchangers, dryers, heat treatment chambers, etc.). The results were used to determine a mathematical model of a generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, the control object's mathematical model and the technological constraints. The “maximum energy efficiency” criterion helped avoid solving a system of nonlinear differential equations and allowed the formulated mathematical programming problem to be solved analytically. It should be noted that in the case under review the search for the optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.

  5. Impact of abutment rotation and angulation on marginal fit: theoretical considerations.

    PubMed

    Semper, Wiebke; Kraft, Silvan; Mehrhof, Jurgen; Nelson, Katja

    2010-01-01

    Rotational freedom of various implant positional index designs has been previously calculated. To investigate its clinical relevance, a three-dimensional simulation was performed to demonstrate the influence of rotational displacements of the abutment on the marginal fit of prosthetic superstructures. Idealized abutments with different angulations (0, 5, 10, 15, and 20 degrees) were virtually constructed (SolidWorks Office Premium 2007). Then, rotational displacement was simulated with various degrees of rotational freedom (0.7, 0.95, 1.5, 1.65, and 1.85 degrees). The resulting horizontal displacement of the abutment from the original position was quantified in microns, followed by a simulated pressure-less positioning of superstructures with defined internal gaps (5 µm, 60 µm, and 100 µm). The resulting marginal gap between the abutment and the superstructure was measured vertically with the SolidWorks measurement tool. Rotation resulted in a displacement of the abutment of up to 157 µm at maximum rotation and angulation. Interference of a superstructure with a defined internal gap of 5 µm placed on the abutment resulted in marginal gaps up to 2.33 mm at maximum rotation and angulation; with a 60-µm internal gap, the marginal gaps reached a maximum of 802 µm. Simulation using a superstructure with an internal gap of 100 µm revealed a marginal gap of 162 µm at abutment angulation of 20 degrees and rotation of 1.85 degrees. The marginal gaps increased with the degree of abutment angulation and the extent of rotational freedom. Rotational displacement of the abutment influenced prosthesis misfit. The marginal gaps between the abutment and the superstructure increased with the rotational freedom of the index and the angulation of the abutment.

  6. Physical employment standards for U.K. fire and rescue service personnel.

    PubMed

    Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L

    2016-01-01

    Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. The aims were to: (i) develop criterion tests for in-service physical assessment, which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel; (ii) develop practical physical selection tests for FRS applicants; and (iii) evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Shape selection criterion for cellular array during constrained growth of binary alloys - Need for low gravity experiment

    NASA Technical Reports Server (NTRS)

    Tewari, Surendra N.; Trivedi, Rohit

    1991-01-01

    Development of a steady-state periodic cellular array is one of the critical problems in the study of nonlinear pattern formation during directional solidification of binary alloys. The criterion that establishes the values of cell tip radius and spacing under given growth conditions is not known. Theoretical models, such as marginal stability and microscopic solvability, have been developed for the purely diffusive regime. However, the experimental conditions under which cellular structures are stable are precisely the ones where convection effects are predominant. Thus, the critical data for meaningful evaluation of cellular array growth models can only be obtained by partial directional solidification and quenching experiments carried out in the low gravity environment of space.

  8. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods for solving state estimation problems, and both can obtain good performance under Gaussian noise. However, their performance often degrades significantly in the face of non-Gaussian noise, particularly when the measurements are contaminated by heavy-tailed impulsive noise. By utilizing the maximum correntropy criterion (MCC) to improve robustness instead of the traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noise. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated system, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
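
    The core idea is to replace the quadratic innovation penalty with a Gaussian-kernel (correntropy) score, which automatically downweights impulsive outliers; a scalar toy sketch of that reweighting, not the full MCSCKF:

    ```python
    import numpy as np

    def mcc_weight(residual, sigma=2.0):
        """Correntropy-induced weight: ~1 for small residuals, -> 0 for outliers."""
        return np.exp(-residual ** 2 / (2.0 * sigma ** 2))

    def robust_update(x, P, z, R, sigma=2.0):
        """Scalar state update with an identity measurement model (illustrative)."""
        v = z - x                          # innovation
        w = mcc_weight(v, sigma)
        K = P / (P + R / max(w, 1e-8))     # a small weight inflates the effective R
        return x + K * v, (1 - K) * P

    print(robust_update(0.0, 1.0, 0.5, 0.1))    # nominal measurement: large correction
    print(robust_update(0.0, 1.0, 25.0, 0.1))   # impulsive outlier: correction ~ 0
    ```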

  9. A passivity criterion for sampled-data bilateral teleoperation systems.

    PubMed

    Jazayeri, Ali; Tavakoli, Mahdi

    2013-01-01

    A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for when position error-based controllers are implemented in discrete time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the damping of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.

  10. Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion

    NASA Astrophysics Data System (ADS)

    Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan

    2016-08-01

    The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, since it depends on the minimum of the sum of squares error (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion of gain estimate from SSE to the maximum correntropy. The result of a test on simulated data demonstrates that our method has a higher accuracy and a faster convergence than KLL’s and Chae’s. It has been found that the method effectively suppresses noise, especially in the case of typical non-Gaussian noise. And the computing time of our algorithm is the shortest.

  11. The maximum vector-angular margin classifier and its fast training on large datasets using a core vector machine.

    PubMed

    Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong

    2012-03-01

    Although pattern classification has been extensively studied in the past decades, how to effectively solve the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as the corresponding quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n^2) (or even up to O(n^3)) computational complexity, where n is the number of training patterns, which heavily limits the applicability of these methods to large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, such that all the testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as the kernelized Minimum Enclosing Ball (MEB), which leads to a distinctive merit of MAMC, i.e., it has the flexibility of controlling the sum of support vectors like v-SVC and may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC, such that fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
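
    The MEB connection is what buys the fast training: a minimum enclosing ball admits simple core-set style iterations. A sketch of the Badoiu-Clarkson update in plain Euclidean space (the kernelized version replaces distances with kernel evaluations):

    ```python
    import numpy as np

    def minimum_enclosing_ball(X, iters=2000):
        """Frank-Wolfe / Badoiu-Clarkson iteration: step toward the furthest point."""
        c = X.mean(axis=0)
        for k in range(1, iters + 1):
            far = np.argmax(np.linalg.norm(X - c, axis=1))
            c = c + (X[far] - c) / (k + 1)
        return c, np.linalg.norm(X - c, axis=1).max()

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 2))
    center, radius = minimum_enclosing_ball(X)
    print(center, radius)
    ```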

  12. 76 FR 36613 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... IACS unified interpretations. --Development of amendments to the criterion for maximum angle of heel in... request reasonable accommodation, those who plan to attend should contact the meeting coordinator, LCDR...

  13. 77 FR 70525 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... unified interpretations --Development of amendments to the criterion for maximum angle of heel in turns of... accommodation, those who plan to attend should contact the meeting coordinator, LCDR Catherine Phillips, by...

  14. Validity and extension of the SCS-CN method for computing infiltration and rainfall-excess rates

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-12-01

    A criterion is developed for determining the validity of the Soil Conservation Service curve number (SCS-CN) method. According to this criterion, the existing SCS-CN method is found to be applicable when the potential maximum retention, S, is less than or equal to twice the total rainfall amount. The criterion is tested using published data of two watersheds. Separating the steady infiltration from capillary infiltration, the method is extended for predicting infiltration and rainfall-excess rates. The extended SCS-CN method is tested using 55 sets of laboratory infiltration data on soils varying from Plainfield sand to Yolo light clay, and the computed and observed infiltration and rainfall-excess rates are found to be in good agreement.
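
    A sketch of the standard SCS-CN computation with the validity check proposed here (S ≤ 2P); depths in mm, and the curve number is illustrative:

    ```python
    def scs_cn_runoff(P, CN, lam=0.2):
        """Rainfall-excess depth Q (mm) from rainfall P (mm) and curve number CN."""
        S = 25400.0 / CN - 254.0          # potential maximum retention, mm
        if S > 2.0 * P:
            print("warning: S > 2P, outside the method's validity range")
        Ia = lam * S                      # initial abstraction
        return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

    print(scs_cn_runoff(P=80.0, CN=75))   # S = 84.7 mm <= 2P, Q ~ 26.9 mm
    ```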

  15. Optimal low symmetric dissipation Carnot engines and refrigerators

    NASA Astrophysics Data System (ADS)

    de Tomás, C.; Hernández, A. Calvo; Roco, J. M. M.

    2012-01-01

    A unified optimization criterion for Carnot engines and refrigerators is proposed. It consists of maximizing the product of the heat absorbed by the working system times the efficiency per unit time of the device, either the engine or the refrigerator. This criterion can be applied to both low symmetric dissipation Carnot engines and refrigerators. For engines the criterion coincides with the maximum power criterion and the Curzon-Ahlborn efficiency ηCA = 1 - (Tc/Th)^(1/2) is recovered, where Th and Tc are the temperatures of the hot and cold reservoirs, respectively [Esposito, Kawai, Lindenberg, and Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. For refrigerators the criterion provides the counterpart of the Curzon-Ahlborn efficiency for refrigerators, εCA = [1/(1 - Tc/Th)]^(1/2) - 1, first derived by Yan and Chen for the particular case of an endoreversible Carnot-type refrigerator with linear (Newtonian) finite heat transfer laws [Yan and Chen, J. Phys. D: Appl. Phys. 23, 136 (1990)].
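
    A quick numeric check of the two expressions (the reservoir temperatures are arbitrary):

    ```python
    Th, Tc = 500.0, 300.0
    eta_CA = 1 - (Tc / Th) ** 0.5             # Curzon-Ahlborn engine efficiency
    eps_CA = (1 / (1 - Tc / Th)) ** 0.5 - 1   # refrigerator counterpart (Yan-Chen)
    eps_C = Tc / (Th - Tc)                    # Carnot COP for comparison
    print(eta_CA, eps_CA, eps_C)              # ~0.225, ~0.581, 1.5
    ```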

  16. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.

  17. Modelling wave-induced sea ice break-up in the marginal ice zone

    NASA Astrophysics Data System (ADS)

    Montiel, F.; Squire, V. A.

    2017-10-01

    A model of ice floe break-up under ocean wave forcing in the marginal ice zone (MIZ) is proposed to investigate how floe size distribution (FSD) evolves under repeated wave break-up events. A three-dimensional linear model of ocean wave scattering by a finite array of compliant circular ice floes is coupled to a flexural failure model, which breaks a floe into two floes provided the two-dimensional stress field satisfies a break-up criterion. A closed-feedback loop algorithm is devised, which (i) solves the wave-scattering problem for a given FSD under time-harmonic plane wave forcing, (ii) computes the stress field in all the floes, (iii) fractures the floes satisfying the break-up criterion, and (iv) generates an updated FSD, initializing the geometry for the next iteration of the loop. The FSD after 50 break-up events is unimodal and near normal, or bimodal, suggesting waves alone do not govern the power law observed in some field studies. Multiple scattering is found to enhance break-up for long waves and thin ice, but to reduce break-up for short waves and thick ice. A break-up front marches forward in the latter regime, as wave-induced fracture weakens the ice cover, allowing waves to travel deeper into the MIZ.

  18. Modelling wave-induced sea ice break-up in the marginal ice zone

    PubMed Central

    Squire, V. A.

    2017-01-01

    A model of ice floe break-up under ocean wave forcing in the marginal ice zone (MIZ) is proposed to investigate how floe size distribution (FSD) evolves under repeated wave break-up events. A three-dimensional linear model of ocean wave scattering by a finite array of compliant circular ice floes is coupled to a flexural failure model, which breaks a floe into two floes provided the two-dimensional stress field satisfies a break-up criterion. A closed-feedback loop algorithm is devised, which (i) solves the wave-scattering problem for a given FSD under time-harmonic plane wave forcing, (ii) computes the stress field in all the floes, (iii) fractures the floes satisfying the break-up criterion, and (iv) generates an updated FSD, initializing the geometry for the next iteration of the loop. The FSD after 50 break-up events is unimodal and near normal, or bimodal, suggesting waves alone do not govern the power law observed in some field studies. Multiple scattering is found to enhance break-up for long waves and thin ice, but to reduce break-up for short waves and thick ice. A break-up front marches forward in the latter regime, as wave-induced fracture weakens the ice cover, allowing waves to travel deeper into the MIZ. PMID:29118659

  19. Modelling wave-induced sea ice break-up in the marginal ice zone.

    PubMed

    Montiel, F; Squire, V A

    2017-10-01

    A model of ice floe break-up under ocean wave forcing in the marginal ice zone (MIZ) is proposed to investigate how floe size distribution (FSD) evolves under repeated wave break-up events. A three-dimensional linear model of ocean wave scattering by a finite array of compliant circular ice floes is coupled to a flexural failure model, which breaks a floe into two floes provided the two-dimensional stress field satisfies a break-up criterion. A closed-feedback loop algorithm is devised, which (i) solves the wave-scattering problem for a given FSD under time-harmonic plane wave forcing, (ii) computes the stress field in all the floes, (iii) fractures the floes satisfying the break-up criterion, and (iv) generates an updated FSD, initializing the geometry for the next iteration of the loop. The FSD after 50 break-up events is unimodal and near normal, or bimodal, suggesting waves alone do not govern the power law observed in some field studies. Multiple scattering is found to enhance break-up for long waves and thin ice, but to reduce break-up for short waves and thick ice. A break-up front marches forward in the latter regime, as wave-induced fracture weakens the ice cover, allowing waves to travel deeper into the MIZ.

  20. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  1. 46 CFR 170.173 - Criterion for vessels of unusual proportion and form.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the maximum righting arm occurs at an angle of heel less than or equal to 30 degrees; or (2) Paragraph (b) of this section if the maximum righting arm occurs at an angle of heel greater than 30 degrees...); (2) A righting arm (GZ) of at least 0.66 feet (0.20 meters) at an angle of heel equal to or greater...

  2. 77 FR 22057 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... interpretations --Development of amendments to the criterion for maximum angle of heel in turns of the 2008 IS... to request reasonable accommodation, those who plan to attend should contact the meeting coordinator...

  3. 76 FR 70529 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    ... criterion for maximum angle of heel in turns of the 2008 IS Code; Development of amendments to SOLAS... plan to attend should contact the meeting coordinator, LCDR Catherine Phillips, by email at Catherine.A...

  4. Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A study area near Ribeirao Preto in Sao Paulo state, with a predominance of sugar cane, was selected. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites from which the necessary parameters were acquired. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and by computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
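
    For two Gaussian classes, the JM distance used in the study follows from the Bhattacharyya distance; a sketch with invented class statistics:

    ```python
    import numpy as np

    def jm_distance(m1, C1, m2, C2):
        """Jeffries-Matusita distance between two Gaussian class models."""
        Cm = 0.5 * (C1 + C2)
        dm = (m1 - m2).reshape(-1, 1)
        B = 0.125 * (dm.T @ np.linalg.inv(Cm) @ dm).item() \
            + 0.5 * np.log(np.linalg.det(Cm)
                           / np.sqrt(np.linalg.det(C1) * np.linalg.det(C2)))
        return np.sqrt(2.0 * (1.0 - np.exp(-B)))   # in [0, sqrt(2)]

    m1, m2 = np.array([30.0, 40.0]), np.array([35.0, 52.0])
    C1 = np.array([[9.0, 2.0], [2.0, 16.0]])
    C2 = np.array([[12.0, 1.0], [1.0, 10.0]])
    print(jm_distance(m1, C1, m2, C2))
    ```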

  5. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause of the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  6. Blunt Criterion trauma model for head and chest injury risk assessment of cal. 380 R and cal. 22 long blank cartridge actuated gundog retrieval devices.

    PubMed

    Frank, Matthias; Bockholdt, Britta; Peters, Dieter; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Hinz, Peter

    2011-05-20

    Blunt ballistic impact trauma is a current research topic due to the widespread use of kinetic energy munitions in law enforcement. In the civilian setting, an automatic dummy launcher has recently been identified as a source of blunt impact trauma. However, there are no data on the injury risk of conventional dummy launchers. It is the aim of this investigation to predict potential impact injury to the human head and chest on the basis of the Blunt Criterion, an energy-based blunt trauma model used to assess vulnerability to blunt weapons, projectile impacts, and behind-armor exposures. Based on experimentally investigated kinetic parameters, the injury risk of two commercially available gundog retrieval devices (Waidwerk Telebock, Germany; Turner Richards, United Kingdom) was assessed using the Blunt Criterion trauma model for blunt ballistic impact trauma to the head and chest. Assessing chest impact, the Blunt Criterion values for both shooting devices were higher than the critical Blunt Criterion value of 0.37, which represents a 50% risk of sustaining a thoracic skeletal injury of AIS 2 (moderate injury) or AIS 3 (serious injury). The maximum Blunt Criterion value (1.106) was higher than the Blunt Criterion value corresponding to AIS 4 (severe injury). With regard to the impact injury risk to the head, both devices surpass by far the critical Blunt Criterion value of 1.61, which represents a 50% risk of skull fracture. The highest Blunt Criterion value was measured for the Turner Richards launcher (2.884), corresponding to a risk of skull fracture higher than 80%. Even though the classification as non-guns by legal authorities might imply harmlessness, the Blunt Criterion trauma model illustrates the hazardous potential of these shooting devices. The Blunt Criterion trauma model links the laboratory findings to the impact injury patterns of the head and chest that might be expected. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
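
    The Blunt Criterion is commonly written BC = ln[E / (W^(1/3) T D)]; a sketch using that form, with launcher and subject parameters that are purely illustrative rather than the measured values of this study:

    ```python
    import math

    def blunt_criterion(E, W, T, D):
        """BC = ln(E / (W**(1/3) * T * D)); E impact energy in J, W subject mass
        in kg, T body-wall thickness in cm, D impactor diameter in cm."""
        return math.log(E / (W ** (1.0 / 3.0) * T * D))

    # Illustrative only: a 30 g dummy launched at 60 m/s toward a 75 kg subject
    E = 0.5 * 0.030 * 60.0 ** 2                       # kinetic energy, ~54 J
    print(blunt_criterion(E, W=75.0, T=2.0, D=2.5))   # compare to ~0.37 (chest), ~1.61 (head)
    ```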

  7. Comparative assessment of marginal accuracy of grade II titanium and Ni–Cr alloy before and after ceramic firing: An in vitro study

    PubMed Central

    Patil, Abhijit; Singh, Kishan; Sahoo, Sukant; Suvarna, Suraj; Kumar, Prince; Singh, Anupam

    2013-01-01

    Objective: The aims of the study are to assess the marginal accuracy of base metal and titanium alloy casting and to evaluate the effect of repeated ceramic firing on the marginal accuracy of base metal and titanium alloy castings. Materials and Methods: Twenty metal copings were fabricated with each casting material. Specimens were divided into 4 groups of 10 each representing base metal alloys castings without (Group A) and with metal shoulder margin (Group B), titanium castings without (Group C) and with metal shoulder margin (Group D). The measurement of fit of the metal copings was carried out before the ceramic firing at four different points and the same was followed after porcelain build-up. Results: Significant difference was found when Ni–Cr alloy samples were compared with Grade II titanium samples both before and after ceramic firings. The titanium castings with metal shoulder margin showed highest microgap among all the materials tested. Conclusions: Based on the results that were found and within the limitations of the study design, it can be concluded that there is marginal discrepancy in the copings made from Ni–Cr and Grade II titanium. This marginal discrepancy increased after ceramic firing cycles for both Ni–Cr and Grade II titanium. The comparative statistical analysis for copings with metal-collar showed maximum discrepancy for Group D. The comparative statistical analysis for copings without metal-collar showed maximum discrepancy for Group C. PMID:24926205

  8. SU-F-J-25: Position Monitoring for Intracranial SRS Using BrainLAB ExacTrac Snap Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, S; McCaw, T; Huq, M

    2016-06-15

    Purpose: To determine the accuracy of position monitoring with BrainLAB ExacTrac snap verification following couch rotations during intracranial SRS. Methods: A CT scan of an anthropomorphic head phantom was acquired using 1.25mm slices. The isocenter was positioned near the centroid of the frontal lobe. The head phantom was initially aligned on the treatment couch using cone-beam CT, then repositioned using ExacTrac x-ray verification with residual errors less than 0.2mm and 0.2°. Snap verification was performed over the full range of couch angles in 15° increments with known positioning offsets of 0–3mm applied to the phantom along each axis. At each couch angle, the smallest tolerance was determined for which no positioning deviation was detected. Results: For couch angles 30°–60° from the center position, where the longitudinal axis of the phantom is approximately aligned with the beam axis of one x-ray tube, snap verification consistently detected positioning errors exceeding the maximum 8mm tolerance. Defining localization error as the difference between the known offset and the minimum tolerance for which no deviation was detected, the RMS error is mostly less than 1mm outside of couch angles 30°–60° from the central couch position. Given separate measurements of patient position from the two imagers, whether to proceed with treatment can be determined by the criterion of a reading within tolerance from just one (OR criterion) or both (AND criterion) imagers. Using a positioning tolerance of 1.5mm, snap verification has sensitivity and specificity of 94% and 75%, respectively, with the AND criterion, and 67% and 93%, respectively, with the OR criterion. If readings exceeding maximum tolerance are excluded, the sensitivity and specificity are 88% and 86%, respectively, with the AND criterion. Conclusion: With a positioning tolerance of 1.5mm, ExacTrac snap verification can be used during intracranial SRS with sensitivity and specificity between 85% and 90%.

  9. Risk-based Strategy to Determine Testing Requirement for the Removal of Residual Process Reagents as Process-related Impurities in Bioprocesses.

    PubMed

    Qiu, Jinshu; Li, Kim; Miller, Karen; Raghani, Anil

    2015-01-01

    The purpose of this article is to recommend a risk-based strategy for determining the clearance testing requirements for the process reagents used in manufacturing biopharmaceutical products. The strategy takes account of four risk factors. Firstly, the process reagents are classified into two categories according to their safety profile and history of use: generally recognized as safe (GRAS) and potential safety concern (PSC) reagents. The clearance testing of GRAS reagents can be eliminated because of their historically safe use and the process capability to remove these reagents. An estimated safety margin (Se) value, the ratio of the exposure limit to the estimated maximum reagent amount, is then used to evaluate the necessity of testing the PSC reagents at an early development stage. The Se value is calculated from two risk factors: the starting PSC reagent amount per maximum product dose (Me) and the exposure limit (Le). A worst-case scenario, as is common, is assumed to estimate the Me value: the PSC reagent of interest is co-purified with the product and no clearance occurs throughout the entire purification process. No clearance testing is required for a PSC reagent if its Se value is ≥1; otherwise clearance testing is needed. Finally, the point at which the process reagent is introduced into the process is also considered in determining the necessity of clearance testing. How to use the measured safety margin as a criterion for determining PSC reagent testing at the process characterization, process validation, and commercial production stages is also described. A large number of process reagents are used in biopharmaceutical manufacturing to control process performance. Clearance testing for all of the process reagents would be an enormous analytical task. In this article, a risk-based strategy is described to eliminate unnecessary clearance testing for the majority of the process reagents using four risk factors. The risk factors included in the strategy are (i) the safety profile of the reagents, (ii) the starting amount of the process reagents used in the manufacturing process, (iii) the maximum dose of the product, and (iv) the point of introduction of the process reagents in the process. The implementation of the risk-based strategy can eliminate clearance testing for approximately 90% of the process reagents used in the manufacturing processes. This science-based strategy allows us to ensure patient safety and meet regulatory agency expectations throughout the product development life cycle. © PDA, Inc. 2015.
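
    The screening step reduces to a ratio test; a sketch with hypothetical reagent numbers:

    ```python
    def needs_clearance_testing(Le, Me):
        """Se = Le / Me; Le = exposure limit per dose, Me = worst-case reagent
        amount per maximum product dose (same units). Test only if Se < 1."""
        Se = Le / Me
        return Se < 1.0, Se

    # Hypothetical PSC reagent: limit 50 ug/dose, worst-case carryover 5 ug/dose
    test, Se = needs_clearance_testing(Le=50.0, Me=5.0)
    print(f"Se = {Se:.1f} -> {'testing required' if test else 'no testing needed'}")
    ```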

  10. Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey

    1995-01-01

    An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.

  11. Clinical Effectiveness of a Resin-modified Glass Ionomer Cement and a Mild One-step Self-etch Adhesive Applied Actively and Passively in Noncarious Cervical Lesions: An 18-Month Clinical Trial.

    PubMed

    Jassal, M; Mittal, S; Tewari, S

    2018-05-21

    To evaluate the clinical effectiveness of two methods of application of a mild one-step self-etch adhesive and composite resin as compared with a resin-modified glass ionomer cement (RMGIC) control restoration in noncarious cervical lesions (NCCLs). A total of 294 restorations were placed in 56 patients, 98 in each of the following groups: 1) G-Bond active application combined with Solare-X composite resin (A-1SEA), 2) G-Bond passive application combined with Solare-X composite resin (P-1SEA), and 3) GC II LC RMGIC. The restorations were evaluated at baseline and after six, 12, and 18 months according to the FDI criteria for fractures/retention, marginal adaptation, marginal staining, postoperative sensitivity, and secondary caries. Cumulative failure rates were calculated for each criterion at each recall period. The effects of adhesive, method of application, and recall period were assessed. The Kruskal-Wallis test for intergroup comparison and the Friedman and Wilcoxon signed ranks tests for intragroup comparison were used for each criterion (α = 0.05). The retention rates at 18 months were 93.26% for the A-1SEA group, 86.21% for the P-1SEA group, and 90.91% for the RMGIC group. Active application improved the retention rates compared with passive application of the mild one-step self-etch adhesive; however, no statistically significant difference was observed between the groups. Marginal staining was observed in 13 restorations (1 in A-1SEA, 4 in P-1SEA, and 8 in RMGIC) with no significant difference between the groups. The RMGIC group showed a significant increase in marginal staining at 12 and 18 months from baseline. There was no significant difference between the groups for marginal adaptation, secondary caries, or postoperative sensitivity. Within the limitations of the study, we can conclude that a mild one-step self-etch adhesive followed by a resin composite restoration can be an alternative to RMGIC, with similar retention and improved esthetics, in the restoration of NCCLs. Agitation could possibly benefit the clinical performance of mild one-step self-etch adhesives, but this study did not confirm that the observed benefit was statistically significant.

  12. Probabilistic margin evaluation on accidental transients for the ASTRID reactor project

    NASA Astrophysics Data System (ADS)

    Marquès, Michel

    2014-06-01

    ASTRID is a technological demonstrator of the Sodium-cooled Fast Reactor (SFR) under development. The conceptual design studies are being conducted in accordance with the Generation IV reactor objectives, particularly in terms of improving safety. For the hypothetical events belonging to the accidental category of "severe accident prevention situations", which have a very low frequency of occurrence, the safety demonstration is no longer based on a deterministic demonstration with conservative assumptions on models and parameters but on a "Best-Estimate Plus Uncertainty" (BEPU) approach. This BEPU approach is presented in this paper for an Unprotected Loss-of-Flow (ULOF) event. The Best-Estimate (BE) analysis of this ULOF transient is performed with the CATHARE2 code, which is the French reference system code for SFR applications. The objective of the BEPU analysis is twofold: first, to evaluate the safety margin to sodium boiling taking into account the uncertainties on the input parameters of the CATHARE2 code (twenty-two uncertain input parameters have been identified, which can be classified into five groups: reactor power, accident management, pump characteristics, reactivity coefficients, and thermal parameters and head losses); secondly, to quantify the contribution of each input uncertainty to the overall uncertainty of the safety margin, in order to refocus R&D efforts on the most influential factors. This paper focuses on the methodological aspects of the evaluation of the safety margin. At least for the preliminary phase of the project (conceptual design), a probabilistic criterion has been fixed in the context of this BEPU analysis; this criterion is the value of the margin to sodium boiling which has a 95% probability of being exceeded, obtained with a confidence level of 95% (i.e., the M5,95 percentile of the margin distribution). This paper presents two methods used to assess this percentile, the Wilks method and the bootstrap method, and compares their effectiveness on the basis of 500 simulations performed with the CATHARE2 code. We conclude that, with only 100 simulations performed with the CATHARE2 code, which is a workable number of simulations in the conceptual design phase of the ASTRID project where the models and hypotheses are often modified, it is best to use the bootstrap method to evaluate the M5,95 percentile of the margin to sodium boiling, which will provide a slightly conservative result. On the other hand, in order to obtain an accurate estimation of the M5,95 percentile, for the safety report for example, it will be necessary to perform at least 300 simulations with the CATHARE2 code. In this case, both methods (Wilks and bootstrap) would give equivalent results.
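
    The two percentile estimators can be illustrated compactly. The sketch below, with synthetic margins standing in for CATHARE2 outputs, computes the first-order, one-sided Wilks sample size (n = 59 for a 95/95 bound, so the sample minimum serves as the estimate) and a bootstrap lower bound on the 5th percentile; the distribution parameters are arbitrary assumptions.

        import math
        import numpy as np

        def wilks_sample_size(coverage=0.95, confidence=0.95):
            """Smallest n so that min(sample) bounds the 5th percentile (1st order, one-sided)."""
            return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

        rng = np.random.default_rng(0)
        margins = rng.normal(50.0, 8.0, size=100)   # stand-in for 100 code runs (K)

        # Wilks: with n = 59 runs, the sample minimum is a 95/95 lower bound.
        print("Wilks n:", wilks_sample_size())       # -> 59

        # Bootstrap: resample the runs, take the 5th percentile of each resample,
        # then a conservative lower quantile of those bootstrap estimates.
        boot = [np.percentile(rng.choice(margins, size=margins.size, replace=True), 5)
                for _ in range(5000)]
        print("bootstrap M5,95 estimate:", np.percentile(boot, 5))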

  13. The Upper Cretaceous snake Dinilysia patagonica Smith-Woodward, 1901, and the crista circumfenestralis of snakes.

    PubMed

    Palci, Alessandro; Caldwell, Michael W

    2014-10-01

    Studies on the phylogenetic relationships of snakes and lizards are plagued by problematic characterizations of anatomy that are then used to define characters and states in taxon-character matrices. State assignments and character descriptions must be clear characterizations of observable anatomy and topological relationships if homologies are to be hypothesized. A supposed homology among snakes, not observed in lizards, is the presence of a crista circumfenestralis (CCF), a system of bony crests surrounding the fenestra ovalis and the lateral aperture of the recessus scalae tympani. We note that there are some fossil and extant snakes that lack a CCF, and some extant lizards that possess a morphological equivalent. The phylogenetically important Upper Cretaceous fossil snake Dinilysia patagonica has been interpreted by different authors as either having or lacking a CCF. These conflicting results for Dinilysia were tested by re-examining the morphology of the otic region in a large sample of snakes and lizards. An unambiguous criterion arising from the test of topology is used to define the presence of a CCF: the enclosure of the ventral margin of the juxtastapedial recess by flanges of the otoccipital (crista tuberalis and crista interfenestralis) that extend forward to contact the posterior margin of the prootic. According to this criterion, D. patagonica does not possess a CCF; therefore, this anatomical feature must have arisen later during the evolution of snakes. Copyright © 2014 Wiley Periodicals, Inc.

  14. Bioeconomic Sustainability of Cellulosic Biofuel Production on Marginal Lands

    ERIC Educational Resources Information Center

    Gutierrez, Andrew Paul; Ponti, Luigi

    2009-01-01

    The use of marginal land (ML) for lignocellulosic biofuel production is examined for system stability, resilience, and eco-social sustainability. A North American prairie grass system and its industrialization for maximum biomass production using biotechnology and agro-technical inputs is the focus of the analysis. Demographic models of ML biomass…

  15. Maximum magnitude of injection-induced earthquakes: A criterion to assess the influence of pressure migration along faults

    NASA Astrophysics Data System (ADS)

    Norbeck, Jack H.; Horne, Roland N.

    2018-05-01

    The maximum expected earthquake magnitude is an important parameter in seismic hazard and risk analysis because of its strong influence on ground motion. In the context of injection-induced seismicity, the processes that control how large an earthquake will grow may be influenced by operational factors under engineering control as well as natural tectonic factors. Determining the relative influence of these effects on maximum magnitude will impact the design and implementation of induced seismicity management strategies. In this work, we apply a numerical model that considers the coupled interactions of fluid flow in faulted porous media and quasidynamic elasticity to investigate the earthquake nucleation, rupture, and arrest processes for cases of induced seismicity. We find that under certain conditions, earthquake ruptures are confined to a pressurized region along the fault with a length-scale that is set by injection operations. However, earthquakes are sometimes able to propagate as sustained ruptures outside of the zone that experienced a pressure perturbation. We propose a faulting criterion that depends primarily on the state of stress and the earthquake stress drop to characterize the transition between pressure-constrained and runaway rupture behavior.

  16. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) computes the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximization of the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
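
    The MAP criterion with a Gaussian prior on the weights reduces to an L2-penalized cross-entropy. The sketch below shows that objective on a toy linear predictor standing in for the RNN; the shapes, data, and alpha value are placeholder assumptions, not the paper's setup.

        import numpy as np

        def softmax(z):
            z = z - z.max(axis=-1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=-1, keepdims=True)

        def map_loss(W, x, y_onehot, alpha):
            """Regularized cross-entropy: CE(W) + (alpha/2)*||W||^2 (Gaussian prior => L2)."""
            p = softmax(x @ W)
            ce = -np.sum(y_onehot * np.log(p + 1e-12))
            return ce + 0.5 * alpha * np.sum(W ** 2)

        rng = np.random.default_rng(4)
        x = rng.normal(size=(32, 10))                   # 32 toy "histories", 10 features
        y = np.eye(50)[rng.integers(0, 50, 32)]         # 50-word toy vocabulary, one-hot
        W = rng.normal(scale=0.1, size=(10, 50))
        print(map_loss(W, x, y, alpha=0.5))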

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turley, Jessica; Claridge Mackonis, Elizabeth

    To evaluate in-field megavoltage (MV) imaging of simultaneously integrated boost (SIB) breast fields to determine its feasibility for treatment verification of the SIB breast radiotherapy technique, and to assess whether the current imaging protocol and treatment margins are sufficient. For nine patients undergoing SIB breast radiotherapy, in-field MV images of the SIB fields were acquired on days that regular treatment verification imaging was performed. The in-field images were matched offline according to the scar wire on digitally reconstructed radiographs. The offline image correction results were then applied to a margin recipe formula to calculate safe margins that account for random and systematic uncertainties in the position of the boost volume when an offline correction protocol has been applied. After offline assessment of the acquired images, 96% were within the tolerance set in the current department imaging protocol. Retrospectively applying the maximum position deviations in the Eclipse™ treatment planning system demonstrated that the clinical target volume (CTV) boost received a minimum dose difference of 0.4% and a maximum dose difference of 1.4% less than planned. Furthermore, applying our results to the Van Herk margin formula, to ensure that 90% of patients receive 95% of the prescribed dose, the calculated CTV margins were comparable to those in the current departmental procedure. Based on the in-field boost images acquired and the feasible application of these results to the margin formula, the current CTV-planning target volume margins used are appropriate for the accurate treatment of the SIB boost volume without additional imaging.
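
    The margin recipe referred to above is commonly written as M = 2.5Σ + 0.7σ, where Σ is the systematic error (SD of per-patient mean setup errors) and σ the random error (RMS of per-patient daily SDs), so that 90% of patients receive at least 95% of the prescribed dose. A minimal sketch with hypothetical per-patient shifts rather than the study's data:

        import numpy as np

        def van_herk_margin(per_patient_shifts_mm):
            """per_patient_shifts_mm: list of arrays, one array of daily shifts per patient."""
            means = np.array([np.mean(s) for s in per_patient_shifts_mm])
            sds = np.array([np.std(s, ddof=1) for s in per_patient_shifts_mm])
            Sigma = np.std(means, ddof=1)          # systematic: SD of patient means
            sigma = np.sqrt(np.mean(sds ** 2))     # random: RMS of daily SDs
            return 2.5 * Sigma + 0.7 * sigma

        rng = np.random.default_rng(1)
        shifts = [rng.normal(rng.normal(0, 2), 1.5, size=20) for _ in range(9)]  # 9 patients
        print(f"CTV-to-PTV margin: {van_herk_margin(shifts):.1f} mm")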

  18. Optimal allocation of bulk water supplies to competing use sectors based on economic criterion - An application to the Chao Phraya River Basin, Thailand

    NASA Astrophysics Data System (ADS)

    Divakar, L.; Babel, M. S.; Perret, S. R.; Gupta, A. Das

    2011-04-01

    The study develops a model for optimal bulk allocations of limited available water based on an economic criterion to competing use sectors such as agriculture, domestic, industry and hydropower. The model comprises a reservoir operation module (ROM) and a water allocation module (WAM). ROM determines the amount of water available for allocation, which is used as an input to WAM with an objective function to maximize the net economic benefits of bulk allocations to different use sectors. The total net benefit functions for agriculture and hydropower sectors and the marginal net benefit from domestic and industrial sectors are established and are categorically taken as fixed in the present study. The developed model is applied to the Chao Phraya basin in Thailand. The case study results indicate that the WAM can improve net economic returns compared to the current water allocation practices.
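
    If the sectoral net-benefit rates are simplified to constants (the paper's agriculture and hydropower benefit functions are nonlinear, so this is a deliberate simplification), the allocation step reduces to a small linear program: maximize total net benefit subject to the water made available by the reservoir module. A sketch with hypothetical benefit rates and demands:

        from scipy.optimize import linprog

        available = 100.0                        # water from ROM (arbitrary units)
        benefit = [0.9, 2.5, 2.0, 0.4]           # net benefit per unit: agri, domestic, industry, hydro
        upper = [80.0, 25.0, 30.0, 60.0]         # sector demands (upper bounds)

        res = linprog(c=[-b for b in benefit],   # linprog minimizes, so negate the benefits
                      A_ub=[[1.0, 1.0, 1.0, 1.0]], b_ub=[available],
                      bounds=list(zip([0.0] * 4, upper)))
        print("allocations:", res.x, "total net benefit:", -res.fun)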

  19. Dosimetric evaluation of planning target volume margin reduction for prostate cancer via image-guided intensity-modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-07-01

    The aim of this study was to quantitatively estimate the dosimetric benefits of the image-guided radiation therapy (IGRT) system for prostate intensity-modulated radiation therapy (IMRT) delivery. The cases of eleven patients who underwent IMRT for prostate cancer without a prostatectomy at our institution between October 2012 and April 2014 were retrospectively analyzed. For every patient, clinical target volume (CTV) to planning target volume (PTV) margins were uniformly applied: 3 mm, 5 mm, 7 mm, 10 mm, 12 mm, and 15 mm. For each margin size, the IMRT plans were independently optimized by one medical physicist using Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI) in order to maintain the plan quality. The maximum geometrical margin (MGM) for every CT image set, defined as the smallest margin encompassing the rectum in at least one slice, was between 13 mm and 26 mm. The percentage of the rectum overlapping the PTV (%V_ROV), the rectal normal tissue complication probability (NTCP) and the mean rectal dose (%RD_mean) increased in proportion to the increase of the PTV margin. However, the bladder NTCP remained near zero regardless of the increase of the PTV margin, while the percentage of the bladder overlapping the PTV (%V_BOV) and the mean bladder dose (%BD_mean) increased in proportion to the increase of the PTV margin. For patients without a relatively large rectum or a small bladder, the increases observed for rectal NTCP, %RD_mean and %BD_mean per 1-mm PTV margin size were 1.84%, 2.44% and 2.90%, respectively. Unlike the rectum or the bladder, the maximum dose to each femoral head was little affected by the PTV margin. This quantitative study of PTV margin reduction supports the conclusion that IG-IMRT enhances the clinical effectiveness of prostate cancer treatment by reducing normal organ complications at a similar level of PTV coverage.

  20. The build-up, configuration, and dynamical sensitivity of the Eurasian ice-sheet complex to Late Weichselian climatic and oceanic forcing

    NASA Astrophysics Data System (ADS)

    Patton, Henry; Hubbard, Alun; Andreassen, Karin; Winsborrow, Monica; Stroeven, Arjen P.

    2016-12-01

    The Eurasian ice-sheet complex (EISC) was the third largest ice mass during the Last Glacial Maximum (LGM), after the Antarctic and North American ice sheets. Despite its global significance, a comprehensive account of its evolution from independent nucleation centres to its maximum extent is conspicuously lacking. Here, a first-order, thermomechanical model, robustly constrained by empirical evidence, is used to investigate the dynamics of the EISC throughout its build-up to its maximum configuration. The ice flow model is coupled to a reference climate and applied at 10 km spatial resolution across a domain that includes the three main spreading centres of the Celtic, Fennoscandian and Barents Sea ice sheets. The model is forced with the NGRIP palaeo-isotope curve from 37 ka BP onwards and model skill is assessed against collated flowsets, marginal moraines, exposure ages and relative sea-level history. The evolution of the EISC to its LGM configuration was complex and asynchronous; the western, maritime margins of the Fennoscandian and Celtic ice sheets responded rapidly and advanced across their continental shelves by 29 ka BP, yet the maximum areal extent (5.48 × 10⁶ km²) and volume (7.18 × 10⁶ km³) of the ice complex were attained some 6 ka later at c. 22.7 ka BP. This maximum stand was short-lived as the North Sea and Atlantic margins were already in retreat whilst eastern margins were still advancing up until c. 20 ka BP. High rates of basal erosion are modelled beneath ice streams and outlet glaciers draining the Celtic and Fennoscandian ice sheets with extensive preservation elsewhere due to frozen subglacial conditions, including much of the Barents and Kara seas. Here, and elsewhere across the Norwegian shelf and North Sea, high pressure subglacial conditions would have promoted localised gas hydrate formation.

  1. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  2. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with C_T:C_G = 1/3 (where C_T and C_G are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  3. Is the profitability of Canadian freestall farms associated with their performance on an animal welfare assessment?

    PubMed

    Villettaz Robichaud, M; Rushen, J; de Passillé, A M; Vasseur, E; Haley, D; Orsel, K; Pellerin, D

    2018-03-01

    Improving animal welfare on farm can sometimes require substantial financial investments. The Canadian dairy industry recently updated its Code of Practice for the care of dairy animals and created a mandatory on-farm animal care assessment (proAction Animal Care). Motivating dairy farmers to follow the recommendations of the Code of Practice and successfully meet the targets of the on-farm assessment can be enhanced by financial gain associated with improved animal welfare. The aim of the current study was to evaluate the association between meeting or not meeting several criteria from an on-farm animal welfare assessment and the farms' productivity and profitability indicators. Data from 130 freestall farms (20 using automatic milking systems) were used to calculate the results of the animal care assessment. Productivity and profitability indicators, including milk production, somatic cell count, reproduction, and longevity, were retrieved from the regional dairy herd improvement association databases. Economic margins over replacement costs were also calculated. Univariable and multivariable linear regression models were used to evaluate the associations between welfare and productivity and profitability indicators. The proportion of automatic milking system farms that met the proAction criterion for hock lesions was higher than that of parlor farms, and lower for the neck lesion criterion. The proAction criterion for lameness prevalence was significantly associated with average corrected milk production per year. Average days in milk (DIM) at first breeding acted as an effect modifier for this association, resulting in a steeper increase of milk production with increasing average DIM at first breeding in farms that met the criterion. The reproduction and longevity indicators studied were not significantly associated with meeting or not meeting the proAction criteria investigated in this study. Meeting the proAction lameness prevalence parameter was associated with a profitability margin per cow over replacement cost that was $236 higher than in farms that did not. These results suggest that associations are present between meeting the lameness prevalence benchmark of the Animal Care proAction Initiative and freestall farms' productivity and profitability. Overall, meeting the animal-based criteria evaluated in this study was not detrimental to freestall farms' productivity and profitability. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
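
    The AUC itself has a convenient rank interpretation: it equals the probability that a randomly chosen diseased subject's score (for example, the combined predictive probability of disease) exceeds that of a randomly chosen non-diseased subject. A small sketch with synthetic scores, not the paper's Bayesian estimator:

        import numpy as np

        def auc(scores_diseased, scores_healthy):
            """Mann-Whitney form of the AUC, with ties counted as one half."""
            d = np.asarray(scores_diseased)[:, None]
            h = np.asarray(scores_healthy)[None, :]
            return (d > h).mean() + 0.5 * (d == h).mean()

        rng = np.random.default_rng(2)
        healthy = rng.normal(0.0, 1.0, 200)
        diseased = rng.normal(1.2, 1.0, 200)    # higher mean score when diseased
        print(f"AUC ~ {auc(diseased, healthy):.3f}")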

  5. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
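
    Model selection via the BIC follows BIC = -2 ln L + k ln n, with the lowest value preferred. The sketch below applies it to hypothetical maximized log-likelihoods for two-, three-, and four-state kinetic models; the numbers are invented for illustration, not results from the paper.

        import math

        def bic(log_likelihood, n_params, n_photons):
            return -2.0 * log_likelihood + n_params * math.log(n_photons)

        n_photons = 2000
        candidates = {                 # hypothetical (maximized ln L, free parameters)
            "2-state": (-5120.0, 3),
            "3-state": (-5040.0, 6),
            "4-state": (-5036.0, 10),  # barely better fit, heavily penalized
        }
        scores = {m: bic(ll, k, n_photons) for m, (ll, k) in candidates.items()}
        print(min(scores, key=scores.get), scores)   # selects the 3-state model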

  6. Maximization of the Thermoelectric Cooling of a Graded Peltier Device by Analytical Heat-Equation Resolution

    NASA Astrophysics Data System (ADS)

    Thiébaut, E.; Goupil, C.; Pesty, F.; D'Angelo, Y.; Guegan, G.; Lecoeur, P.

    2017-12-01

    Increasing the maximum cooling effect of a Peltier cooler can be achieved through material and device design. The use of inhomogeneous, functionally graded materials may be adopted in order to increase maximum cooling without improvement of the ZT (figure of merit); however, these systems are usually based on the assumption that local optimization of the ZT is the suitable criterion to increase thermoelectric performance. We solve the heat equation in a graded material and perform both analytical and numerical analysis of a graded Peltier cooler. We find a local criterion that we use to assess the possible improvement of graded materials for thermoelectric cooling. A fair improvement of the cooling effect (up to 36%) is predicted for semiconductor materials, and the best graded system for cooling is described. The influence of the equation of state of the electronic gas of the material is discussed, and the difference in terms of entropy production between the graded and the classical system is also described.

  7. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  8. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data, except in structural applications where interlaminar stresses are important, since these may cause failure mechanisms such as debonding or delamination.
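
    Of the criteria named above, the maximum strain criterion is the simplest to state: a ply is deemed failed when any in-plane strain component exceeds its allowable. A minimal sketch with invented allowables (not material data, and not the COMET implementation):

        # Maximum strain check for one ply; all values are illustrative.
        def max_strain_fails(eps, allowables):
            """eps: ply strains {e1, e2, g12}; allowables: {e1t, e1c, e2t, e2c, gamma12}."""
            return (eps["e1"] > allowables["e1t"] or eps["e1"] < -allowables["e1c"] or
                    eps["e2"] > allowables["e2t"] or eps["e2"] < -allowables["e2c"] or
                    abs(eps["g12"]) > allowables["gamma12"])

        ply = {"e1": 0.009, "e2": -0.002, "g12": 0.012}
        allow = {"e1t": 0.011, "e1c": 0.009, "e2t": 0.004, "e2c": 0.007, "gamma12": 0.015}
        print(max_strain_fails(ply, allow))   # False: all components within allowables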

  9. SU-E-T-364: Estimating the Minimum Number of Patients Required to Estimate the Required Planning Target Volume Margins for Prostate Glands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhtiari, M; Schmitt, J; Sarfaraz, M

    2015-06-15

    Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatments. Daily localization was done using the skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism is used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean value of the average margin between groups tends to converge to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
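
    The grouping experiment can be mimicked with synthetic setup data: draw a cohort, split it into groups of a given size, compute a Van Herk-style margin (M = 2.5Σ + 0.7σ) per group, and track how the spread of group margins shrinks as group size grows. All numbers below are assumptions for illustration, not the study's measurements.

        import numpy as np

        rng = np.random.default_rng(3)

        def margin(patients):                 # patients: array (n_patients, n_fractions)
            Sigma = np.std(patients.mean(axis=1), ddof=1)              # systematic
            sigma = np.sqrt((patients.std(axis=1, ddof=1) ** 2).mean())  # random (RMS)
            return 2.5 * Sigma + 0.7 * sigma

        # Synthetic cohort: 320 patients x 29 fractions, per-patient mean offsets drawn first.
        cohort = rng.normal(rng.normal(0, 3, (320, 1)), 2, (320, 29))

        for group_size in (5, 10, 20, 64, 160):
            groups = cohort.reshape(-1, group_size, cohort.shape[1])
            margins = [margin(g) for g in groups]
            print(group_size, f"mean={np.mean(margins):.2f} mm, STD={np.std(margins):.2f} mm")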

  10. [Influence of different designs of marginal preparation on stress distribution in the mandibular premolar restored with endocrown].

    PubMed

    Guo, Jing; Wang, Xiao-Yu; Li, Xue-Sheng; Sun, Hai-Yang; Liu, Lin; Li, Hong-Bo

    2016-02-01

    To evaluate the effect of different designs of marginal preparation on stress distribution in the mandibular premolar restored with an endocrown using the three-dimensional finite element method. Four models with different designs of marginal preparation, including the flat margin, 90° shoulder, 135° shoulder and chamfer shoulder, were established to imitate a mandibular first premolar restored with an endocrown. A load of 100 N was applied at the intersection of the long axis and the occlusal surface, either parallel to or at an angle of 45° to the long axis of the tooth. The maximum values of von Mises stress and the stress distribution around the cervical region of the abutment and the endocrown with different designs of marginal preparation were analyzed. The load parallel to the long axis of the tooth caused obvious stress concentration in the lingual portions of both the cervical region of the tooth tissue and the restoration. The stress distribution characteristics in the cervical region of the models with a flat margin and a 90° shoulder were more uniform than those of the models with a 135° shoulder and a chamfer shoulder. Loading at 45° to the long axis caused stress concentration mainly on the buccal portion of the cervical region, and the model with a flat margin showed the most favorable stress distribution patterns, with a greater maximum von Mises stress under this circumstance than with parallel loading. Irrespective of the loading direction, the stress value was lowest in the flat margin model, where the stress value in the cervical region of the endocrown was greater than that in the counterpart of the tooth tissue. The stress level on the enamel was higher than that on the nearby dentin in the flat margin model. From the stress distribution point of view, endocrowns with a flat margin, followed by a 90° shoulder, are recommended.

  11. Accuracy of digital images in the detection of marginal microleakage: an in vitro study.

    PubMed

    Alvarenga, Fábio Augusto; Andrade, Marcelo Ferrarezi; Pinelli, Camila; Rastelli, Alessanda Nara; Victorino, Keli Regina; Loffredo, Leonor de

    2012-08-01

    To evaluate the accuracy of Image Tool Software 3.0 (ITS 3.0) to detect marginal microleakage, using the stereomicroscope as the validation criterion and ITS 3.0 as the tool under study. Class V cavities were prepared at the cementoenamel junction of 61 bovine incisors, and 53 halves of them were used. Using the stereomicroscope, microleakage was classified dichotomously: presence or absence. Next, ITS 3.0 was used to obtain measurements of the microleakage, with 0.75 taken as the cut-off point: values equal to or greater than 0.75 indicated its presence, while values between 0.00 and 0.75 indicated its absence. Sensitivity and specificity were calculated as point estimates with 95% confidence intervals (95% CI). The accuracy of ITS 3.0 was verified, with a sensitivity of 0.95 (95% CI: 0.89 to 1.00) and a specificity of 0.92 (95% CI: 0.84 to 0.99). Digital diagnosis of marginal microleakage using ITS 3.0 was sensitive and specific.

  12. Histopathological Validation of the Surface-Intermediate-Base Margin Score for Standardized Reporting of Resection Technique during Nephron Sparing Surgery.

    PubMed

    Minervini, Andrea; Campi, Riccardo; Kutikov, Alexander; Montagnani, Ilaria; Sessa, Francesco; Serni, Sergio; Raspollini, Maria Rosaria; Carini, Marco

    2015-10-01

    The surface-intermediate-base margin score is a novel standardized reporting system for resection techniques during nephron sparing surgery. We validated the surgeon-assessed surface-intermediate-base score against microscopic histopathological assessment of partial nephrectomy specimens. Between June and August 2014, data were prospectively collected from 40 consecutive patients undergoing nephron sparing surgery. The surface-intermediate-base score was assigned to all cases. The score-specific areas were color coded with tissue margin ink and sectioned for histological evaluation of healthy renal margin thickness. Maximum, minimum and mean thickness of healthy renal margin for each score-specific area grade (surface [S] = 0, S = 1; intermediate [I] or base [B] = 0, I or B = 1, I or B = 2) was reported. The Mann-Whitney U and Kruskal-Wallis tests were used to compare the thickness of healthy renal margin for S = 0 vs 1 and I or B = 0 vs 1 vs 2 grades, respectively. Maximum, minimum and mean thickness of healthy renal margin were significantly different among score-specific area grades S = 0 vs 1, and I or B = 0 vs 1, 0 vs 2 and 1 vs 2 (p <0.001). The main limitations of the study are the low number of I or B = 1 and I or B = 2 samples and the assumption that each microscopic slide reflects the entire score-specific area for histological analysis. The surface-intermediate-base scoring method can be readily harnessed in real-world clinical practice and accurately mirrors histopathological analysis for quantification and reporting of healthy renal margin thickness removed during tumor excision. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unrug, R.

    The break-up of Rodinia, the supercontinent assembled in the Middle Proterozoic chelogenic cycle (1.65-1.0 Ga), and the simultaneous assembly of the Gondwana Supercontinent were the major tectonic events of the Neoproterozoic. Laurentia occupied a central keystone position in the configuration of Rodinia. Its break-up resulted in rearrangement of Rodinia fragments: some were incorporated in the accreting Gondwana, while Laurentia, Baltica and Siberia drifted independently. Reconstructions of the position of Laurentia in the Rodinia Supercontinent are based on two criteria. The first is the continuity of Middle Proterozoic mobile belts suturing the older cratons and the match of piercing points of the mobile belts at the post-Middle Proterozoic margins of the older cratons. The second is the similarity of sedimentary sequences along Late Proterozoic passive margins formed during break-up of Rodinia. The first criterion allows for several interpretations. The second may be invalid, as conjugate margins developing over an oblique detachment will accumulate dissimilar sedimentary sequences. In reconstructions of the Gondwana Supercontinent, the recently redefined Salvador-Congo craton occupied the central keystone position, between the East Gondwana continent and a number of smaller cratons of West Gondwana. It is entirely surrounded by collisional mobile belts, all containing important transcurrent shear zone systems. The margins of the Salvador-Congo craton were facing three major Late Proterozoic oceans.

  14. The effect of moonlight on observation of cloud cover at night, and application to cloud climatology

    NASA Technical Reports Server (NTRS)

    Hahn, Carole J.; Warren, Stephen G.; London, Julius

    1995-01-01

    Ten years of nighttime weather observations from the Northern Hemisphere in December were classified according to the illuminance of moonlight or twilight on the cloud tops, and a threshold level of illuminance was determined, above which the clouds are apparently detected adequately. This threshold corresponds to light from a full moon at an elevation angle of 6 deg, light from a partial moon at higher elevation, or twilight from the sun less than 9 deg below the horizon. It permits the use of about 38% of the observations made with the sun below the horizon. The computed diurnal cycles of total cloud cover are altered considerably when this moonlight criterion is imposed. Maximum cloud cover over much of the ocean is now found to be at night or in the morning, whereas computations obtained without benefit of the moonlight criterion, as in our published atlases, showed the time of maximum to be noon or early afternoon in many regions. The diurnal cycles of total cloud cover we obtain are compared with those of the International Satellite Cloud Climatology Project (ISCCP) for a few regions; they are generally in better agreement if the moonlight criterion is imposed on the surface observations. Using the moonlight criterion, we have analyzed 10 years (1982-91) of surface weather observations over land and ocean, worldwide, for total cloud cover and for the frequency of occurrence of clear sky, fog, and precipitation. The global average cloud cover (average of day and night) is about 2% higher if the moonlight criterion is imposed than if all observations are used. The difference is greater in winter than in summer, because of the fewer hours of darkness in summer. The amplitude of the annual cycle of total cloud cover over the Arctic Ocean and at the South Pole is diminished by a few percent when the moonlight criterion is imposed. The average cloud cover for 1982-91 is found to be 55% for Northern Hemisphere land, 53% for Southern Hemisphere land, 66% for Northern Hemisphere ocean, and 70% for Southern Hemisphere ocean, giving a global average of 64%. The global average for daytime is 64.6%; for nighttime 63.3%.

  15. Comparison of marginal accuracy of castings fabricated by conventional casting technique and accelerated casting technique: an in vitro study.

    PubMed

    Reddy, S Srikanth; Revathi, Kakkirala; Reddy, S Kranthikumar

    2013-01-01

    Conventional casting technique is time consuming when compared to the accelerated casting technique. In this study, the marginal accuracy of castings fabricated using the accelerated and conventional casting techniques was compared. Twenty wax patterns were fabricated, and the marginal discrepancy between the die and patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the casting. The castings were measured for marginal discrepancies and compared. Castings fabricated using the conventional casting technique showed less vertical marginal discrepancy than the castings fabricated by the accelerated casting technique. The difference was statistically highly significant. The conventional casting technique produced better marginal accuracy when compared to accelerated casting. The vertical marginal discrepancy produced by the accelerated casting technique was well within the maximum clinical tolerance limits. The accelerated casting technique can be used to save lab time to fabricate clinical crowns with acceptable vertical marginal discrepancy.

  16. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  17. What Is Better Than Coulomb Failure Stress? A Ranking of Scalar Static Stress Triggering Mechanisms from 10⁵ Mainshock-Aftershock Pairs

    NASA Astrophysics Data System (ADS)

    Meade, Brendan J.; DeVries, Phoebe M. R.; Faller, Jeremy; Viegas, Fernanda; Wattenberg, Martin

    2017-11-01

    Aftershocks may be triggered by the stresses generated by preceding mainshocks. The temporal frequency and maximum size of aftershocks are well described by the empirical Omori and Bath laws, but spatial patterns are more difficult to forecast. Coulomb failure stress is perhaps the most common criterion invoked to explain spatial distributions of aftershocks. Here we consider the spatial relationship between patterns of aftershocks and a comprehensive list of 38 static elastic scalar metrics of stress (including stress tensor invariants, maximum shear stress, and Coulomb failure stress) from 213 coseismic slip distributions worldwide. The rates of true-positive and false-positive classification of regions with and without aftershocks are assessed with receiver operating characteristic analysis. We infer that the stress metrics that are most consistent with observed aftershock locations are maximum shear stress and the magnitude of the second and third invariants of the stress tensor. These metrics are significantly better than random assignment at a significance level of 0.005 in over 80% of the slip distributions. In contrast, the widely used Coulomb failure stress criterion is distinguishable from random assignment in only 51-64% of the slip distributions. These results suggest that a number of alternative scalar metrics are better predictors of aftershock locations than classic Coulomb failure stress change.
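
    The best-performing metrics above are simple functions of the stress-change tensor. The sketch below computes the maximum shear stress and the second and third deviatoric invariants (J2, J3) from an arbitrary symmetric 3x3 tensor; the numbers are illustrative, not a coseismic model.

        import numpy as np

        stress = np.array([[ 1.2,  0.4, -0.1],     # symmetric stress-change tensor (MPa)
                           [ 0.4, -0.8,  0.3],
                           [-0.1,  0.3,  0.2]])

        principal = np.sort(np.linalg.eigvalsh(stress))[::-1]   # s1 >= s2 >= s3
        max_shear = 0.5 * (principal[0] - principal[-1])

        dev = stress - np.trace(stress) / 3.0 * np.eye(3)       # deviatoric part
        J2 = 0.5 * np.trace(dev @ dev)                          # second invariant
        J3 = np.linalg.det(dev)                                 # third invariant

        print(f"max shear = {max_shear:.3f} MPa, J2 = {J2:.3f}, J3 = {J3:.4f}")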

  18. A new and efficient theoretical model to analyze chirped grating distributed feedback lasers

    NASA Astrophysics Data System (ADS)

    Arif, Muhammad

    Threshold conditions of a distributed feedback (DFB) laser with a linearly chirped grating are investigated using a new and efficient method. A DFB laser with a chirped grating is found to have significantly modified lasing characteristics. The coupled wave equations for these lasers are derived and solved using a power series method to obtain the threshold condition. A Newton-Raphson routine is used to solve the threshold conditions numerically to obtain the threshold gain and lasing wavelengths. To prove the validity of this model, it is applied to both conventional index-coupled and complex-coupled DFB lasers. The threshold gain margins are calculated as functions of the ratio of the gain coupling to index coupling (|κ_g|/|κ_n|) and the phase difference between the index and gain gratings. It was found that for coupling coefficient |κ|l < 0.9, the laser shows a mode degeneracy at particular values of the ratio |κ_g|/|κ_n| for cleaved facets. We found that at phase differences of π/2 and 3π/2 between the gain and index gratings, for an AR-coated complex-coupled laser, the laser becomes multimode and a different mode starts to lase. We also studied the effect of the facet reflectivity (both magnitude and phase) on the gain margin of a complex-coupled DFB laser. Although the gain margin varies slowly with the magnitude of the facet reflectivity, it shows large variations as a function of the phase. Spatial hole burning (SHB) was found to be minimum at phase differences nπ, n = 0, 1, ... and maximum at phase differences of π/2 and 3π/2. The single mode gain margin of an index-coupled, linearly chirped grating (CG) DFB laser is calculated for different chirping factors and coupling constants. We found that there is clearly an optimum chirping for which the single mode gain margin is maximum. The gain margins were also calculated for different positions of the cavity center. The effect of the facet reflectivities and their phases on the gain margin was investigated. We found that the gain margin is maximum and the SHB is minimum when the cavity center is at the middle of the laser cavity. The effects of chirping on the threshold gain, gain margin and SHB for different parameters of these lasers, such as the coupling coefficients and facet reflectivities, are studied. The single mode yield of these lasers is calculated and compared with that of a uniform grating DFB laser.

  19. Disputes over moral status: philosophy and science in the future of bioethics.

    PubMed

    Bortolotti, Lisa

    2007-06-01

    Various debates in bioethics have been focused on whether non-persons, such as marginal humans or non-human animals, deserve respectful treatment. It has been argued that, where we cannot agree on whether these individuals have moral status, we might agree that they have symbolic value and ascribe to them moral value in virtue of their symbolic significance. In the paper I resist the suggestion that symbolic value is relevant to ethical disputes in which the respect for individuals with no intrinsic moral value is in conflict with the interests of individuals with intrinsic moral value. I then turn to moral status and discuss the suitability of personhood as a criterion. There are some desiderata for a criterion of moral status: it should be applicable on the basis of our current scientific knowledge; it should have a solid ethical justification; and it should be in line with some of our moral intuitions and social practices. Although it highlights an important connection between the possession of some psychological properties and eligibility for moral status, the criterion of personhood does not meet the desiderata above. I suggest that all intentional systems should be credited with moral status in virtue of having preferences and interests that are relevant to their well-being.

  20. Nonlinear self-sustained structures and fronts in spatially developing wake flows

    NASA Astrophysics Data System (ADS)

    Pier, Benoît; Huerre, Patrick

    2001-05-01

    A family of slowly spatially developing wakes with variable pressure gradient is numerically demonstrated to sustain a synchronized finite-amplitude vortex street tuned at a well-defined frequency. This oscillating state is shown to be described by a steep global mode exhibiting a sharp Dee-Langer-type front at the streamwise station of marginal absolute instability. The front acts as a wavemaker which sends out nonlinear travelling waves in the downstream direction, the global frequency being imposed by the real absolute frequency prevailing at the front station. The nonlinear travelling waves are determined to be governed by the local nonlinear dispersion relation resulting from a temporal evolution problem on a local wake profile considered as parallel. Although the vortex street is fully nonlinear, its frequency is dictated by a purely linear marginal absolute instability criterion applied to the local linear dispersion relation.

  1. Simulation of magnetic holes formation in the magnetosheath

    NASA Astrophysics Data System (ADS)

    Ahmadi, Narges; Germaschewski, Kai; Raeder, Joachim

    2017-12-01

    Magnetic holes have been frequently observed in the Earth's magnetosheath and are believed to be the consequence of the nonlinear evolution of the mirror instability. Mirror mode perturbations mainly form as magnetic holes in regions where the plasma is marginally mirror stable with respect to the linear instability criterion. We present an expanding box particle-in-cell simulation to mimic the changing conditions in the magnetosheath as the plasma is convected through it that produces mirror mode magnetic holes. We show that in the initial nonlinear evolution, where the plasma conditions are mirror unstable, the magnetic peaks are dominant, while later, as the plasma relaxes toward marginal stability, the fluctuations evolve into deep magnetic holes. While the averaged plasma parameters in the simulation remain close to the mirror instability threshold, the local plasma in the magnetic holes is highly unstable to mirror instability and locally mirror stable in the magnetic peaks.

  2. [Value of asymmetry criterion in MRI for the diagnosis of small pelvic lymphadenopathies (less than or equal to 1 cm)].

    PubMed

    Roy, C; Le Bras, Y; Mangold, L; Tuchmann, C; Vasilescu, C; Saussine, C; Jacqmin, D

    1996-12-01

    The purpose of this study was to determine whether lymph node asymmetry in small (< 1.0 cm) pelvic nodes is a significant prognostic feature in determining metastatic disease. 216 patients presenting with pelvic carcinoma underwent MR imaging; the findings were correlated with pathological results obtained at surgery. We considered, on the axial plane, the maximum diameter (MAD) of round- or oval-shaped suspicious masses. Two different cut-off values were determined: node diameter greater than 1.0 cm (criterion 1), and node diameter greater than 0.5 cm with asymmetry relative to the opposite side for nodes ranging from 0.5 cm to 1.0 cm (criterion 2). With criterion 1, MR imaging had an accuracy of 88%, a sensitivity of 65%, a specificity of 96%, a PPV of 88% and an NPV of 88% in the detection of pelvic node metastasis. With criterion 2, MR imaging had an accuracy of 85%, a sensitivity of 75%, a specificity of 89%, a PPV of 71% and an NPV of 91%. Normal small asymmetric lymph nodes were present in 5.6% of cases. Asymmetry of normal or inflammatory pelvic nodes is not uncommon. It cannot be relied on to diagnose metastatic involvement in cases of small suspicious lymph nodes, especially because of its low specificity and positive predictive value.

  3. The Mohr-Coulomb criterion for intact rock strength and friction - a re-evaluation and consideration of failure under polyaxial stresses

    NASA Astrophysics Data System (ADS)

    Hackston, A.; Rutter, E.

    2015-12-01

    Darley Dale and Pennant sandstones were tested under conditions of both axisymmetric shortening and extension normal to bedding. These are the two extremes of loading under polyaxial stress conditions. Failure under generalized stress conditions can be predicted from the Mohr-Coulomb failure criterion under axisymmetric compression conditions, provided the best form of the polyaxial failure criterion is known. The sandstone data are best reconciled using the Mogi (1967) empirical criterion. Fault plane orientations produced vary greatly with respect to the maximum compression direction in the two loading configurations. The normals to the Mohr-Coulomb failure envelopes do not predict the orientations of the fault planes eventually produced. Frictional sliding on variously inclined sawcuts and failure surfaces produced in intact rock samples was also investigated. The friction coefficient is not affected by fault plane orientation in a given loading configuration, but friction coefficients in extension were systematically lower than in compression for both rock types and could be reconciled by a variant of the Mogi (1967) failure criterion. Friction data for these and other porous sandstones accord well with the Byerlee (1977) generalization that rock friction is largely independent of rock type. For engineering and geodynamic modelling purposes, the stress-state-dependent friction coefficient should be used for sandstones, but it is not known to what extent this might apply to other rock types.
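
    For reference, the Mohr-Coulomb criterion bounds the shear stress on any plane by τ ≤ c + μσ_n; with principal stresses s1 ≥ s3, the normal and shear stresses on a plane whose normal is at angle θ to s1 follow from the Mohr circle. A minimal check over plane orientations, with illustrative numbers rather than the Darley Dale or Pennant data:

        import numpy as np

        def mohr_coulomb_fails(s1, s3, cohesion, mu):
            """True if any plane orientation violates tau <= cohesion + mu * sigma_n."""
            theta = np.linspace(0.0, np.pi / 2.0, 1801)       # plane-normal angle to s1
            sigma_n = 0.5 * (s1 + s3) + 0.5 * (s1 - s3) * np.cos(2 * theta)
            tau = 0.5 * (s1 - s3) * np.sin(2 * theta)
            return bool(np.any(tau > cohesion + mu * sigma_n))

        # Hypothetical states (MPa): the first exceeds the envelope, the second does not.
        print(mohr_coulomb_fails(s1=180.0, s3=30.0, cohesion=20.0, mu=0.6))   # True
        print(mohr_coulomb_fails(s1=120.0, s3=40.0, cohesion=25.0, mu=0.6))   # False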

  4. Assessment of tsunami hazard to the U.S. East Coast using relationships between submarine landslides and earthquakes

    USGS Publications Warehouse

    ten Brink, Uri S.; Lee, H.J.; Geist, E.L.; Twichell, D.

    2009-01-01

    Submarine landslides along the continental slope of the U.S. Atlantic margin are potential sources for tsunamis along the U.S. East Coast. The magnitude of potential tsunamis depends on the volume and location of the landslides, and tsunami frequency depends on their recurrence interval. However, the size and recurrence interval of submarine landslides along the U.S. Atlantic margin are poorly known. Well-studied landslide-generated tsunamis in other parts of the world have been shown to be associated with earthquakes. Because the size distribution and recurrence interval of earthquakes are generally better known than those of submarine landslides, we propose here to estimate the size and recurrence interval of submarine landslides from the size and recurrence interval of earthquakes in the near vicinity of the landslides. To do so, we calculate the maximum expected landslide size for a given earthquake magnitude, use the recurrence interval of earthquakes to estimate the recurrence interval of landslides, and assume a threshold landslide size that can generate a destructive tsunami. The maximum expected landslide size for a given earthquake magnitude is calculated in three ways: by slope stability analysis for catastrophic slope failure on the Atlantic continental margin, by using a land-based compilation of the maximum observed distance from earthquake to liquefaction, and by using a land-based compilation of the maximum observed area of earthquake-induced landslides. We find that the calculated distances and failure areas from the slope stability analysis are similar to or slightly smaller than the maximum triggering distances and failure areas in subaerial observations. The results from all three methods compare well with the slope failure observations of the Mw = 7.2, 1929 Grand Banks earthquake, the only historical tsunamigenic earthquake along the North American Atlantic margin. The results further suggest that a Mw = 7.5 earthquake (the largest expected earthquake in the eastern U.S.) must be located offshore and within 100 km of the continental slope to induce a catastrophic slope failure; thus, repeats of the 1755 Cape Ann and 1886 Charleston earthquakes are not expected to cause landslides on the continental slope. The observed rate of seismicity offshore the U.S. Atlantic coast is very low, with the exception of New England, where some microseismicity is observed. An extrapolation of annual strain rates from the Canadian Atlantic continental margin suggests that the New England margin may experience the equivalent of a magnitude 7 earthquake on average every 600-3000 yr. A minimum triggering earthquake magnitude of 5.5 is suggested for a sufficiently large submarine failure to generate a devastating tsunami, and only if the epicenter is located within the continental slope.

  5. Surgical margin-negative endoscopic mucosal resection with simple three-clipping technique: a randomized prospective study (with video).

    PubMed

    Mori, Hirohito; Kobara, Hideki; Nishiyama, Noriko; Fujihara, Shintaro; Kobayashi, Nobuya; Ayaki, Maki; Masaki, Tsutomu

    2016-11-01

    Although endoscopic mucosal resection is an established colorectal polyp treatment, local recurrence occurs in 13% of cases due to inadequate snaring. We evaluated whether pre-clipping to the muscularis propria resulted in resected specimens with negative surgical margins without thermal denaturation. Of 245 polyps from 114 patients with colorectal polyps under 20 mm, we included 188 polyps from 81 patients. We randomly allocated polyps to the conventional injection group (CG) (97 polyps) or the pre-clipping injection group (PG) (91 polyps). The PG received three-point pre-clipping, to ensure an ample grip on the muscle layer on the oral side and on both sides of the tumor, with 4 mL local injection. Endoscopic ultrasonography was performed to measure the resulting bulge. Outcomes included the number of instances of thermal denaturation of the horizontal/vertical margin (HMX/VMX) or positive horizontal/vertical margins (HM+/VM+), the shortest distance from tumor margins to resected edges, and the maximum bulge distances from the tumor surface to the muscularis propria. The numbers of HMX and HM+ were 27 and 6 in the CG, and 9 and 2 in the PG (P = 0.001); the numbers of VMX and VM+ were 8 and 5 in the CG, and 0 and 0 in the PG (P = 0.057). The shortest distance from tumor margin to resected edge [median (range), mm] in the CG and PG was 0.6 (0-2.7) and 4.7 (2.1-8.9) (P = 0.018). The maximum bulge distances were 4.6 (3.0-8.0) and 11.0 (6.8-17.0) (P = 0.005). Pre-clipping enabled surgical margin-negative resection without thermal denaturation.

  6. Determination of the Nonlethal Margin Inside the Visible 'Ice-Ball' During Percutaneous Cryoablation of Renal Tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georgiades, Christos, E-mail: g_christos@hotmail.com; Rodriguez, Ronald, E-mail: rrodrig@jhmi.edu; Azene, Ezana, E-mail: eazene1@jhmi.edu

    2013-06-15

    Objective. The study was designed to determine the distance between the visible 'ice-ball' and the lethal temperature isotherm for normal renal tissue during cryoablation. Methods. The Animal Care Committee approved the study. Nine adult swine were used: three to determine the optimum tissue stain and six to test the hypotheses. They were anesthetized and the left renal artery was catheterized under fluoroscopy. Under MR guidance, the kidney was ablated and (at end of a complete ablation) the nonfrozen renal tissue (surrounding the 'ice-ball') was stained via renal artery catheter. Kidneys were explanted and sent for slide preparation and examination. From each slide, we measured the maximum, minimum, and an in-between distance from the stained to the lethal tissue boundaries (margin). We examined each slide for evidence of 'heat pump' effect. Results. A total of 126 measurements of the margin (visible 'ice-ball'-lethal margin) were made. These measurements were obtained from 29 slides prepared from the 6 test animals. Mean width was 0.75 ± 0.44 mm (maximum 1.15 ± 0.51 mm). It was found to increase adjacent to large blood vessels. No 'heat pump' effect was noted within the lethal zone. Data are limited to normal swine renal tissue. Conclusions. Considering the effects of the 'heat pump' phenomenon for normal renal tissue, the margin was measured to be 1.15 ± 0.51 mm. To approximate the efficacy of the 'gold standard' (partial nephrectomy, ~98%), a minimum margin of 3 mm is recommended (3 × SD). Given these assumptions and extrapolating for renal cancer, which reportedly is more cryoresistant with a lethal temperature of -40 °C, the recommended margin is 6 mm.

  7. CPR methodology with new steady-state criterion and more accurate statistical treatment of channel bow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgartner, S.; Bieli, R.; Bergmann, U. C.

    2012-07-01

    An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an optimized CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set-up for the analyses of Anticipated Operation Occurrences (AOOs) and accidents. In the Monte Carlo approach a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to all other uncertainties affecting CPR. Emphasis is put on quantifying the statistical distribution of channel bow throughout the core using measurement data. The optimized CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from the introduction at KKL. (authors)

  8. Health Economic Data in Reimbursement of New Medical Technologies: Importance of the Socio-Economic Burden as a Decision-Making Criterion.

    PubMed

    Iskrov, Georgi; Dermendzhiev, Svetlan; Miteva-Katrandzhieva, Tsonka; Stefanov, Rumen

    2016-01-01

    Assessment and appraisal of new medical technologies require a balance between the interests of different stakeholders. The final decision should take into account the societal value of new therapies. This perspective paper discusses the socio-economic burden of disease as a specific reimbursement decision-making criterion and calls for its inclusion as a counterbalance to the cost-effectiveness and budget impact criteria. Socio-economic burden is a decision-making criterion that accounts for the diseases for which the assessed medical technology is indicated. This indicator is usually researched through cost-of-illness studies that systematically quantify the socio-economic burden of diseases on the individual and on society. This is a very important consideration, as it illustrates the direct budgetary consequences of diseases in the health system and the indirect costs associated with patient or carer productivity losses. By measuring and comparing the socio-economic burden of different diseases to society, health authorities and payers could benefit in optimizing priority setting and resource allocation. New medical technologies, especially innovative therapies, present an excellent case for the inclusion of socio-economic burden in reimbursement decision-making. Assessment and appraisal have so far concentrated largely on cost-effectiveness and budget impact, marginalizing all other considerations. In this context, data on disease burden and an explicit criterion of socio-economic burden in reimbursement decision-making may be highly beneficial. Realizing the magnitude of the lost socio-economic contribution resulting from the diseases in question could be a reasonable way for policy makers to accept a higher valuation of innovative therapies.

  9. A criterion for establishing life limits. [for Space Shuttle Main Engine service]

    NASA Technical Reports Server (NTRS)

    Skopp, G. H.; Porter, A. A.

    1990-01-01

    The development of a rigorous statistical method that would utilize hardware-demonstrated reliability to evaluate hardware capability and provide ground rules for safe flight margin is discussed. A statistics-based method using the Weibull/Weibayes cumulative distribution function is described, and its advantages and inadequacies are pointed out. Another, more advanced procedure, Single Flight Reliability (SFR), determines a life limit which ensures that the reliability of any single flight is never less than a stipulated value at a stipulated confidence level. Application of the SFR method is illustrated.
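
    A minimal numerical sketch of the single-flight idea (our construction, with hypothetical Weibull parameters; the paper's actual SFR procedure also carries a confidence level):

      import math

      # Weibull life model R(t) = exp(-(t/eta)**beta); parameter values
      # are hypothetical, not taken from SSME data.
      def single_flight_reliability(t, dt, beta, eta):
          """Probability of surviving one more flight of duration dt,
          given survival up to accumulated exposure t."""
          return math.exp((t / eta) ** beta - ((t + dt) / eta) ** beta)

      beta, eta, dt = 2.0, 20000.0, 520.0  # shape, scale, seconds per flight
      for flight in range(1, 6):
          t = (flight - 1) * dt
          print(flight, round(single_flight_reliability(t, dt, beta, eta), 6))
      # A life limit would be set where this value first drops below the
      # stipulated single-flight reliability.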

  10. Can we use genetic and genomic approaches to identify candidate animals for targeted selective treatment?

    PubMed

    Laurenson, Yan C S M; Kyriazakis, Ilias; Bishop, Stephen C

    2013-10-18

    Estimated breeding values (EBV) for faecal egg count (FEC) and genetic markers for host resistance to nematodes may be used to identify resistant animals for selective breeding programmes. Similarly, targeted selective treatment (TST) requires the ability to identify the animals that will benefit most from anthelmintic treatment. A mathematical model was used to combine the two concepts and evaluate the potential of genetic-based methods to identify animals for a TST regime. EBVs obtained by genomic prediction were predicted to be the best determinant criterion for TST in terms of the impact on average empty body weight and average FEC, whereas pedigree-based EBVs for FEC were predicted to be marginally worse than phenotypic FEC as a determinant criterion. Whilst each method has financial implications, if the identification of host resistance is incorporated into wider genomic selection indices or selective breeding programmes, then genetic or genomic information may plausibly be included in TST regimes. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Analysis of the observed and intrinsic durations of Swift/BAT gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2016-07-01

    The duration distributions of 947 GRBs observed by Swift/BAT, and of the subsample of 347 events with measured redshift, which allows the durations to be examined in both the observer and rest frames, are examined. Using a maximum log-likelihood method, mixtures of two and three standard Gaussians are fitted to each sample, and the adequate model is chosen based on the difference in log-likelihoods, the Akaike information criterion, and the Bayesian information criterion. It is found that a two-Gaussian mixture describes the data better than a three-Gaussian one, and that the presumed intermediate-duration class is unlikely to be present in the Swift duration data.
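
    A minimal sketch of this model-selection step (synthetic durations, not the Swift/BAT data; scikit-learn's GaussianMixture assumed):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Fit 2- and 3-component Gaussian mixtures to log10(T90) durations
      # and compare information criteria; lower AIC/BIC is preferred.
      rng = np.random.default_rng(0)
      log_t90 = np.concatenate([rng.normal(-0.3, 0.5, 100),
                                rng.normal(1.5, 0.4, 500)]).reshape(-1, 1)

      for k in (2, 3):
          gm = GaussianMixture(n_components=k, n_init=10).fit(log_t90)
          print(k, "components: AIC=%.1f  BIC=%.1f"
                % (gm.aic(log_t90), gm.bic(log_t90)))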

  12. A Comparison of Propagation Between Apertured Bessel and Gaussian beams

    NASA Astrophysics Data System (ADS)

    Lin, Mei; Yu, Yanzhong

    2009-04-01

    True Bessel beams form a family of diffraction-free beams; thus their most interesting and attractive characteristic is non-diffracting propagation. In optics, comparisons of the maximum propagation distance of Bessel and Gaussian beams were made by Durnin and by Sprangle, but their results conflict because of the difference between their criteria. Because Bessel beams have many potential applications in millimeter wave bands, it is necessary and significant that the comparison be carried out at these bands. A new contrast criterion at millimeter wavelengths is proposed in our paper. Under this criterion, numerical results are presented and a new conclusion is drawn.

  13. Evolution of canalizing Boolean networks

    NASA Astrophysics Data System (ADS)

    Szejka, A.; Drossel, B.

    2007-04-01

    Boolean networks with canalizing functions are used to model gene regulatory networks. In order to learn how such networks may behave under evolutionary forces, we simulate the evolution of a single Boolean network by means of an adaptive walk, which allows us to explore the fitness landscape. Mutations change the connections and the functions of the nodes. Our fitness criterion is the robustness of the dynamical attractors against small perturbations. We find that with this fitness criterion the global maximum is always reached and that there is a huge neutral space of 100% fitness. Furthermore, in spite of having such a high degree of robustness, the evolved networks still share many features with “chaotic” networks.

  14. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  15. Spatial clustering of pixels of a multispectral image

    DOEpatents

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the score that represents the highest similarity. The clustering system may apply a filtering criterion based on the maximum similarity score, so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
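
    A minimal sketch of the max-similarity filtering step (our reading of the record; cosine similarity, the wrap-around edge handling, and the 0.99 threshold are illustrative choices, not from the patent):

      import numpy as np

      # Maximum spectral similarity to any of the 8 neighbors, for an
      # H x W x B multispectral image. np.roll wraps around the edges,
      # a simplification acceptable for a sketch.
      def max_neighbor_similarity(img):
          norm = img / np.linalg.norm(img, axis=2, keepdims=True)
          best = np.full(img.shape[:2], -np.inf)
          for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1),
                         (-1, -1), (-1, 1), (1, -1), (1, 1)]:
              shifted = np.roll(norm, (dy, dx), axis=(0, 1))
              sim = (norm * shifted).sum(axis=2)  # cosine similarity
              best = np.maximum(best, sim)
          return best

      img = np.random.rand(64, 64, 6)
      clusterable = max_neighbor_similarity(img) >= 0.99  # filtering criterion
      print(clusterable.mean())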

  16. Age and growth of the round stingray Urotrygon rogersi, a particularly fast-growing and short-lived elasmobranch.

    PubMed

    Mejía-Falla, Paola A; Cortés, Enric; Navia, Andrés F; Zapata, Fernando A

    2014-01-01

    We examined the age and growth of Urotrygon rogersi on the Colombian coast of the Eastern Tropical Pacific Ocean by directly estimating age using vertebral centra. We verified annual deposition of growth increments with marginal increment analysis. Eight growth curves were fitted to four data sets defined on the basis of the reproductive cycle (unadjusted or adjusted for age at first band) and size variables (disc width or total length). Model performance was evaluated using Akaike's Information Criterion (AIC), AIC weights and multi-model inference criteria. A two-phase growth function with adjusted age provided the best description of growth for females (based on five parameters, DW∞ = 20.1 cm, k = 0.22 yr⁻¹) and males (based on four and five parameters, DW∞ = 15.5 cm, k = 0.65 yr⁻¹). Median maturity of female and male U. rogersi is reached very fast (mean ± SE = 1.0 ± 0.1 year). This is the first age and growth study for a species of the genus Urotrygon and results indicate that U. rogersi attains a smaller maximum size and has a shorter lifespan and lower median age at maturity than species of closely related genera. These life history traits are in contrast with those typically reported for other elasmobranchs.

  17. Programmable fuzzy associative memory processor

    NASA Astrophysics Data System (ADS)

    Shao, Lan; Liu, Liren; Li, Guoqiang

    1996-02-01

    An optical system based on the method of spatial area-coding and multiple image scheme is proposed for fuzzy associative memory processing. Fuzzy maximum operation is accomplished by a ferroelectric liquid crystal PROM instead of a computer-based approach. A relative subsethood is introduced here to be used as a criterion for the recall evaluation.

  18. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime, where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
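
    The two selection criteria compared above have the standard forms (added for reference):

      \[
        \mathrm{AIC} = 2k - 2\ln\hat{L},
        \qquad
        \mathrm{BIC} = k\ln n - 2\ln\hat{L},
      \]

    where k is the number of model parameters, n the number of observed photons, and \(\hat{L}\) the maximized likelihood; the BIC's stronger penalty is what lets it reject over-parameterized kinetic models on short trajectories.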

  19. Automated thematic mapping and change detection of ERTS-A images. [digital interpretation of Arizona imagery

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurement of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with high accuracy. When the spatial features were combined with spectral features under the maximum likelihood criterion, the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors: nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariant for a period of a month but vary substantially between seasons.
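
    For reference, the maximum likelihood criterion used here is the standard Gaussian discriminant rule (our addition): assign a feature vector x to the class ω_i maximizing

      \[
        g_i(\mathbf{x}) = \ln P(\omega_i)
          - \tfrac{1}{2}\ln\lvert\boldsymbol{\Sigma}_i\rvert
          - \tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^{\mathsf{T}}
            \boldsymbol{\Sigma}_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i),
      \]

    which is optimal only for Gaussian class statistics, hence the need for the nonlinear feature transformations mentioned above.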

  20. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  1. Stability analysis of a run-of-river diversion hydropower plant with surge tank and spillway in the head pond.

    PubMed

    Sarasúa, José Ignacio; Elías, Paz; Martínez-Lucas, Guillermo; Pérez-Díaz, Juan Ignacio; Wilhelmi, José Román; Sánchez, José Ángel

    2014-01-01

    Run-of-river hydropower plants usually lack significant storage capacity; therefore, the most suitable control strategy consists of keeping a constant water level in the intake pond, in order to harness the maximum amount of energy from the river flow or to reduce the surface flooded in the head pond. In this paper, a standard PI control system of a run-of-river diversion hydropower plant with a surge tank and a spillway in the head pond that evacuates part of the river flow is studied. A stability analysis based on the Routh-Hurwitz criterion is carried out, and a practical criterion for tuning the gains of the PI controller is proposed. Conclusions about the head pond and surge tank areas are drawn from the stability analysis. Finally, this criterion is applied to a real hydropower plant at the design stage; the importance of considering the spillway dimensions and turbine characteristic curves for adequate tuning of the controller gains is highlighted.
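
    For context, the stability test being applied (standard form, our addition; the plant's actual characteristic polynomial may be of higher order): for a cubic closed-loop characteristic equation

      \[
        a_3 s^3 + a_2 s^2 + a_1 s + a_0 = 0,
      \]

    the Routh-Hurwitz criterion requires all a_i > 0 and a_2 a_1 > a_3 a_0; tuning the PI gains amounts to keeping the closed-loop coefficients inside this stability region.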

  2. Physical Employment Standards for UK Firefighters

    PubMed Central

    Stevenson, Richard D.M.; Siddall, Andrew G.; Turner, Philip F.J.; Bilzon, James L.J.

    2017-01-01

    Objective: The aim of this study was to assess sensitivity and specificity of surrogate physical ability tests as predictors of criterion firefighting task performance and to identify corresponding minimum muscular strength and endurance standards. Methods: Fifty-one (26 male; 25 female) participants completed three criterion tasks (ladder lift, ladder lower, ladder extension) and three corresponding surrogate tests [one-repetition maximum (1RM) seated shoulder press; 1RM seated rope pull-down; repeated 28 kg seated rope pull-down]. Surrogate test standards were calculated that best identified individuals who passed (sensitivity; true positives) and failed (specificity; true negatives) criterion tasks. Results: Best sensitivity/specificity achieved were 1.00/1.00 for a 35 kg seated shoulder press, 0.79/0.92 for a 60 kg rope pull-down, and 0.83/0.93 for 23 repetitions of the 28 kg rope pull-down. Conclusions: These standards represent performance on surrogate tests commensurate with minimum acceptable performance of essential strength-based occupational tasks in UK firefighters. PMID:28045801

  3. Modeling of weak blast wave propagation in the lung.

    PubMed

    D'yachenko, A I; Manyuhina, O V

    2006-01-01

    Blast injuries of the lung are the most life-threatening after an explosion. The choice of the physical parameters responsible for trauma is important for understanding its mechanism. We developed a one-dimensional linear model of elastic wave propagation in foam-like pulmonary parenchyma to identify the possible cause of edema due to impact load. The model demonstrates different injury localizations for free and rigid boundary conditions. The following parameters were considered: strain, velocity, pressure in the medium, stresses in structural elements, energy dissipation, and the viscous-criterion parameter. Maximum underpressure is the wave parameter most suitable to serve as the criterion for edema formation in a rabbit lung. We suppose that the observed scattering of experimental data on edema severity is induced by the physiological variety of rabbit lungs; the criterion and the model explain this scattering. The model outlines the demands on experimental data needed to make an unambiguous choice of the physical parameters responsible for lung trauma due to impact load.

  4. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times, and the maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem and compared with previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computations. The Wilcoxon paired-rank statistical test confirmed this conclusion.

  5. Raman mediated all-optical cascadable inverter using silicon-on-insulator waveguides.

    PubMed

    Sen, Mrinal; Das, Mukul K

    2013-12-01

    In this Letter, we propose an all-optical circuit for a cascadable and integrable logic inverter based on stimulated Raman scattering. A maximum-product criterion for the noise margin is used to analyze the cascadability of the inverter. The variation of the noise margin with different model parameters is also studied. Finally, the time-domain response of the inverter is analyzed for different widths of input pulses.

  6. TH-CD-202-11: Implications for Online Adaptive and Non-Adaptive Radiotherapy of Gastric and Gastroesophageal Junction Cancers Using MRI-Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittauer, K; Geurts, M; Toya, R

    Purpose: Radiotherapy for gastric and gastroesophageal junction (GEJ) tumors commonly requires large margins due to deformation, motion and variable changes of the stomach anatomy, at the risk of increased normal tissue toxicities. This work quantifies the interfraction variation of stomach deformation from daily MRI-guided radiotherapy to allow for a more targeted determination of margin expansion in the treatment of gastric and GEJ tumors. Methods: Five patients treated for gastric (n=3) and gastroesophageal junction (n=2) cancers with conventionally fractionated radiotherapy underwent daily MR imaging on a clinical MR-IGRT system. Treatment planning and contours were performed based on the MR simulation. The stomach was re-contoured on each daily volumetric setup MR. Dice similarity coefficients (DSC) of the daily stomach were computed to evaluate the stomach interfraction deformation. To evaluate the stomach margin, the maximum Hausdorff distance (HD) between the initial and fractional stomach surface was measured for each fraction. The margin expansion needed to encompass all fractions was evaluated from the union of all fractional stomachs. Results: In total, 94 fractions with daily stomach contours were evaluated. For the interfraction stomach differences, the average DSC was 0.67 ± 0.1 for gastric and 0.62 ± 0.1 for GEJ cases. The maximum HD of each fraction was 3.5 ± 2.0 cm (n=94) with a mean HD of 0.8 ± 0.4 cm (across all surface voxels for all fractions). The margin expansion required to encompass all individual fractions (averaged across 5 patients) was 1.4 cm (superior), 2.3 cm (inferior), 2.5 cm (right), 3.2 cm (left), 3.7 cm (anterior), 3.4 cm (posterior). The maximum observed difference for margin expansion was 8.7 cm (posterior) in one patient. Conclusion: We observed a notable interfractional change in daily stomach shape (i.e., mean DSC of 0.67, p<0.0001) in both gastric and GEJ patients, for which adaptive radiotherapy is indicated. A minimum PTV margin of 3 cm is indicated to account for interfraction stomach changes when adaptive radiotherapy is not available. M. Bassetti: Travel funding from ViewRay, Inc.
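
    A minimal sketch of the two interfraction metrics used above, on toy 3-D binary masks (random placeholders and a brute-force Hausdorff distance; clinical implementations use real contours and efficient distance transforms):

      import numpy as np

      def dice(a, b):
          # Dice similarity coefficient between two binary masks.
          return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def max_hausdorff(a, b, spacing):
          # Symmetric Hausdorff distance between voxel sets, in mm.
          pa = np.argwhere(a) * spacing
          pb = np.argwhere(b) * spacing
          d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
          return max(d.min(axis=1).max(), d.min(axis=0).max())

      rng = np.random.default_rng(1)
      plan = rng.random((12, 12, 12)) > 0.7    # planning-day stomach mask
      daily = rng.random((12, 12, 12)) > 0.7   # daily setup stomach mask
      spacing = np.array([3.0, 3.0, 3.0])      # voxel size in mm
      print(dice(plan, daily), max_hausdorff(plan, daily, spacing))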

  7. A comparison of the marginal adaptation of cathode-arc vapor-deposited titanium and cast base metal copings

    PubMed Central

    Wu, JC; Lai, LC; Sheets, CG; Earthman, J; Newcomb, R

    2011-01-01

    Statement of problem A new fabrication process has been developed where a titanium coping, which has a gold colored titanium nitride outer layer, can be reliably fused to porcelain, but the marginal adaptation characteristics are still undetermined. Purpose The primary purpose of this study is to compare the rate of Clinically Acceptable Marginal Adaptation (CAMA, defined as a marginal gap mean ≤60 μm) of cathode-arc vapor-deposited titanium with the CAMA rate for the cast base metal copings. In addition, the study will evaluate the marginal gap scores themselves to assess their mean difference between the two study groups. Finally, the study will present two analyses of group differences in variability to support the contention that the titanium copings perform more consistently than their base metal counterparts. Material and methods Thirty-seven cathode-arc vapor-deposited titanium copings and 40 cast base metal copings were evaluated by computer-based image analysis using an optical microscope. The conventional lost wax technique was used to fabricate the 40 cast base metal copings that were 0.3 mm thick. The titanium copings were 0.3 mm thick and were formed by a collection of atomic titanium vapor onto a refractory die duplicate in a high vacuum chamber. Fifty vertical marginal gap measurements were collected from each of the 77 copings and the mean of these measurements was computed to form a gap score for each coping. Next, the gap score was compared to the 60 μm criterion to classify each coping as to whether it did or did not achieve Clinically Acceptable Marginal Adaptation (CAMA). A comparison of the CAMA rates for each type of coping was used to address the primary purpose of this study. In addition, the gap scores themselves were used to test the (one-sided) hypothesis that the mean of the titanium gap scores is smaller than the mean of the base metal gap scores. Finally, the assertion that the titanium copings provide more consistency in their marginal gap performance was tested in two ways. First, the means of the titanium gap scores were compared to the means of the marginal gap scores for the base metal copings. Second, the standard deviations of the marginal gap scores for the titanium copings were compared with those for the base metal copings. Results Statistical comparison of the CAMA rates for each type of coping showed that the CAMA criterion was achieved by 24 of the 37 (64.86%) titanium copings, while 19 of the 40 (47.50%) base metal copings met this same standard. Noninferiority of the titanium copings was established by the 2-sided 90% Confidence Interval for the 17.36% difference in these rates (−0.95%, 35.68%) and noninferiority of titanium coping adaptation was also demonstrated by the Wald Test rejection of the tentative hypothesis of inferiority (Z-score=1.9191, one-sided p=0.0275). The mean of the vertical marginal gap scores for the titanium copings (56.9025) was significantly less than the mean of the marginal gap scores for the base metal copings (71.9041) as shown by the Satterthwaite t-score=−2.29 (one-sided p=0.0126). To compare the adaptation consistency of the titanium copings to that of the base metal counterparts, the difference between the variance of the marginal gap scores for the titanium copings (594.843) and the variance of the marginal gap scores for the base metal copings (1510.901) was found to be statistically significant (Folded-F test score=2.63, p=0.0042).
    Our second method for showing that the titanium copings performed more consistently than the base metal comparisons was to use a one-sided test to show that the mean of the standard deviations of the vertical gap measurements for each titanium coping (29.9835) was significantly lower than the mean of the standard deviations of the vertical gap measurements for each base metal coping (36.1332). This test produced a Satterthwaite's t-score of −2.24 (one-sided p=0.0141), indicating the titanium adaptation was significantly more consistent. Conclusions Cathode-arc vapor deposited titanium copings exhibited a higher rate of Clinically Acceptable Marginal Adaptation (CAMA) than the comparison base metal copings. Comparison of the coping marginal adaptation score variances and direct assessment of the coping marginal adaptation scores provided additional evidence that the titanium copings performed better and with more consistency than their base metal counterparts. PMID:21640242

  8. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
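
    A minimal sketch of the iterative proportional fitting step named above, for a two-way table (made-up marginal sums; LOGIMO itself handles general loglinear and IRT models):

      import numpy as np

      # Scale rows and columns alternately until the fitted table matches
      # the observed marginal sums.
      observed_rows = np.array([30.0, 70.0])
      observed_cols = np.array([40.0, 60.0])
      fit = np.ones((2, 2))

      for _ in range(100):
          fit *= (observed_rows / fit.sum(axis=1))[:, None]  # match row sums
          fit *= (observed_cols / fit.sum(axis=0))[None, :]  # match column sums

      print(fit)  # row sums -> observed_rows, column sums -> observed_cols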

  9. Miocene-Recent sediment flux in the south-central Alaskan fore-arc basin governed by flat-slab subduction

    NASA Astrophysics Data System (ADS)

    Finzel, Emily S.; Enkelmann, Eva

    2017-04-01

    The Cook Inlet in south-central Alaska contains the early Oligocene to Recent stratigraphic record of a fore-arc basin adjacent to a shallowly subducting oceanic plateau. Our new measured stratigraphic sections and detrital zircon U-Pb geochronology and Hf isotopes from Neogene strata and modern rivers illustrate the effects of flat-slab subduction on the depositional environments, provenance, and subsidence in fore-arc sedimentary systems. During the middle Miocene, fluvial systems emerged from the eastern, western, and northern margins of the basin. The axis of maximum subsidence was near the center of the basin, suggesting equal contributions from subsidence drivers on both margins. By the late Miocene, the axis of maximum subsidence had shifted westward and fluvial systems originating on the eastern margin of the basin above the flat-slab traversed the entire width of the basin. These mud-dominated systems reflect increased sediment flux from recycling of accretionary prism strata. Fluvial systems with headwaters above the flat-slab region continued to cross the basin during Pliocene time, but a change to sandstone-dominated strata with abundant volcanogenic grains signals a reactivation of the volcanic arc. The axis of maximum basin subsidence during late Miocene to Pliocene time is parallel to the strike of the subducting slab. Our data suggest that the character and strike-orientation of the down-going slab may provide a fundamental control on the nature of depositional systems, location of dominant provenance regions, and areas of maximum subsidence in fore-arc basins.

  10. The use of functionally graded dental crowns to improve biocompatibility: a finite element analysis.

    PubMed

    Mahmoudi, Mojtaba; Saidi, Ali Reza; Hashemipour, Maryam Alsadat; Amini, Parviz

    2018-02-01

    In post-core crown restorations, the significant mismatch between the stiffness of artificial crowns and that of dental tissues leads to stress concentration at the interfaces. The aim of the present study was to reduce these destructive stresses by using a class of inhomogeneous materials called functionally graded materials (FGMs). For the purpose of the study, a 3-dimensional computer model of a premolar tooth and its surrounding tissues was generated. Post-core crown restorations with various crown materials, homogeneous and FGM, were simulated and analyzed by the finite element method. Finite element and statistical analysis showed that, in the case of oblique loading, a significant difference (p < 0.05) was found in the maximum von Mises stresses at the crown margin between FGM and homogeneous crowns. The maximum von Mises stress at the crown margin generated by FGM crowns was lower than that generated by homogeneous crowns (46.3 vs. 70.8 MPa), and the alumina crown resulted in the highest von Mises stress at the crown margin (77.7 MPa). Crown materials of high modulus of elasticity produced high stresses at the cervical region. FGM crowns may reduce the stress concentration at the cervical margins and consequently reduce the possibility of fracture.

  11. Maximum Margin Clustering of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been regarded as the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, so they cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm yields acceptable results for hyperspectral data clustering.
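
    A minimal sketch of the alternating-optimization idea (our construction on toy data, with scikit-learn's LinearSVC standing in for the paper's SVM solver; a real MMC implementation enforces an explicit class-balance constraint rather than the simple check used here):

      import numpy as np
      from sklearn.svm import LinearSVC

      # Alternate between (1) training an SVM on the current label guess
      # and (2) relabeling points with the SVM's own predictions.
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(-2, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
      y = rng.integers(0, 2, len(X))       # random initial labels

      for _ in range(20):
          svm = LinearSVC().fit(X, y)
          new_y = svm.predict(X)
          if len(set(new_y)) < 2:          # degenerate one-cluster solution
              break
          if (new_y == y).all():           # converged
              break
          y = new_y

      print("cluster sizes:", np.bincount(y))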

  12. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further employ the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, in performing PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836

  13. The Mohr-Coulomb criterion for intact rock strength and friction - a re-evaluation and consideration of failure under polyaxial stresses

    NASA Astrophysics Data System (ADS)

    Hackston, Abigail; Rutter, Ernest

    2016-04-01

    Darley Dale and Pennant sandstones were tested under conditions of both axisymmetric shortening and extension normal to bedding. These are the two extremes of loading under polyaxial stress conditions. Failure under generalized stress conditions can be predicted from the Mohr-Coulomb failure criterion under axisymmetric shortening conditions, provided the best form of polyaxial failure criterion is known. The sandstone data are best reconciled using the Mogi (1967) empirical criterion. Fault plane orientations produced vary greatly with respect to the maximum compressive stress direction in the two loading configurations. The normals to the Mohr-Coulomb failure envelopes do not predict the orientations of the fault planes eventually produced. Frictional sliding on variously inclined saw cuts and failure surfaces produced in intact rock samples was also investigated. Friction coefficient is not affected by fault plane orientation in a given loading configuration, but friction coefficients in extension were systematically lower than in compression for both rock types. Friction data for these and other porous sandstones accord well with the Byerlee (1978) generalization about rock friction being largely independent of rock type. For engineering and geodynamic modelling purposes, the stress-state-dependent friction coefficient should be used for sandstones, but it is not known to what extent this might apply to other rock types.

  14. Assessment of released heavy metals from electrical and electronic equipment (EEE) existing in shipwrecks through laboratory-scale simulation reactor.

    PubMed

    Hahladakis, John N; Stylianos, Michailakis; Gidarakos, Evangelos

    2013-04-15

    In a passenger ship, the existence of EEE is obvious. In time, under shipwreck conditions, all these materials undergo accelerated, severe corrosion due to salt water, consequently releasing heavy metals and other hazardous substances into the aquatic environment. In this study, a laboratory-scale reactor was manufactured in order to simulate the conditions under which the "Sea Diamond" shipwreck lies (14 bars of pressure and 16°C temperature) and to remotely observe and assess any release of heavy metals into the sea from part of the EEE present in the ship. Ten metals were examined, and the results showed that zinc, mercury and copper were abundant in the water samples taken from the reactor, at concentrations significantly higher than the US EPA criterion maximum concentration (CMC). Moreover, nickel and lead were found at concentrations higher than the criterion continuous concentration (CCC) set by the US EPA for clean seawater. The rest of the elements were measured at concentrations within the permissible limits. It is therefore of environmental benefit to salvage the wreck and recycle all the WEEE found in it. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Approximation for the Rayleigh Resolution of a Circular Aperture

    ERIC Educational Resources Information Center

    Mungan, Carl E.

    2009-01-01

    Rayleigh's criterion states that a pair of point sources are barely resolved by an optical instrument when the central maximum of the diffraction pattern due to one source coincides with the first minimum of the pattern of the other source. As derived in standard introductory physics textbooks, the first minimum for a rectangular slit of width "a"…
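
    For completeness, since the record is truncated (standard textbook results): the first-minimum conditions behind the criterion are

      \[
        \sin\theta = \frac{\lambda}{a} \quad\text{(slit of width } a\text{)},
        \qquad
        \sin\theta = 1.22\,\frac{\lambda}{D} \quad\text{(circular aperture of diameter } D\text{)},
      \]

    the circular-aperture factor 1.22 coming from the first zero of the Airy diffraction pattern.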

  16. 34 CFR 642.22 - How does the Secretary evaluate prior experience?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Secretary may add from 1 to 15 points to the point score obtained on the basis of the selection criteria in § 642.21, based on the applicant's success in meeting the administrative requirements and programmatic objectives of paragraph (e) of this section. (2) The maximum possible score for each criterion is indicated...

  17. New true-triaxial rock strength criteria considering intrinsic material characteristics

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Li, Cheng; Quan, Xiaowei; Wang, Yanning; Yu, Liyuan; Jiang, Binsong

    2018-02-01

    A reasonable strength criterion should reflect the hydrostatic pressure effect, the minimum principal stress effect, and the intermediate principal stress effect. The former two effects can be described by the meridian curves, while the last depends mainly on the Lode angle dependence function. Among the three conventional strength criteria, i.e., the Mohr-Coulomb (MC), Hoek-Brown (HB), and Exponent (EP) criteria, the difference between the generalized compression and extension strengths of the EP criterion first increases and then decreases, tending to zero when the hydrostatic pressure is large enough. This is in accordance with intrinsic rock strength characteristics. Moreover, the critical hydrostatic pressure I_c, corresponding to the maximum difference between the generalized compression and extension strengths, can be easily adjusted through the minimum principal stress influence parameter K. The exponent function is therefore a more reasonable meridian curve; it reflects the hydrostatic pressure effect well and is employed to describe the generalized compression and extension strengths. Meanwhile, three Lode angle dependence functions, L_MN, L_WW, and L_YMH, which unconditionally satisfy the convexity and differentiability requirements, are employed to represent the intermediate principal stress effect. Since the actual strength surface should lie between the generalized compression and extension surfaces, new true-triaxial criteria are proposed by combining these two states of the EP criterion through a Lode angle dependence function evaluated at the same Lode angle. The proposed new true-triaxial criteria have the same strength parameters as the EP criterion. Finally, 14 groups of triaxial test data are employed to validate the proposed criteria. The results show that the three new true-triaxial exponent criteria, especially the Exponent Willam-Warnke (EPWW) criterion, give much lower misfits, which illustrates that the EP criterion and L_WW have the more reasonable meridian and deviatoric function forms, respectively. The proposed new true-triaxial strength criteria can provide a theoretical foundation for stability analysis and optimization of support design in rock engineering.

  18. Mechanisms of deformation and fracture in high temperature low cycle fatigue of Rene 80 and IN 100

    NASA Technical Reports Server (NTRS)

    Romanoski, G. R., Jr.

    1982-01-01

    Specimens tested for the AGARD strain range partitioning program were investigated. Rene 80 and IN 100 were tested in air and in vacuum; at 871 °C, 925 °C, and 1000 °C; and in the coated and uncoated condition. The specimens exhibited multiple forms of high-temperature low-cycle fatigue damage. Observations of the various forms of damage were consistent with material and testing conditions and were generally in agreement with previous studies. In every case the observations support the contention that failure occurs at a particular combination of crack length and maximum stress. A failure criterion applicable in the regime of testing studied is presented. The predictive capabilities of this criterion are straightforward.

  19. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted to the classical flowshop scheduling problem and to the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  1. The northern Uummannaq Ice Stream System, West Greenland: ice dynamics and controls upon deglaciation

    NASA Astrophysics Data System (ADS)

    Lane, Timothy; Roberts, David; Rea, Brice; Cofaigh, Colm Ó.; Vieli, Andreas

    2013-04-01

    At the Last Glacial Maximum (LGM), the Uummannaq Ice Stream System comprised a series of coalescent outlet glaciers which extended along the trough to the shelf edge, draining a large proportion of the West Greenland Ice Sheet. Geomorphological mapping, terrestrial cosmogenic nuclide (TCN) exposure dating, and radiocarbon dating constrain warm-based ice stream activity in the north of the system to 1400 m a.s.l. during the LGM. Intervening plateau areas (~2000 m a.s.l.) either remained ice free or were covered by cold-based icefields, preventing diffluent or confluent flow throughout the inner to outer fjord region. Beyond the fjords, a topographic sill north of Ubekendt Ejland prevented the majority of westward ice flow, forcing it south through Igdlorssuit Sund and into the Uummannaq Trough, where it coalesced with ice from the south, forming the trunk zone of the UISS. Deglaciation of the UISS began at 14.9 cal. ka BP, with rapid retreat through the overdeepened Uummannaq Trough. Once beyond Ubekendt Ejland, the northern UISS retreated northwards, separating from the south. Retreat continued, and ice reached the present fjord confines in northern Uummannaq by 11.6 kyr. Both geomorphological (termino-lateral moraines) and geochronological (14C and TCN) data provide evidence for an ice-marginal stabilisation within Karrat-Rink Fjord, at Karrat Island, from 11.6 to 6.9 kyr. The Karrat moraines appear similar in both fjord position and form to 'Fjord Stade' moraines identified throughout West Greenland. Though the chronologies constraining moraine formation overlap (Fjord Stade moraines 9.3-8.2 kyr, Karrat moraines 11.6-6.9 kyr), these moraines have not been correlated. This ice-margin stabilisation was able to persist during the Holocene Thermal Maximum (~7.2-5 kyr). It overrode climatic and oceanic forcings, remaining on Karrat Island throughout peaks of air temperature and relative sea level, and during the influx of the warm West Greenland Current into the Uummannaq region. Analysis of fjord bathymetry and width shows that this ice-marginal stabilisation was caused by increased topographic constriction at Karrat Island: the location of the marginal stillstand coincides with a dramatic narrowing of the fjord and a shallowing of the bed. These increases in local lateral resistance reduce the ice flux necessary to maintain a stable grounding line, leading to ice-margin stabilisation, which acted to negate the effects of the Holocene Thermal Maximum. Following this stabilisation, retreat within Rink-Karrat Fjord continued, driven by calving into the overdeepened Rink Fjord. Rink Isbræ reached its present ice margin, or beyond, after 5 kyr, during the Neoglacial. In contrast, the southern UISS reached its present margin at 8.7 kyr and Jakobshavn Isbræ reached its margin by 7 kyr. This work therefore provides compelling evidence for topographically forced asynchronous, non-linear ice stream retreat between outlet glaciers in West Greenland. In addition, it has major implications for our understanding and reconstruction of mid-Holocene ice sheet extent, and of ice sheet dynamics during the Holocene Thermal Maximum to Neoglacial switch.

  2. A quantitative analysis of transtensional margin width

    NASA Astrophysics Data System (ADS)

    Jeanniot, Ludovic; Buiter, Susanne J. H.

    2018-06-01

    Continental rifted margins show conjugate widths varying from a few hundred to almost a thousand kilometres between the relatively undisturbed continent and the oceanic crust. Analogue and numerical modelling results suggest that the conjugate width of rifted margins may be related to the obliquity of divergence, with narrower margins occurring for higher obliquity. We here test this prediction by analysing the obliquity and rift width for 26 segments of transtensional conjugate rifted margins in the Atlantic and Indian Oceans. We use the plate reconstruction software GPlates (http://www.gplates.org) with different plate rotation models to estimate the direction and magnitude of rifting from the initial phases of continental rifting until breakup. Our rift width corresponds to the distance between the onshore maximum topography and the last identified continental crust. We find a weak correlation between the obliquity of rifting and rift width: highly oblique margins tend to be narrower than orthogonal margins, as expected from analogue and numerical models. We find no relationship between rift obliquity and rift duration, nor with the presence or absence of Large Igneous Provinces (LIPs).

  3. EPRB Gedankenexperiment and Entanglement with Classical Light Waves

    NASA Astrophysics Data System (ADS)

    Rashkovskiy, Sergey A.

    2018-06-01

    In this article we show that results similar to those of the Einstein-Podolsky-Rosen-Bohm (EPRB) Gedankenexperiment and entanglement of photons can be obtained using weak classical light waves if we take into account the discrete (atomic) structure of the detectors and the specific nature of the light-atom interaction. We show that the CHSH (Clauser, Horne, Shimony, and Holt) criterion in the EPRB Gedankenexperiment with classical light waves can exceed not only the maximum value S_HV = 2 predicted by local hidden-variable theories but also the maximum value S_QM = 2√2 predicted by quantum mechanics.
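
    For reference, the CHSH bookkeeping behind the quoted bounds can be checked in a few lines; a minimal sketch using the textbook polarization correlation E(a,b) = cos 2(a-b), which saturates S_QM = 2√2 at the canonical analyzer angles (this illustrates the criterion itself, not the paper's classical-detector model):

      import numpy as np

      def E(a: float, b: float) -> float:
          """Polarization correlation for analyzer angles a, b (radians)."""
          return np.cos(2 * (a - b))

      # Canonical CHSH angles: a=0, a'=45 deg, b=22.5 deg, b'=67.5 deg.
      a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
      S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
      print(S, 2 * np.sqrt(2))   # both ~2.828, above the local bound S_HV = 2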

  4. Failure Assessment of Brazed Structures

    NASA Technical Reports Server (NTRS)

    Flom, Yuri

    2012-01-01

    Despite the great advances in analytical methods available to structural engineers, designers of brazed structures have great difficulty in addressing fundamental questions related to the load-carrying capabilities of brazed assemblies. In this chapter we will review why such common engineering tools as Finite Element Analysis (FEA) as well as many well-established failure theories (Tresca, von Mises, Highest Principal Stress, etc.) don't work well for brazed joints. This chapter will show how the classic approach of using interaction equations and the lesser-known Coulomb-Mohr failure criterion can be employed to estimate Margins of Safety (MS) in brazed joints.
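
    A minimal sketch of a Coulomb-Mohr margin-of-safety check for a joint under combined normal and shear stress; the stress state and the allowables are illustrative assumptions, not the chapter's worked example:

      import math

      def coulomb_mohr_ms(sigma: float, tau: float, s_ut: float, s_uc: float) -> float:
          """Margin of safety MS = n - 1 from the Coulomb-Mohr criterion.
          sigma: normal stress, tau: shear stress, s_ut/s_uc: ultimate tensile/
          compressive strengths (same units; s_uc given as a positive number)."""
          center = sigma / 2.0
          radius = math.hypot(center, tau)
          s1, s3 = center + radius, center - radius   # principal stresses
          n = 1.0 / (s1 / s_ut - s3 / s_uc)           # Coulomb-Mohr safety factor
          return n - 1.0

      # Illustrative braze joint: 40 MPa tension with 25 MPa shear,
      # assumed allowables 120 MPa (tension) and 300 MPa (compression).
      print(coulomb_mohr_ms(40.0, 25.0, 120.0, 300.0))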

  5. Intra-fraction motion of larynx radiotherapy

    NASA Astrophysics Data System (ADS)

    Durmus, Ismail Faruk; Tas, Bora

    2018-02-01

    In early-stage laryngeal radiotherapy, movement is an important factor. The thyroid cartilage can move with swallowing, breathing, phonation and reflexes. In this study, the effects of this motion on the planning target volume (PTV) during treatment were examined, setup margins were re-evaluated, and patient-based PTV margins were determined. Intrafraction cone-beam CT (CBCT) scans were acquired in 246 fractions for 14 patients. The deviation along the lateral, vertical and longitudinal axes was determined during treatment. Deviations of ≤ ±0.1 cm occurred in 237 fractions in the lateral direction, 202 fractions in the longitudinal direction, and 185 fractions in the vertical direction. The maximum deviation values were found in the longitudinal direction. With intrafraction guidance in laryngeal radiotherapy, we can verify the correctness of the treatment, adjust the margin and dose to the target volume more precisely, and control the maximum deviation of the target volume for each fraction. Although the image quality of intrafraction-CBCT scans was lower than the image quality of the planning CT, they showed sufficient contrast for this work.
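
    Patient-based PTV margins are commonly derived from such deviation data with a population margin recipe; a minimal sketch assuming the widely used van Herk formula M = 2.5Σ + 0.7σ (an assumption here; the abstract does not state which recipe was used, and the shift values below are invented):

      import numpy as np

      # Hypothetical per-fraction deviations (cm) in one axis, by patient.
      shifts = {
          "p01": [0.05, -0.10, 0.00, 0.12, -0.04],
          "p02": [-0.02, 0.08, 0.15, -0.06, 0.03],
          "p03": [0.20, 0.11, 0.04, 0.09, 0.16],
      }

      means = np.array([np.mean(v) for v in shifts.values()])
      sds = np.array([np.std(v, ddof=1) for v in shifts.values()])

      Sigma = means.std(ddof=1)            # systematic error: SD of patient means
      sigma = np.sqrt((sds**2).mean())     # random error: RMS of patient SDs
      margin = 2.5 * Sigma + 0.7 * sigma   # van Herk PTV margin recipe
      print(f"PTV margin ~ {margin:.2f} cm")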

  6. On Determining the Rise, Size, and Duration Classes of a Sunspot Cycle

    NASA Astrophysics Data System (ADS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-09-01

    The behavior of ascent duration, maximum amplitude, and period for cycles 1 to 21 suggests that they are not mutually independent. Analysis of the resultant three-dimensional contingency table for cycles divided according to rise time (ascent duration), size (maximum amplitude), and duration (period) yields a chi-square statistic (= 18.59) that is larger than the test statistic (= 9.49 for 4 degrees of freedom at the 5-percent level of significance), thereby implying that the null hypothesis (mutual independence) can be rejected. Analysis of individual 2 by 2 contingency tables (based on Fisher's exact test) for these parameters shows that, while ascent duration is strongly related to maximum amplitude in the negative sense (inverse correlation) - the Waldmeier effect - it also is related (marginally) to period, but in the positive sense (direct correlation). No significant (or marginally significant) correlation is found between period and maximum amplitude. Using cycle 22 as a test case, we show that by the 12th month following conventional onset, cycle 22 appeared highly likely to be a fast-rising, larger-than-average-size cycle. Because of the inferred correlation between ascent duration and period, it also seems likely that it will have a period of shorter than average length.
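
    The contingency-table machinery used here is standard; a minimal sketch with a hypothetical 2 by 2 table of cycles classed by rise time and size (counts invented for illustration):

      import numpy as np
      from scipy import stats

      # Hypothetical 2x2 table: rows = fast/slow rise, cols = large/small size,
      # illustrating a Waldmeier-like association.
      table = np.array([[8, 2],
                        [3, 8]])

      chi2, p, dof, _ = stats.chi2_contingency(table)
      odds, p_exact = stats.fisher_exact(table)   # small counts: exact test
      print(chi2, p, p_exact)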

  7. On Determining the Rise, Size, and Duration Classes of a Sunspot Cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The behavior of ascent duration, maximum amplitude, and period for cycles 1 to 21 suggests that they are not mutually independent. Analysis of the resultant three-dimensional contingency table for cycles divided according to rise time (ascent duration), size (maximum amplitude), and duration (period) yields a chi-square statistic (= 18.59) that is larger than the test statistic (= 9.49 for 4 degrees of freedom at the 5-percent level of significance), thereby implying that the null hypothesis (mutual independence) can be rejected. Analysis of individual 2 by 2 contingency tables (based on Fisher's exact test) for these parameters shows that, while ascent duration is strongly related to maximum amplitude in the negative sense (inverse correlation) - the Waldmeier effect - it also is related (marginally) to period, but in the positive sense (direct correlation). No significant (or marginally significant) correlation is found between period and maximum amplitude. Using cycle 22 as a test case, we show that by the 12th month following conventional onset, cycle 22 appeared highly likely to be a fast-rising, larger-than-average-size cycle. Because of the inferred correlation between ascent duration and period, it also seems likely that it will have a period of shorter than average length.

  8. System Architecture of Small Unmanned Aerial System for Flight Beyond Visual Line-of-Sight

    DTIC Science & Technology

    2015-09-17

    Received signal strength is computed from the link budget terms: PT = transmitter power (dBm), GT = transmitter antenna gain (dBi), LT = transmitter loss (dB), Lp = propagation loss (dB), GR = receiver antenna gain (dBi), LR = receiver losses (dB), and Lm = link margin (dB). The maximum range is determined by four components: 1) transmission, 2) propagation, 3) reception, and 4) link margin.
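
    A minimal sketch of this link-budget bookkeeping, assuming a free-space path-loss model for the propagation term Lp and illustrative radio parameters (not the report's actual system values):

      import math

      def received_power_dbm(pt_dbm, gt_dbi, lt_db, lp_db, gr_dbi, lr_db):
          """Received signal strength from the link budget terms defined above."""
          return pt_dbm + gt_dbi - lt_db - lp_db + gr_dbi - lr_db

      def fspl_db(d_km: float, f_mhz: float) -> float:
          """Free-space path loss, one common model for the propagation loss Lp."""
          return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

      # Illustrative 915 MHz link: 30 dBm out, 2 dBi antennas, 1 dB loss each end.
      sensitivity_dbm, lm_db = -101.0, 10.0   # receiver floor plus link margin
      for d in (5, 10, 20, 40):
          pr = received_power_dbm(30, 2, 1, fspl_db(d, 915), 2, 1)
          print(d, "km:", round(pr, 1), "dBm, closes:", pr - lm_db >= sensitivity_dbm)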

  9. Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models

    PubMed Central

    Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark

    2015-01-01

    This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart. PMID:25909047
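
    The IPW idea that the TMLE is compared against reduces, at a single time point, to a few lines; a minimal sketch with simulated data (the paper's longitudinal, right-censored setting is substantially more involved):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical data: baseline covariate W, binary treatment A, outcome Y.
      rng = np.random.default_rng(0)
      n = 5000
      W = rng.normal(size=n)
      A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * W)))
      Y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * A - 0.4 * W))))

      # Propensity score g(W) = P(A=1|W), fit by logistic regression.
      g = LogisticRegression().fit(W[:, None], A).predict_proba(W[:, None])[:, 1]

      # Horvitz-Thompson style IPW estimates of the counterfactual means.
      mu1 = np.mean(A * Y / g)
      mu0 = np.mean((1 - A) * Y / (1 - g))
      print("IPW risk difference:", mu1 - mu0)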

  10. Development and validation of a new instrument for testing functional health literacy in Japanese adults.

    PubMed

    Nakagami, Katsuyuki; Yamauchi, Toyoaki; Noguchi, Hiroyuki; Maeda, Tohru; Nakagami, Tomoko

    2014-06-01

    This study aimed to develop a reliable and valid measure of functional health literacy in a Japanese clinical setting. Test development consisted of three phases: generation of an item pool, consultation with experts to assess content validity, and comparison with external criteria (the Japanese Health Knowledge Test) to assess criterion validity. A trial version of the test was administered to 535 Japanese outpatients. Internal consistency reliability, calculated by Cronbach's alpha, was 0.81, and concurrent validity was moderate. Receiver Operating Characteristics and Item Response Theory were used to classify patients as having adequate, marginal, or inadequate functional health literacy. Both inadequate and marginal functional health literacy were associated with older age, lower income, lower educational attainment, and poor health knowledge. The time required to complete the test was 10-15 min. This test should enable health workers to better identify patients with inadequate health literacy. © 2013 Wiley Publishing Asia Pty Ltd.
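
    Cronbach's alpha, used above for internal consistency, is simple to compute; a minimal sketch with simulated item scores driven by a latent trait (all data invented):

      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: (n_respondents, k_items) score matrix."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      # Five 0/1 items correlated through a common latent ability.
      rng = np.random.default_rng(1)
      ability = rng.normal(size=(100, 1))
      scores = (ability + rng.normal(scale=1.0, size=(100, 5)) > 0).astype(float)
      print(cronbach_alpha(scores))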

  11. 46 CFR 170.173 - Criterion for vessels of unusual proportion and form.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... routes, there must be— (i) Positive righting arms to at least 35 degrees of heel; (ii) No down flooding... following angles: (A) Angle of maximum righting arm. (B) Angle of down flooding. (C) 40 degrees. (2) For... flooding point to at least 15 degrees; and (iii) At least 10 foot-degrees of energy to the smallest of the...

  12. 46 CFR 170.173 - Criterion for vessels of unusual proportion and form.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... routes, there must be— (i) Positive righting arms to at least 35 degrees of heel; (ii) No down flooding... following angles: (A) Angle of maximum righting arm. (B) Angle of down flooding. (C) 40 degrees. (2) For... flooding point to at least 15 degrees; and (iii) At least 10 foot-degrees of energy to the smallest of the...

  13. 46 CFR 170.173 - Criterion for vessels of unusual proportion and form.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... routes, there must be— (i) Positive righting arms to at least 35 degrees of heel; (ii) No down flooding... following angles: (A) Angle of maximum righting arm. (B) Angle of down flooding. (C) 40 degrees. (2) For... flooding point to at least 15 degrees; and (iii) At least 10 foot-degrees of energy to the smallest of the...

  14. External Catalyst Breakup Phenomena

    DTIC Science & Technology

    1976-06-01

    catalyst particle can cause high internal pressures which result in particle destruction. Analytical results suggest that erosion effects from solid... mechanisms. * Pressure Forces. High G loadings and bed pressure drops should be avoided. Bed pre-loads should be kept at a minimum value. Thruster... 5.2.7.1 Failure Theories; 5.2.7.2 Maximum Tension Stress Criterion; 5.2.7.3 Distortion Energy Approach

  15. Examination of the Views of High School Teachers and Students with Regard to Discipline Perception and Discipline Problems

    ERIC Educational Resources Information Center

    Sadik, Fatma; Yalcin, Onur

    2018-01-01

    This research is a qualitative study comparatively examining the views of high school teachers and students related to discipline perception and discipline problems. The study was carried out at a vocational school during the 2014/2015 school term. Maximum diversity and criterion sampling methods were followed for the formation of the study…

  16. [Biaggi law, transformation of the labor market, protection of health at the workplace].

    PubMed

    Menegozzo, M; Diglio, G; Canfora, M L; Menegozzo, S; Quagliuolo, R

    2003-01-01

    The new legislation on the labor market (Biagi Law, Ministerial Decree approved by the Council of Ministers on 06.06.2003) introduces new contractual profiles shaped by the criterion of maximum mobility and flexibility. This new legislation is not accompanied by parallel legislation that guarantees occupational safety and the protection of the new professional figures.

  17. Validity of the inexpensive Stepping Meter in counting steps in free living conditions: a pilot study

    PubMed Central

    De Cocker, K; Cardon, G; De Bourdeaudhuij, I

    2006-01-01

    Objectives To evaluate if inexpensive Stepping Meters are valid in counting steps in adults in free living conditions. Methods For six days, 35 healthy volunteers wore a criterion Yamax Digiwalker and five Stepping Meters every day until all 973 pedometers had been tested. Steps were recorded daily, and the differences between counts from the Digiwalker and the Stepping Meter were expressed as a percentage of the valid value of the Digiwalker step counts. The criterion used to determine if a Stepping Meter was valid was a maximum deviation of 10% from the Digiwalker step counts. Results A total of 252 (25.9%) Stepping Meters met the criterion, whereas 74.1% made an overestimation or underestimation of more than 10%. In more than one third (36.6%) of the invalid Stepping Meters, the deviation was greater than 50%. Most (64.8%) of the invalid pedometers overestimated the actual steps taken. Conclusions Inexpensive Stepping Meters cannot be used in community interventions as they will give participants the wrong message. PMID:16790485

  18. Fragment Production and Survival in Irradiated Disks: A Comprehensive Cooling Criterion

    NASA Astrophysics Data System (ADS)

    Kratter, Kaitlin M.; Murray-Clay, Ruth A.

    2011-10-01

    Accretion disks that become gravitationally unstable can fragment into stellar or substellar companions. The formation and survival of these fragments depends on the precarious balance between self-gravity, internal pressure, tidal shearing, and rotation. Disk fragmentation depends on two key factors: (1) whether the disk can get to the fragmentation boundary of Q = 1 and (2) whether fragments can survive for many orbital periods. Previous work suggests that to reach Q = 1, and have fragments survive, a disk must cool on an orbital timescale. Here we show that disks heated primarily by external irradiation always satisfy the standard cooling time criterion. Thus, even though irradiation heats disks and makes them more stable in general, once they reach the fragmentation boundary, they fragment more easily. We derive a new cooling criterion that determines fragment survival and calculate a pressure-modified Hill radius, which sets the maximum size of pressure-supported objects in a Keplerian disk. We conclude that fragmentation in protostellar disks might occur at slightly smaller radii than previously thought and recommend tests for future simulations that will better predict the outcome of fragmentation in real disks.
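
    The fragmentation boundary Q = 1 refers to the Toomre parameter Q = c_s Ω / (π G Σ) for a Keplerian disk; a minimal sketch evaluating it for illustrative disk values (not the paper's models):

      import numpy as np

      G = 6.674e-8            # cgs
      M_sun = 1.989e33        # g
      AU = 1.496e13           # cm

      def toomre_q(c_s, sigma, r, m_star):
          """Q = c_s * Omega / (pi * G * Sigma) for a Keplerian disk (cgs units)."""
          omega = np.sqrt(G * m_star / r**3)
          return c_s * omega / (np.pi * G * sigma)

      # Illustrative values: 0.4 km/s sound speed, 100 g/cm^2 at 50 AU, 1 M_sun.
      print(toomre_q(4.0e4, 100.0, 50 * AU, 1.0 * M_sun))   # ~1: near the boundary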

  19. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
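
    As a flavor of the closed-form maximum-likelihood machinery, a minimal sketch that locates a single Gaussian mean shift by scanning the profile log-likelihood (the paper's frequentist information criterion, which decides whether the change point is warranted at all, is not reproduced here):

      import numpy as np

      def best_changepoint(x):
          """Return the split index maximizing the Gaussian log-likelihood
          of a single mean shift (noise variance profiled out)."""
          n = len(x)
          best, best_ll = None, -np.inf
          for k in range(2, n - 1):
              rss = ((x[:k] - x[:k].mean())**2).sum() + ((x[k:] - x[k:].mean())**2).sum()
              ll = -0.5 * n * np.log(rss / n)   # profile log-likelihood up to constants
              if ll > best_ll:
                  best, best_ll = k, ll
          return best

      rng = np.random.default_rng(2)
      x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 200)])
      print(best_changepoint(x))   # expected near 300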

  20. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
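
    The maximum-expected-utility rule at the heart of the model fits in one line; a minimal sketch showing that, with equal error utilities, MEU reduces to picking the class with the highest posterior (i.e., maximum correctness):

      import numpy as np

      def meu_decision(posteriors: np.ndarray, utility: np.ndarray) -> int:
          """Maximum-expected-utility decision: utility[d, h] is the utility of
          deciding class d when the truth is h; posteriors are P(h | data)."""
          return int(np.argmax(utility @ posteriors))

      # Equal-error-utility case: correct decision = 1, any error = 0.
      U = np.eye(3)
      print(meu_decision(np.array([0.2, 0.5, 0.3]), U))   # -> 1, the MAP class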

  1. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    NASA Astrophysics Data System (ADS)

    Wakeford, H. R.; Sing, D. K.; Evans, T.; Deming, D.; Mandell, A.

    2016-03-01

    Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 μm probe primarily the H2O absorption band at 1.4 μm, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
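
    The AIC-based evidence approximation and weighting can be sketched directly; a minimal example assuming each systematic model q has been fit to yield a maximum log-likelihood, a parameter count, and an Rp/R* estimate (all numbers invented):

      import numpy as np

      # Hypothetical fits: (max log-likelihood, n free parameters, Rp/R* estimate).
      fits = [(152.3, 4, 0.1192), (155.0, 6, 0.1187), (154.1, 5, 0.1190)]

      aic = np.array([2 * k - 2 * lnl for lnl, k, _ in fits])
      w = np.exp(-0.5 * (aic - aic.min()))
      w /= w.sum()                      # evidence-based model weights

      p = np.array([p for _, _, p in fits])
      p_marg = (w * p).sum()            # marginalized Rp/R*
      var_marg = (w * (p - p_marg)**2).sum()
      print(p_marg, np.sqrt(var_marg))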

  2. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    NASA Technical Reports Server (NTRS)

    Wakeford, H. R.; Sing, D.K.; Deming, D.; Mandell, A.

    2016-01-01

    Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 microns probe primarily the H2O absorption band at 1.4 microns, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as Rp/R* (the planet-to-star radius ratio), which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.

  3. Damage Propagation Modeling for Aircraft Engine Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil

    2008-01-01

    This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermo-dynamical simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant and the failure criterion is reached when health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
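
    A minimal sketch of the degradation bookkeeping described above, with invented margins and rates (the actual challenge data came from a full thermodynamic engine simulation):

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(500)                    # cycles
      t0 = rng.integers(50, 150)            # random onset of deterioration
      rate = rng.uniform(0.005, 0.02)       # bounded, randomly chosen rate

      # Exponential flow/efficiency loss after onset, as fractions of margin.
      loss = np.where(t < t0, 0.0, np.exp(rate * (t - t0)) - 1.0)

      # Two illustrative operational margins eroded by the fault.
      margin_a = 1.0 - 1.2 * loss
      margin_b = 1.0 - 0.8 * loss
      health = np.minimum(margin_a, margin_b)   # health index = min of margins

      fail = np.argmax(health <= 0) if (health <= 0).any() else None
      print("failure criterion reached at cycle:", fail)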

  4. Large-scale glacitectonic deformation in response to active ice sheet retreat across Dogger Bank (southern central North Sea) during the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Phillips, Emrys; Cotterill, Carol; Johnson, Kirstin; Crombie, Kirstin; James, Leo; Carr, Simon; Ruiter, Astrid

    2018-01-01

    High resolution seismic data from the Dogger Bank in the central southern North Sea has revealed that the Dogger Bank Formation records a complex history of sedimentation and penecontemporaneous, large-scale, ice-marginal to proglacial glacitectonic deformation. These processes led to the development of a large thrust-block moraine complex which is buried beneath a thin sequence of Holocene sediments. This buried glacitectonic landsystem comprises a series of elongate, arcuate moraine ridges (200 m up to > 15 km across; over 40-50 km long) separated by low-lying ice marginal to proglacial sedimentary basins and/or meltwater channels, preserving the shape of the margin of this former ice sheet. The moraines are composed of highly deformed (folded and thrust) Dogger Bank Formation with the lower boundary of the deformed sequence (up to 40-50 m thick) being marked by a laterally extensive décollement. The ice-distal parts of the thrust moraine complex are interpreted as a "forward" propagating imbricate thrust stack developed in response to S/SE-directed ice-push. The more complex folding and thrusting within the more ice-proximal parts of the thrust-block moraines record the accretion of thrust slices of highly deformed sediment as the ice repeatedly reoccupied this ice marginal position. Consequently, the internal structure of the Dogger Bank thrust-moraine complexes can be directly related to ice sheet dynamics, recording the former positions of a highly dynamic, oscillating Weichselian ice sheet margin as it retreated northwards at the end of the Last Glacial Maximum.

  5. Mercury in Indiana watersheds: retrospective for 2001-2006

    USGS Publications Warehouse

    Risch, Martin R.; Baker, Nancy T.; Fowler, Kathleen K.; Egler, Amanda L.; Lampe, David C.

    2010-01-01

    Information about total mercury and methylmercury concentrations in water samples and mercury concentrations in fish-tissue samples was summarized for 26 watersheds in Indiana that drain most of the land area of the State. Mercury levels were interpreted with information on streamflow, atmospheric mercury deposition, mercury emissions to the atmosphere, mercury in wastewater, and landscape characteristics. Unfiltered total mercury concentrations in 411 water samples from streams in the 26 watersheds had a median of 2.32 nanograms per liter (ng/L) and a maximum of 28.2 ng/L. When these concentrations were compared to Indiana water-quality criteria for mercury, 5.4 percent exceeded the 12-ng/L chronic-aquatic criterion, 59 percent exceeded the 1.8-ng/L Great Lakes human-health criterion, and 72.5 percent exceeded the 1.3-ng/L Great Lakes wildlife criterion. Mercury concentrations in water were related to streamflow, and the highest mercury concentrations were associated with the highest streamflows. On average, 67 percent of total mercury in streams was in a particulate form, and particulate mercury concentrations were significantly lower downstream from dams than at monitoring stations not affected by dams. Methylmercury is the organic fraction of total mercury and is the form of mercury that accumulates and magnifies in food chains. It is made from inorganic mercury by natural processes under specific conditions. Unfiltered methylmercury concentrations in 411 water samples had a median of 0.10 ng/L and a maximum of 0.66 ng/L. Methylmercury was a median 3.7 percent and maximum 64.8 percent of the total mercury in 252 samples for which methylmercury was reported. The percentages of methylmercury in water samples were significantly higher downstream from dams than at other monitoring stations. Nearly all of the total mercury detected in fish tissue was assumed to be methylmercury. Fish-tissue samples from the 26 watersheds had wet-weight mercury concentrations that exceeded the 0.3 milligram per kilogram (mg/kg) U.S. Environmental Protection Agency (USEPA) methylmercury criterion in 12.4 percent of the 1,731 samples. The median wet-weight concentration in the fish-tissue samples was 0.13 mg/kg, and the maximum was 1.07 mg/kg. A coarse-scale analysis of all fish-tissue data in each watershed and a fine-scale analysis of data within 5 kilometers (km) of the downstream end of each watershed showed similar results overall. Mercury concentrations in fish-tissue samples were highest in the White River watershed in southern Indiana and the Fall Creek watershed in central Indiana. In fish-tissue samples within 5 km of the downstream end of a watershed, the USEPA methylmercury criterion was exceeded by 45 percent of mercury concentrations from the White River watershed and 40 percent of the mercury concentration from the Fall Creek watershed. A clear relation between mercury concentrations in fish-tissue samples and methylmercury concentrations in water was not observed in the data from watersheds in Indiana. Average annual atmospheric mercury wet-deposition rates were mapped with data at 156 locations in Indiana and four surrounding states for 2001-2006. These maps revealed an area in southeastern Indiana with high mercury wet-deposition rates-from 15 to 19 micrograms per square meter per year (ug/m2/yr). Annual atmospheric mercury dry-deposition rates were estimated with an inferential method by using concentrations of mercury species in air samples at three locations in Indiana. 
Mercury dry-deposition rates were 5.6 to 13.6 ug/m2/yr and were 0.49 to 1.4 times mercury wet-deposition rates. Total mercury concentrations were detected in 96 percent of 402 samples of wastewater effluent from 50 publicly owned treatment works in the watersheds; the median concentration was 3.0 ng/L, and the maximum was 88 ng/L. When these concentrations were compared to Indiana water-quality criteria for mercury, 12 percent exceeded the 12-ng/L chronic-aquatic criterion.

  6. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution is illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
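
    The model-comparison step is generic; a minimal sketch computing AIC and BIC from maximized log-likelihoods (values invented, not the paper's fits):

      import numpy as np

      def aic_bic(max_loglik: float, k: int, n: int):
          """Information criteria for a fitted model with k parameters and n observations."""
          aic = 2 * k - 2 * max_loglik
          bic = k * np.log(n) - 2 * max_loglik
          return aic, bic

      # Hypothetical fits to the same rainfall data set (n = 120):
      # (name, maximized log-likelihood, number of parameters)
      fits = [("ETE", -412.7, 2), ("EETE", -401.3, 3)]
      for name, lnl, k in fits:
          print(name, aic_bic(lnl, k, 120))   # smaller is better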

  7. [Effect of nasal CPAP on human diaphragm position and lung volume].

    PubMed

    Yoshimura, N; Abe, T; Kusuhara, N; Tomita, T

    1994-11-01

    The cephalic margin of the zone of apposition (ZOA) was observed with ultrasonography at ambient pressure and during nasal continuous positive airway pressure (nasal CPAP) in nine awake healthy males in a supine position. In a relaxed state at ambient pressure, there was a significant (p < 0.001) linear relationship between lung volume and the movement of the cephalic margin of the ZOA over the range from maximum expiratory position (MEP) to maximum inspiratory position (MIP). With nasal CPAP, functional residual capacity increased significantly (p < 0.01) in proportion to the increase in CPAP. At 20 cmH2O CPAP, the mean increase in volume at end expiration was 36% of the vital capacity measured at ambient pressure. The cephalic margin of the ZOA moved significantly (p < 0.01) in a caudal direction as CPAP was increased. At 20 cmH2O CPAP, the cephalic margin of the ZOA at end expiratory position (EEP) had moved 55% of the difference from MIP to MEP measured at ambient pressure. The end expiratory diaphragm position during nasal CPAP was lower than the diaphragm position at ambient pressure when lung volumes were equal. These results suggest that during nasal CPAP the chest wall is distorted from its relaxed configuration, with a decrease in rib cage expansion and an increase in outward displacement of the abdominal wall.

  8. Molecular systematics of terraranas (Anura: Brachycephaloidea) with an assessment of the effects of alignment and optimality criteria.

    PubMed

    Padial, José M; Grant, Taran; Frost, Darrel R

    2014-06-26

    Brachycephaloidea is a monophyletic group of frogs with more than 1000 species distributed throughout the New World tropics, subtropics, and Andean regions. Recently, the group has been the target of multiple molecular phylogenetic analyses, resulting in extensive changes in its taxonomy. Here, we test previous hypotheses of phylogenetic relationships for the group by combining available molecular evidence (sequences of 22 genes representing 431 ingroup and 25 outgroup terminals) and performing a tree-alignment analysis under the parsimony optimality criterion using the program POY. To elucidate the effects of alignment and optimality criterion on phylogenetic inferences, we also used the program MAFFT to obtain a similarity-alignment for analysis under both parsimony and maximum likelihood using the programs TNT and GARLI, respectively. Although all three analytical approaches agreed on numerous points, there was also extensive disagreement. Tree-alignment under parsimony supported the monophyly of the ingroup and the sister group relationship of the monophyletic marsupial frogs (Hemiphractidae), while maximum likelihood and parsimony analyses of the MAFFT similarity-alignment did not. All three methods differed with respect to the position of Ceuthomantis smaragdinus (Ceuthomantidae), with tree-alignment using parsimony recovering this species as the sister of Pristimantis + Yunganastes. All analyses rejected the monophyly of Strabomantidae and Strabomantinae as originally defined, and the tree-alignment analysis under parsimony further rejected the recently redefined Craugastoridae and Pristimantinae. Despite the greater emphasis in the systematics literature placed on the choice of optimality criterion for evaluating trees than on the choice of method for aligning DNA sequences, we found that the topological differences attributable to the alignment method were as great as those caused by the optimality criterion. Further, the optimal tree-alignment indicates that insertions and deletions occurred in twice as many aligned positions as implied by the optimal similarity-alignment, confirming previous findings that sequence turnover through insertion and deletion events plays a greater role in molecular evolution than indicated by similarity-alignments. Our results also provide a clear empirical demonstration of the different effects of wildcard taxa produced by missing data in parsimony and maximum likelihood analyses. Specifically, maximum likelihood analyses consistently (81% bootstrap frequency) provided spurious resolution despite a lack of evidence, whereas parsimony correctly depicted the ambiguity due to missing data by collapsing unsupported nodes. We provide a new taxonomy for the group that retains previously recognized Linnaean taxa except for Ceuthomantidae, Strabomantidae, and Strabomantinae. A phenotypically diagnosable superfamily is recognized formally as Brachycephaloidea, with the informal, unranked name terrarana retained as the standard common name for these frogs. We recognize three families within Brachycephaloidea that are currently diagnosable solely on molecular grounds (Brachycephalidae, Craugastoridae, and Eleutherodactylidae), as well as five subfamilies (Craugastorinae, Eleutherodactylinae, Holoadeninae, Phyzelaphryninae, and Pristimantinae) corresponding in large part to previous families and subfamilies. 
Our analyses upheld the monophyly of all tested genera, but we found numerous subgeneric taxa to be non-monophyletic and modified the taxonomy accordingly.

  9. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    NASA Astrophysics Data System (ADS)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions; a lowland, highland and mountainous catchment. The circular measures of location and dispersion were used in the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the maximum likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of the Kuiper and Watson tests. Results show that the number of components varied between catchments and differed between seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on the similarity between circular distribution functions and on the linkage between dates of maximum precipitation and maximum flow.
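
    The basic circular measures of location and dispersion used here are easy to reproduce; a minimal sketch mapping dates of maximum flow onto the unit circle (dates invented; the paper additionally fits von Mises mixtures by maximum likelihood):

      import numpy as np

      def to_angle(day_of_year, year_length=365.25):
          """Map a date of maximum flow to an angle on the unit circle."""
          return 2 * np.pi * np.asarray(day_of_year) / year_length

      # Hypothetical dates (day of year) of annual maximum flow.
      days = np.array([61, 74, 80, 92, 55, 200, 210, 68, 77, 85])
      theta = to_angle(days)

      # Circular mean direction and mean resultant length (dispersion measure).
      C, S = np.cos(theta).mean(), np.sin(theta).mean()
      mean_dir = np.arctan2(S, C) % (2 * np.pi)
      R = np.hypot(C, S)               # near 1: concentrated, near 0: dispersed
      print("mean date:", mean_dir / (2 * np.pi) * 365.25, "R:", R)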

  10. A Comparison of Athletic Movement Among Talent-Identified Juniors From Different Football Codes in Australia: Implications for Talent Development.

    PubMed

    Woods, Carl T; Keller, Brad S; McKeown, Ian; Robertson, Sam

    2016-09-01

    Woods, CT, Keller, BS, McKeown, I, and Robertson, S. A comparison of athletic movement among talent-identified juniors from different football codes in Australia: implications for talent development. J Strength Cond Res 30(9): 2440-2445, 2016-This study aimed to compare the athletic movement skill of talent-identified (TID) junior Australian Rules football (ARF) and soccer players. The athletic movement skill of 17 TID junior ARF players (17.5-18.3 years) was compared against 17 TID junior soccer players (17.9-18.7 years). Players in both groups were members of an elite junior talent development program within their respective football codes. All players performed an athletic movement assessment that included an overhead squat, double lunge, single-leg Romanian deadlift (both movements performed on right and left legs), a push-up, and a chin-up. Each movement was scored across 3 essential assessment criteria using a 3-point scale. The total score for each movement (maximum of 9) and the overall total score (maximum of 63) were used as the criterion variables for analysis. A multivariate analysis of variance tested the main effect of football code (2 levels) on the criterion variables, whereas a 1-way analysis of variance identified where differences occurred. A significant effect was noted, with the TID junior ARF players outscoring their soccer counterparts when performing the overhead squat and push-up. No other criterions significantly differed according to the main effect. Practitioners should be aware that specific sporting requirements may incur slight differences in athletic movement skill among TID juniors from different football codes. However, given the low athletic movement skill noted in both football codes, developmental coaches should address the underlying movement skill capabilities of juniors when prescribing physical training in both codes.

  11. Crack Growth Prediction Methodology for Multi-Site Damage: Layered Analysis and Growth During Plasticity

    NASA Technical Reports Server (NTRS)

    James, Mark Anthony

    1999-01-01

    A finite element program has been developed to perform quasi-static, elastic-plastic crack growth simulations. The model provides a general framework for mixed-mode I/II elastic-plastic fracture analysis using small strain assumptions and plane stress, plane strain, and axisymmetric finite elements. Cracks are modeled explicitly in the mesh. As the cracks propagate, automatic remeshing algorithms delete the mesh local to the crack tip, extend the crack, and build a new mesh around the new tip. State variable mapping algorithms transfer stresses and displacements from the old mesh to the new mesh. The von Mises material model is implemented in the context of a non-linear Newton solution scheme. The fracture criterion is the critical crack tip opening displacement, and crack direction is predicted by the maximum tensile stress criterion at the crack tip. The implementation can accommodate multiple curving and interacting cracks. An additional fracture algorithm based on nodal release can be used to simulate fracture along a horizontal plane of symmetry. A core of plane strain elements can be used with the nodal release algorithm to simulate the triaxial state of stress near the crack tip. Verification and validation studies compare analysis results with experimental data and published three-dimensional analysis results. Fracture predictions using nodal release for compact tension, middle-crack tension, and multi-site damage test specimens produced accurate results for residual strength and link-up loads. Curving crack predictions using remeshing/mapping were compared with experimental data for an Arcan mixed-mode specimen. Loading angles from 0 degrees to 90 degrees were analyzed. The maximum tensile stress criterion was able to predict the crack direction and path for all loading angles in which the material failed in tension. Residual strength was also accurately predicted for these cases.
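
    The crack-direction step can be illustrated in closed form; a minimal sketch of the maximum tangential (tensile) stress criterion under standard LEFM assumptions, solving K_I sin θ + K_II (3 cos θ - 1) = 0 for the kink angle (the paper embeds this criterion in a remeshing finite element scheme):

      import numpy as np

      def mts_kink_angle(k1: float, k2: float) -> float:
          """Crack kink angle (radians) from the maximum tangential stress
          criterion: K_I*sin(t) + K_II*(3*cos(t) - 1) = 0."""
          if k2 == 0.0:
              return 0.0                      # pure mode I: crack grows straight
          m = k2 / k1 if k1 != 0.0 else np.inf
          if np.isinf(m):                     # pure mode II limit: ~ +/-70.5 deg
              return 2 * np.arctan(-np.sign(k2) * np.sqrt(2) / 2)
          return 2 * np.arctan((1 - np.sqrt(1 + 8 * m * m)) / (4 * m))

      print(np.degrees(mts_kink_angle(1.0, 0.0)))   #  0.0
      print(np.degrees(mts_kink_angle(1.0, 0.5)))   # about -40 deg
      print(np.degrees(mts_kink_angle(0.0, 1.0)))   # about -70.5 deg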

  12. A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, C. D.; Betti, R.; Departments of Mechanical Engineering and Physics and Astronomy, University of Rochester, Rochester, New York 14623

    2008-10-15

    It is shown that the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form dependent on the only two parameters of the compressed fuel assembly that can be measured with existing techniques: the hot spot ion temperature (T_i^h) and the total areal density (ρR_tot), which includes the cold shell contribution. A marginal ignition curve is derived in the (ρR_tot, T_i^h) plane and current implosion experiments are compared with the ignition curve. On this plane, hydrodynamic equivalent curves show how a given implosion would perform with respect to the ignition condition when scaled up in the laser-driver energy. For 3 < ⟨T_i^h⟩_n < 6 keV, an approximate form of the ignition condition (typical of laser-driven ICF) is ⟨T_i^h⟩_n^2.6 · ⟨ρR_tot⟩_n > 50 keV^2.6 · g/cm², where ⟨ρR_tot⟩_n and ⟨T_i^h⟩_n are the burn-averaged total areal density and hot spot ion temperature, respectively. Both quantities are calculated without accounting for the alpha-particle energy deposition. Such a criterion can be used to determine how surrogate D2 and subignited DT target implosions perform with respect to the one-dimensional ignition threshold.
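
    The quoted approximate criterion is directly checkable; a minimal sketch (temperature in keV, areal density in g/cm², both burn-averaged and computed without alpha deposition, valid for roughly 3-6 keV):

      def ignites(t_ihn_keV: float, rho_r_tot: float) -> bool:
          """Approximate laser-driven ICF ignition condition from the abstract:
          <T_i^h>_n^2.6 * <rho R_tot>_n > 50 keV^2.6 g/cm^2."""
          return t_ihn_keV**2.6 * rho_r_tot > 50.0

      print(ignites(4.0, 1.5))   # 4 keV, 1.5 g/cm^2 -> 4**2.6 * 1.5 ~ 55 -> True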

  13. Multilayer Disk Reduced Interlayer Crosstalk with Wide Disk-Fabrication Margin

    NASA Astrophysics Data System (ADS)

    Hirotsune, Akemi; Miyauchi, Yasushi; Endo, Nobumasa; Onuma, Tsuyoshi; Anzai, Yumiko; Kurokawa, Takahiro; Ushiyama, Junko; Shintani, Toshimichi; Sugiyama, Toshinori; Miyamoto, Harukazu

    2008-07-01

    To reduce interlayer crosstalk caused by the ghost spot which appears in a multilayer optical disk with more than three information layers, a multilayer disk structure which reduces interlayer crosstalk with a wide disk-fabrication margin was proposed in which the backward reflectivity of the information layers is sufficiently low. It was confirmed that the interlayer crosstalk caused by the ghost spot was reduced to less than the crosstalk from the adjacent layer by controlling backward reflectivity. The wide disk-fabrication margin of the proposed disk structure was indicated by experimentally confirming that the tolerance of the maximum deviation of the spacer-layer thickness is four times larger than that in the previous multilayer disk.

  14. From hyperextended rift to convergent margin types: mapping the outer limit of the extended Continental Shelf of Spain in the Galicia area according UNCLOS Art. 76

    NASA Astrophysics Data System (ADS)

    Somoza, Luis; Medialdea, Teresa; Vázquez, Juan T.; González, Francisco J.; León, Ricardo; Palomino, Desiree; Fernández-Salas, Luis M.; Rengel, Juan

    2017-04-01

    Spain presented on 11 May 2009 a partial submission for delimiting the extended Continental Shelf in respect of the area of Galicia to the Commission on the Limits of the Continental Shelf (CLCS). The Galicia margin represents an example of the transition between two different types of continental margins (CM): a western hyperextended margin and a northern convergent margin in the Bay of Biscay. The western Galicia Margin (wGM, 41° to 43° N) corresponds to a hyper-extended rifted margin resulting from the poly-phase development of the Iberian-Newfoundland conjugate margin during the Mesozoic. In contrast, the north Galicia Margin (nGM) is the western end of the Cenozoic subduction of the Bay of Biscay along the north Iberian Margin (NIM), linked to the Pyrenean-Mediterranean collisional belt. Following the procedure established by the CLCS Scientific and Technical Guidelines (CLCS/11), the points of the Foot of Slope (FoS) have to be determined as the points of maximum change in gradient in the region defined as the Base of the continental Slope (BoS). Moreover, the CLCS guidelines specify that the BoS should be contained within the continental margin (CM). Accordingly, full-coverage multibeam bathymetry and an extensive dataset of up to 4,736 km of multichannel seismic profiles were expressly obtained during two oceanographic surveys (Breogham-2005 and Espor-2008), aboard the Spanish research vessel Hespérides, to map the outer limit of the CM. In order to follow the criteria of the CLCS guidelines, two types of models reported in the CLCS Guidelines were applied to the Galicia Margin. For passive margins, the Commission's guidelines establish that the natural prolongation is based on the principle that "the natural process by which a continent breaks up prior to the separation by seafloor spreading involves thinning, extension and rifting of the continental crust…" (para. 7.3, CLCS/11). The seaward extension of the wGM should therefore include crustal continental blocks and the so-called Peridotite Ridge (PR), composed of serpentinized exhumed continental mantle. Thus, the PR should be regarded as a natural component of the continental margin, since these seafloor highs were formed by hyperextension of the margin. Regarding convergent margins, the architecture of the nGM can be classified according to CLCS/11 as a "poor- or non-accretionary convergent continental margin" characterized by a poorly developed accretionary wedge, composed of a large sedimentary apron mainly formed by large slumps and thrust wedges of an igneous (ophiolitic/continental) body overlying subducting oceanic crust (Fig. 6.1B, CLCS/11). According to para. 6.3.6 (CLCS/11), the seaward extent of this type of convergent continental margin is defined by the seaward edge of the accretionary wedge. Applying this definition, the seaward extent of the margin is defined by the outer limit of the ophiolitic deformed body that marks the edge of the accretionary wedge. These geological criteria were strictly applied for mapping the BoS region, where the FoS were determined using the maximum change in gradient within this mapped region. Acknowledgments: Project for the Extension of the Spanish Continental Shelf according to UNCLOS (CTM2010-09496-E) and Project CTM2016-75947-R

  15. Comparisons of maximum deformation and failure forces at the implant–abutment interface of titanium implants between titanium-alloy and zirconia abutments with two levels of marginal bone loss

    PubMed Central

    2013-01-01

    Background Zirconia materials are known for their optimal aesthetics, but they are brittle, and concerns remain about whether their mechanical properties are sufficient for withstanding the forces exerted in the oral cavity. Therefore, this study compared the maximum deformation and failure forces of titanium implants between titanium-alloy and zirconia abutments under oblique compressive forces in the presence of two levels of marginal bone loss. Methods Twenty implants were divided into Groups A and B, with simulated bone losses of 3.0 and 1.5 mm, respectively. Groups A and B were also each divided into two subgroups with five implants each: (1) titanium implants connected to titanium-alloy abutments and (2) titanium implants connected to zirconia abutments. The maximum deformation and failure forces of each sample were determined using a universal testing machine. The data were analyzed using the nonparametric Mann–Whitney test. Results The mean maximum deformation and failure forces obtained for the subgroups were as follows: A1 (simulated bone loss of 3.0 mm, titanium-alloy abutment) = 540.6 N and 656.9 N, respectively; A2 (simulated bone loss of 3.0 mm, zirconia abutment) = 531.8 N and 852.7 N; B1 (simulated bone loss of 1.5 mm, titanium-alloy abutment) = 1070.9 N and 1260.2 N; and B2 (simulated bone loss of 1.5 mm, zirconia abutment) = 907.3 N and 1182.8 N. The maximum deformation force differed significantly between Groups B1 and B2 but not between Groups A1 and A2. The failure force did not differ between Groups A1 and A2 or between Groups B1 and B2. The maximum deformation and failure forces differed significantly between Groups A1 and B1 and between Groups A2 and B2. Conclusions Based on this experimental study, the maximum deformation and failure forces are lower for implants with a marginal bone loss of 3.0 mm than for those with a loss of 1.5 mm. Zirconia abutments can withstand physiological occlusal forces applied in the anterior region. PMID:23688204

  16. Late Wisconsinan glaciation and postglacial relative sea-level change on western Banks Island, Canadian Arctic Archipelago

    NASA Astrophysics Data System (ADS)

    Lakeman, Thomas R.; England, John H.

    2013-07-01

    The study revises the maximum extent of the northwest Laurentide Ice Sheet (LIS) in the western Canadian Arctic Archipelago (CAA) during the last glaciation and documents subsequent ice sheet retreat and glacioisostatic adjustments across western Banks Island. New geomorphological mapping and maximum-limiting radiocarbon ages indicate that the northwest LIS inundated western Banks Island after ~ 31 14C ka BP and reached a terminal ice margin west of the present coastline. The onset of deglaciation and the age of the marine limit (22-40 m asl) are unresolved. Ice sheet retreat across western Banks Island was characterized by the withdrawal of a thin, cold-based ice margin that reached the central interior of the island by ~ 14 cal ka BP. The elevation of the marine limit is greater than previously recognized and consistent with greater glacioisostatic crustal unloading by a more expansive LIS. These results complement emerging bathymetric observations from the Arctic Ocean, which indicate glacial erosion during the Last Glacial Maximum (LGM) to depths of up to 450 m.

  17. Learning monopolies with delayed feedback on price expectations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akio; Szidarovszky, Ferenc

    2015-11-01

    We call the intercept of the price function with the vertical axis the maximum price and the slope of the price function the marginal price. In this paper it is assumed that a monopolistic firm has full information about the marginal price and its own cost function but is uncertain on the maximum price. However, by repeated interaction with the market, the obtained price observations give a basis for an adaptive learning process of the maximum price. It is also assumed that the price observations have fixed delays, so the learning process can be described by a delayed differential equation. In the cases of one or two delays, the asymptotic behavior of the resulting dynamic process is examined, stability conditions are derived. Three main results are demonstrated in the two delay learning processes. First, it is possible to stabilize the equilibrium which is unstable in the one delay model. Second, complex dynamics involving chaos, which is impossible in the one delay model, can emerge. Third, alternations of stability and instability (i.e., stability switches) occur repeatedly.

  18. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    There are many technical methods to integrate various factors for flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and each element unit is considered as an alternative. The scheme of finding the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore, TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty because the simulation results vary with the flood scenario and topographical conditions. This kind of ambiguity in the criteria can cause uncertainty in the flood hazard map. To handle the ambiguity and uncertainty of the criteria, fuzzy logic is introduced, which is able to handle ambiguous expressions. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas where the highest grade of hazard was recorded in the resulting integrated flood hazard map, and compared the map with existing flood risk maps. We expect that if the flood hazard map methodology suggested in this paper is applied even to producing the current flood risk maps, it will be possible to make a new flood hazard map that also considers the priorities of hazard areas, including more varied and important information than before. Keywords: flood hazard map; levee breach analysis; 2D analysis; MCDM; fuzzy TOPSIS. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
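
    A minimal sketch of crisp TOPSIS over the three flood-hazard criteria; the weights, criterion directions and cell values are illustrative assumptions, and the paper's fuzzy extension adds membership functions on top of this skeleton:

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Rank alternatives by closeness to the ideal solution.
          matrix: (n_alternatives, n_criteria); benefit[j] True if larger is better."""
          m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
          v = m * weights
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - worst, axis=1)
          return d_neg / (d_pos + d_neg)                     # higher = closer to ideal

      # Criteria: max depth (m), max velocity (m/s), max travel time (h).
      # Depth and velocity increase hazard; a longer travel time decreases it.
      cells = np.array([[2.1, 1.5, 0.5],
                        [0.4, 0.3, 3.0],
                        [1.2, 0.9, 1.2]])
      hazard = topsis(cells, np.array([0.4, 0.4, 0.2]), np.array([True, True, False]))
      print(hazard)   # closeness coefficients, used here as hazard grades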

  19. The test-retest reliability and criterion validity of a high-intensity, netball-specific circuit test: The Net-Test.

    PubMed

    Mungovan, Sean F; Peralta, Paula J; Gass, Gregory C; Scanlan, Aaron T

    2018-04-12

    To examine the test-retest reliability and criterion validity of a high-intensity, netball-specific fitness test. Repeated measures, within-subject design. Eighteen female netball players competing in an international competition completed a trial of the Net-Test, which consists of 14 timed netball-specific movements. Players also completed a series of netball-relevant criterion fitness tests. Ten players completed an additional Net-Test trial one week later to assess test-retest reliability using intraclass correlation coefficient (ICC), typical error of measurement (TEM), and coefficient of variation (CV). The typical error of estimate expressed as CV and Pearson correlations were calculated between each criterion test and Net-Test performance to assess criterion validity. Five movements during the Net-Test displayed moderate ICC (0.84-0.90) and two movements displayed high ICC (0.91-0.93). Seven movements and heart rate taken during the Net-Test held low CV (<5%) with values ranging from 1.7 to 9.5% across measures. Total time (41.63±2.05 s) during the Net-Test possessed low CV and significant (p<0.05) correlations with 10 m sprint time (1.98±0.12 s; CV=4.4%, r=0.72), 20 m sprint time (3.38±0.19 s; CV=3.9%, r=0.79), 505 Change-of-Direction time (2.47±0.08 s; CV=2.0%, r=0.80); and maximum oxygen uptake (46.59±2.58 mL·kg⁻¹·min⁻¹; CV=4.5%, r=-0.66). The Net-Test possesses acceptable reliability for the assessment of netball fitness. Further, the high criterion validity for the Net-Test suggests a range of important netball-specific fitness elements are assessed in combination. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  20. Evaluation and Optimization of Therapeutic Footwear for Neuropathic Diabetic Foot Patients Using In-Shoe Plantar Pressure Analysis

    PubMed Central

    Bus, Sicco A.; Haspels, Rob; Busch-Westbroek, Tessa E.

    2011-01-01

    OBJECTIVE Therapeutic footwear for diabetic foot patients aims to reduce the risk of ulceration by relieving mechanical pressure on the foot. However, footwear efficacy is generally not assessed in clinical practice. The purpose of this study was to assess the value of in-shoe plantar pressure analysis to evaluate and optimize the pressure-reducing effects of diabetic therapeutic footwear. RESEARCH DESIGN AND METHODS Dynamic in-shoe plantar pressure distribution was measured in 23 neuropathic diabetic foot patients wearing fully customized footwear. Regions of interest (with peak pressure >200 kPa) were selected and targeted for pressure optimization by modifying the shoe or insole. After each of a maximum of three rounds of modifications, the effect on in-shoe plantar pressure was measured. Successful optimization was achieved with a peak pressure reduction of >25% (criterion A) or below an absolute level of 200 kPa (criterion B). RESULTS In 35 defined regions, mean peak pressure was significantly reduced from 303 (SD 77) to 208 (46) kPa after an average 1.6 rounds of footwear modifications (P < 0.001). This result constitutes a 30.2% pressure relief (range 18–50% across regions). All regions were successfully optimized: 16 according to criterion A, 7 to criterion B, and 12 to criterion A and B. Footwear optimization lasted on average 53 min. CONCLUSIONS These findings suggest that in-shoe plantar pressure analysis is an effective and efficient tool to evaluate and guide footwear modifications that significantly reduce pressure in the neuropathic diabetic foot. This result provides an objective approach to instantly improve footwear quality, which should reduce the risk for pressure-related plantar foot ulcers. PMID:21610125

  1. Criterion-based laparoscopic training reduces total training time.

    PubMed

    Brinkman, Willem M; Buzink, Sonja N; Alevizos, Leonidas; de Hingh, Ignace H J T; Jakimowicz, Jack J

    2012-04-01

    The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these types of training based on training outcome and time efficiency. During four training sessions within 1 week (one session per day) 34 medical interns (no laparoscopic experience) practiced on two basic tasks on the Simbionix LAP Mentor virtual-reality (VR) simulator: 'clipping and grasping' and 'cutting'. Group C (criterion-based) (N = 17) trained to reach predefined criteria and stopped training in each session when these criteria were met, with a maximum training time of 1 h. Group T (time-based) (N = 17) trained for a fixed time of 1 h each session. Retention of skills was assessed 1 week after training. In addition, transferability of skills was established using the Haptica ProMIS augmented-reality simulator. Both groups improved their performance significantly over the course of the training sessions (Wilcoxon signed ranks, P < 0.05). Both groups showed skill transferability and skill retention. When comparing the performance parameters of group C and group T, their performances in the first, the last and the retention training sessions did not differ significantly (Mann-Whitney U test, P > 0.05). The average number of repetitions needed to meet the criteria also did not differ between the groups. Overall, group C spent less time training on the simulator than did group T (74:48 and 120:10 min, respectively; P < 0.001). Group C performed significantly fewer repetitions of each task, overall and in sessions 2, 3 and 4. Criterion-based training of basic laparoscopic skills can reduce the overall training time with no impact on training outcome, transferability or retention of skills. Criterion-based training should therefore be the method of choice in laparoscopic skills curricula.

  2. Revealing nonclassicality beyond Gaussian states via a single marginal distribution

    PubMed Central

    Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul

    2017-01-01

    A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential—a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis–independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement. PMID:28077456

  3. Revealing nonclassicality beyond Gaussian states via a single marginal distribution.

    PubMed

    Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul

    2017-01-31

    A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential-a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis-independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement.

  4. Application of various FLD modelling approaches

    NASA Astrophysics Data System (ADS)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  5. Fracture mechanics in fiber reinforced composite materials, taking as examples B/Al and CFRP

    NASA Technical Reports Server (NTRS)

    Peters, P. W. M.

    1982-01-01

    The validity of linear elastic fracture mechanics and other fracture criteria was investigated with laminates of boron fiber reinforced aluminum (B/Al) and of carbon fiber reinforced epoxide (CFRP). Cracks are assessed by fracture strength Kc or Kmax (the critical or maximum value of the stress intensity factor). The Whitney and Nuismer point stress criterion and average stress criterion often show that Kmax of fiber composite materials increases with increasing crack length; however, for B/Al and CFRP the curve showing fracture strength as a function of crack length is only applicable in a small domain. For B/Al, the reason is clearly the extension of the plastic zone (or the damage zone in the case of CFRP), which cannot be described with a stress intensity factor.

  6. Failure prediction for the optimization of stretch forming aluminium-polymer laminate foils used for pharmaceutical packaging

    NASA Astrophysics Data System (ADS)

    Müller, Simon; Weygand, Sabine M.

    2018-05-01

    Axisymmetric stretch forming processes of aluminium-polymer laminate foils (e.g. consisting of PA-Al-PVC layers) are analyzed numerically by finite element modeling of the multi-layer material as well as experimentally in order to identify a suitable damage initiation criterion. A simple ductile fracture criterion is proposed to predict the forming limits. The corresponding material constants are determined from tensile tests and then applied in forming simulations with different punch geometries. A comparison between the simulations and the experimental results shows that the failure constants determined this way are not applicable. Therefore, one forming experiment was selected and in the corresponding simulation the failure constant was fitted to its measured maximum stretch. With this approach it is possible to predict the forming limit of the laminate foil with satisfactory accuracy for different punch geometries.

  7. Effect of Blood Contamination on Marginal Adaptation and Surface Microstructure of Mineral Trioxide Aggregate: A SEM Study.

    PubMed

    Salem Milani, Amin; Rahimi, Saeed; Froughreyhani, Mohammad; Vahid Pakdel, Mahdi

    2013-01-01

    In various clinical situations, mineral trioxide aggregate (MTA) may come into direct contact or even be mixed with blood. The aim of the present study was to evaluate the effect of exposure to blood on marginal adaptation and surface microstructure of MTA. Thirty extracted human single-rooted teeth were used. Standard root canal treatment was carried out. Root-ends were resected, and retrocavities were prepared. The teeth were randomly divided into two groups (n = 15): in group 1, the internal surface of the cavities was coated with fresh blood. Then, the cavities were filled with MTA. The roots were immersed in molds containing fresh blood. In group 2, the aforementioned procedures were performed except that synthetic tissue fluid (STF) was used instead of blood. To assess the marginal adaptation, "gap perimeter" and "maximum gap width" were measured under scanning electron microscope. The surface microstructure was also examined. Independent samples t-test and Mann-Whitney U test were used to analyze the data. Maximum gap width and gap perimeter in the blood-exposed group were significantly larger than those in the STF-exposed group (p < 0.01). In the blood-exposed group, the crystals tended to be more rounded and less angular compared with the STF-exposed group, and there was a general lack of needle-like crystals. Exposure to blood during setting has a negative effect on marginal adaptation of MTA, and blood-exposed MTA has a different surface microstructure compared to STF-exposed MTA.

  8. Adaptive Quadrature for Item Response Models. Research Report. ETS RR-06-29

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2006-01-01

    Adaptive quadrature is applied to marginal maximum likelihood estimation for item response models with normal ability distributions. Even in one dimension, significant gains in speed and accuracy of computation may be achieved.
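
    A minimal, non-adaptive version of the integral being approximated can be written down directly. The Python sketch below evaluates the marginal likelihood of one response pattern under a two-parameter logistic model with a standard-normal ability, using ordinary Gauss-Hermite quadrature; adaptive quadrature refines this by recentering and rescaling the nodes per respondent. The item parameters and responses are made up.

      import numpy as np

      a = np.array([1.2, 0.8, 1.5])            # assumed item slopes
      b = np.array([-0.5, 0.0, 0.7])           # assumed item difficulties
      u = np.array([1, 0, 1])                  # one observed response pattern

      # Probabilists' Hermite quadrature integrates against exp(-x^2/2);
      # dividing the weights by sqrt(2*pi) gives a standard-normal expectation.
      nodes, weights = np.polynomial.hermite_e.hermegauss(21)
      w = weights / np.sqrt(2 * np.pi)

      P = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))   # item probabilities
      like = np.prod(np.where(u == 1, P, 1 - P), axis=1)    # conditional likelihood
      marginal = np.sum(w * like)
      print(f"marginal likelihood: {marginal:.6f}")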

  9. Maximum margin multiple instance clustering with applications to image and text clustering.

    PubMed

    Zhang, Dan; Wang, Fei; Si, Luo; Li, Tao

    2011-05-01

    In multiple instance learning problems, patterns are often given as bags and each bag consists of some instances. Most existing research in the area focuses on multiple instance classification and multiple instance regression, while very limited work has been conducted on multiple instance clustering (MIC). This paper formulates a novel framework, maximum margin multiple instance clustering (M³IC), for MIC. However, it is impractical to directly solve the optimization problem of M³IC. Therefore, M³IC is relaxed in this paper to enable an efficient optimization solution with a combination of the constrained concave-convex procedure and the cutting plane method. Furthermore, this paper presents some important properties of the proposed method and discusses its relationship to some other related methods. An extensive set of empirical results is presented to demonstrate the advantages of the proposed method over existing research in both effectiveness and efficiency.

  10. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.

  11. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  12. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    PubMed Central

    Zhu, Bohui; Ding, Yongsheng; Hao, Kuangrong

    2013-01-01

    This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance evaluation indicators, namely sensitivity, specificity, and accuracy, are used to assess the effect of the IEMMC method for ECG arrhythmias. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm performs better not only in clustering results but also in global search ability and convergence ability, which demonstrates its effectiveness for the detection of ECG arrhythmias. PMID:23690875
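
    The three evaluation indicators named above are straightforward to compute from a binary confusion count for a given arrhythmia class; a small self-contained Python sketch with fabricated labels:

      # Sensitivity, specificity, and accuracy for one positive class.
      def evaluate(y_true, y_pred, positive):
          tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
          tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
          fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
          fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / len(y_true)
          return sensitivity, specificity, accuracy

      # Fabricated example: "PVC" vs normal ("N") beats.
      y_true = ["PVC", "N", "PVC", "N", "N", "PVC"]
      y_pred = ["PVC", "N", "N",   "N", "PVC", "PVC"]
      print(evaluate(y_true, y_pred, positive="PVC"))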

  13. Support Vector Machines for Differential Prediction

    PubMed Central

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

    2015-01-01

    Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results. PMID:26158123

  14. Support Vector Machines for Differential Prediction.

    PubMed

    Kuusisto, Finn; Santos Costa, Vitor; Nassif, Houssam; Burnside, Elizabeth; Page, David; Shavlik, Jude

    Machine learning is continually being applied to a growing set of fields, including the social sciences, business, and medicine. Some fields present problems that are not easily addressed using standard machine learning approaches and, in particular, there is growing interest in differential prediction. In this type of task we are interested in producing a classifier that specifically characterizes a subgroup of interest by maximizing the difference in predictive performance for some outcome between subgroups in a population. We discuss adapting maximum margin classifiers for differential prediction. We first introduce multiple approaches that do not affect the key properties of maximum margin classifiers, but which also do not directly attempt to optimize a standard measure of differential prediction. We next propose a model that directly optimizes a standard measure in this field, the uplift measure. We evaluate our models on real data from two medical applications and show excellent results.
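
    One simplified reading of the uplift idea, differential predictive performance between subgroups, can be sketched as the difference in lift achieved by the same scores in the two subgroups. The paper's precise uplift measure may differ from this toy version, and all data below are fabricated for illustration.

      import numpy as np

      # Lift at a cutoff: positive rate among the top-scored fraction,
      # relative to the base rate in that subgroup.
      def lift_at(scores, labels, frac=0.3):
          k = max(1, int(frac * len(scores)))
          top = np.argsort(scores)[::-1][:k]
          return labels[top].mean() / max(labels.mean(), 1e-12)

      rng = np.random.default_rng(0)
      scores = rng.random(200)                      # fabricated model scores
      labels = (rng.random(200) < 0.3).astype(float)
      group = rng.random(200) < 0.5                 # True = subgroup of interest

      uplift = lift_at(scores[group], labels[group]) - \
               lift_at(scores[~group], labels[~group])
      print(f"uplift at 30%: {uplift:.3f}")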

  15. Test of the Hill Stability Criterion against Chaos Indicators

    NASA Astrophysics Data System (ADS)

    Satyal, Suman; Quarles, Billy; Hinse, Tobias

    2012-10-01

    The efficacy of the Hill Stability (HS) criterion is tested against other known chaos indicators, such as Maximum Lyapunov Exponents (MLE) and Mean Exponential Growth of Nearby Orbits (MEGNO) maps. First, the orbits of four observationally verified binary star systems (γ Cephei, Gliese-86, HD41004, and HD196885) are integrated using standard integration packages (MERCURY, SWIFTER, NBI, C/C++). The HS, which measures the orbital perturbation of a planet around the primary star due to the secondary star, is calculated for each system. The Lyapunov exponent spectra are generated to measure the divergence/convergence rate of stable manifolds, and the MEGNO maps are generated using the variational equations of the system during the integration process. These maps make it possible to accurately differentiate between stable and unstable dynamical systems. The results obtained from the analysis of the HS, MLE, and MEGNO maps are then checked for their dynamical variations and resemblance. The orbits of most of the planets appear stable and quasi-periodic for at least ten million years. The MLE and MEGNO maps also indicate local quasi-periodicity and global stability over a relatively short integration period. The HS criterion is found to be a comparably efficient tool for measuring the stability of planetary orbits.

  16. Effect of Crystal Orientation on Analysis of Single-Crystal, Nickel-Based Turbine Blade Superalloys

    NASA Technical Reports Server (NTRS)

    Swanson, G. R.; Arakere, N. K.

    2000-01-01

    High-cycle fatigue-induced failures in turbine and turbopump blades are a pervasive problem. Single-crystal nickel turbine blades are used because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities. Single-crystal materials have highly orthotropic properties, making the position of the crystal lattice relative to the part geometry a significant and complicating factor. A fatigue failure criterion based on the maximum shear stress amplitude on the 24 octahedral and 6 cube slip systems is presented for single-crystal nickel superalloys (FCC crystal). This criterion greatly reduces the scatter in uniaxial fatigue data for PWA 1493 at 1,200 °F in air. Additionally, single-crystal turbine blades used in the Space Shuttle main engine high pressure fuel turbopump/alternate turbopump are modeled using a three-dimensional finite element (FE) model. This model accounts for material orthotropy and crystal orientation. Fatigue life of the blade tip is computed using FE stress results and the failure criterion that was developed. Stress analysis results in the blade attachment region are also presented. Results demonstrate that control of crystallographic orientation has the potential to significantly increase a component's resistance to fatigue crack growth without adding additional weight or cost.
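
    The core computation behind such a criterion, resolving a stress tensor onto the FCC slip systems, can be sketched as follows. This toy version enumerates only the 12 octahedral {111}<110> systems for a single static stress state; the criterion above additionally covers the 6 cube systems and uses shear-stress amplitudes over a load cycle.

      import numpy as np

      # Enumerate the 12 octahedral slip systems: each {111} plane normal
      # paired with the <110> directions lying in that plane.
      normals, dirs = [], []
      for n in ([1, 1, 1], [-1, 1, 1], [1, -1, 1], [1, 1, -1]):
          n = np.array(n, float)
          for d in ([0, 1, -1], [1, 0, -1], [1, -1, 0],
                    [0, 1, 1], [1, 0, 1], [1, 1, 0]):
              d = np.array(d, float)
              if abs(np.dot(n, d)) < 1e-12:      # slip direction lies in plane
                  normals.append(n / np.linalg.norm(n))
                  dirs.append(d / np.linalg.norm(d))

      # Example stress state in the crystal frame (illustrative, MPa).
      sigma = np.diag([400.0, 0.0, 0.0])
      taus = [abs(m @ sigma @ s) for m, s in zip(normals, dirs)]
      print(f"max resolved shear stress: {max(taus):.1f} MPa")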

  17. Irwin's conjecture: Crack shape adaptability in transversely isotropic solids

    NASA Astrophysics Data System (ADS)

    Laubie, Hadrien; Ulm, Franz-Josef

    2014-08-01

    The planar crack propagation problem of a flat elliptical crack embedded in a brittle elastic anisotropic solid is investigated. We introduce the concept of crack shape adaptability: the ability of three-dimensional planar cracks to shape with the mechanical properties of a cracked body. A criterion based on the principle of maximum dissipation is suggested in order to determine the most stable elliptical shape. This criterion is applied to the specific case of vertical cracks in transversely isotropic solids. It is shown that contrary to the isotropic case, the circular shape (i.e. penny-shaped cracks) is not the most stable one. Upon propagation, the crack first grows non-self-similarly before it reaches a stable shape. This stable shape can be approximated by an ellipse of an aspect ratio that varies with the degree of elastic anisotropy. By way of example, we apply the so-derived crack shape adaptability criterion to shale materials. For this class of materials it is shown that once the stable shape is reached, the crack propagates at a higher rate in the horizontal direction than in the vertical direction. We also comment on the possible implications of these findings for hydraulic fracturing operations.

  18. Automated thematic mapping and change detection of ERTS-A images. [farmlands, cities, and mountain identification in Utah, Washington, Arizona, and California

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. A diffraction pattern analysis of MSS images led to the development of spatial signatures for farm land, urban areas and mountains. Four spatial features are employed to describe the spatial characteristics of image cells in the digital data. Three spectral features are combined with the spatial features to form a seven dimensional vector describing each cell. Then, the classification of the feature vectors is accomplished by using the maximum likelihood criterion. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month, but vary substantially between seasons. Three ERTS-1 images from the Phoenix, Arizona area were processed, and recognition rates between 85% and 100% were obtained for the terrain classes of desert, farms, mountains, and urban areas. To eliminate the need for training data, a new clustering algorithm has been developed. Seven ERTS-1 images from four test sites have been processed through the clustering algorithm, and high recognition rates have been achieved for all terrain classes.
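
    The classification step described above amounts to a Gaussian maximum-likelihood rule over seven-dimensional feature vectors (three spectral plus four spatial features per cell), with class statistics estimated from training cells. A self-contained Python sketch with fabricated data:

      import numpy as np

      class MLClassifier:
          def fit(self, X, y):
              self.classes = sorted(set(y))
              self.stats = {}
              for c in self.classes:
                  Xc = X[y == c]
                  cov = np.cov(Xc, rowvar=False)
                  self.stats[c] = (Xc.mean(axis=0), np.linalg.inv(cov),
                                   np.linalg.slogdet(cov)[1])
              return self

          def predict(self, X):
              def loglik(x, c):                 # Gaussian log-likelihood,
                  mu, icov, logdet = self.stats[c]   # constant term dropped
                  d = x - mu
                  return -0.5 * (d @ icov @ d + logdet)
              return [max(self.classes, key=lambda c: loglik(x, c)) for x in X]

      # Fabricated example: 4 terrain classes, 7 features per cell.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(400, 7)) + np.repeat(np.arange(4), 100)[:, None]
      y = np.repeat(np.arange(4), 100)
      model = MLClassifier().fit(X, y)
      print(model.predict(X[:5]))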

  19. Torque Limits for Fasteners in Composites

    NASA Technical Reports Server (NTRS)

    Zhao, Yi

    2002-01-01

    The two major classes of laminate joints are bonded and bolted. Often the two classes are combined as bonded-bolted joints. Several characteristics of fiber reinforced composite materials render them more susceptible to joint problems than conventional metals. These characteristics include weakness in in-plane shear, transverse tension/compression, interlaminar shear, and bearing strength relative to the strength and stiffness in the fiber direction. Studies on bolted joints of composite materials have focused on joint assemblies subject to in-plane loads. Modes of failure under these loading conditions are net-tension failure, cleavage tension failure, shear-out failure, bearing failure, etc. Although studies of torque load can be found in the literature, they mainly discuss the effect of the torque load on in-plane strength. Existing methods for calculating the torque limit for a mechanical fastener do not consider the connecting members. The concern that a composite member could be crushed by a preload inspired the initiation of this study. The purpose is to develop a fundamental knowledge base on how to determine a torque limit when a composite member is taken into account. Two simplified analytical models were used: a stress failure analysis model based on the maximum stress criterion, and a strain failure analysis model based on the maximum strain criterion.
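
    A minimal version of the resulting check can be sketched with the common short-form torque relation T = K·d·F together with a maximum-stress (crushing) limit on the bearing area under the fastener head. The nut factor, dimensions, and strength below are illustrative assumptions, not values from the study.

      import math

      K = 0.2                  # assumed nut factor (dry steel fastener)
      d = 0.006                # bolt diameter, m
      D_washer = 0.014         # washer outer diameter, m
      S_crush = 250e6          # assumed through-thickness crush strength, Pa

      # Bearing area under the head, annulus between washer OD and bolt hole.
      area = math.pi / 4 * (D_washer**2 - d**2)
      F_max = S_crush * area   # preload at incipient crushing of the laminate
      T_max = K * d * F_max    # corresponding fastener torque limit
      print(f"allowable preload: {F_max/1e3:.1f} kN, torque limit: {T_max:.1f} N*m")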

  20. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
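
    For orientation, the unmodified textbook form of iterative proportional fitting for a two-way table is sketched below in Python; the report's algorithms differ in working from minimal sufficient statistics rather than the full table of cell counts.

      import numpy as np

      # Fit a 2-way table to fixed row and column marginal sums
      # (the independence log-linear model).
      def ipf(row_margins, col_margins, tol=1e-10, max_iter=1000):
          table = np.ones((len(row_margins), len(col_margins)))
          for _ in range(max_iter):
              table *= (row_margins / table.sum(axis=1))[:, None]  # match rows
              table *= (col_margins / table.sum(axis=0))[None, :]  # match cols
              if np.allclose(table.sum(axis=1), row_margins, atol=tol):
                  break
          return table

      fitted = ipf(np.array([30., 70.]), np.array([40., 40., 20.]))
      print(fitted)     # row sums = (30, 70), column sums = (40, 40, 20)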

  1. An Observational and Analytical Study of Marginal Ice Zone Atmospheric Jets

    DTIC Science & Technology

    2016-12-01

    The observed marginal ice zone atmospheric jets were located in the atmospheric boundary layer or in the capping temperature inversion just above it. The three strongest jets had maximum wind speeds at elevations near 350 m to 400 m. The jets are associated with changes in the geostrophic wind due to horizontal temperature changes in the atmospheric boundary layer and the capping inversion.

  2. Monte Carlo simulations on marker grouping and ordering.

    PubMed

    Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C

    2003-08-01

    Four global algorithms, maximum likelihood (ML), sum of adjacent LOD score (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fraction (PARF), and one approximation algorithm, seriation (SER), were used to compare the marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed for varying cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM was recommended for achieving acceptable grouping power and ordering power for different cases.

  3. Optimizing LED lighting for space plant growth unit: Joint effects of photon flux density, red to white ratios and intermittent light pulses

    NASA Astrophysics Data System (ADS)

    Avercheva, O. V.; Berkovich, Yu. A.; Konovalova, I. O.; Radchenko, S. G.; Lapach, S. N.; Bassarskaya, E. M.; Kochetova, G. V.; Zhigalova, T. V.; Yakovleva, O. S.; Tarakanov, I. G.

    2016-11-01

    The aims of this work were to choose a quantitative optimality criterion for estimating the quality of plant LED lighting regimes inside space greenhouses and to construct regression models of crop productivity and the optimality criterion depending on the level of photosynthetic photon flux density (PPFD), the proportion of the red component in the light spectrum, and the duration of the duty cycle (with Chinese cabbage Brassica chinensis L. as an example). The properties of the obtained models were described in the context of predicting crop dry weight and the behavior of the optimality criterion when varying the plant lighting parameters. Results of the fractional three-factor experiment demonstrated that the PPFD level accounted for 84.4% of the crop dry weight accumulation at almost any combination of the other lighting parameters, but when the PPFD value increased up to 500 μmol m⁻² s⁻¹, pulsed light and supplemental light from red LEDs could additionally increase crop productivity. Analysis of the optimality criterion response to variation of the lighting parameters showed that the maximum was located at the following coordinates: PPFD = 500 μmol m⁻² s⁻¹, about a 70% proportion of the red component of the light spectrum (PPFD_LEDred/PPFD_LEDwhite = 1.5), and a duty cycle with a period of 501 μs. Thus, LED crop lighting with these parameters was optimal for achieving high crop productivity and for efficient use of energy in the given range of lighting parameter values.

  4. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  5. Ice-Sheet Glaciation of the Puget lowland, Washington, during the Vashon Stade (late pleistocene)

    USGS Publications Warehouse

    Thorson, R.M.

    1980-01-01

    During the Vashon Stade of the Fraser Glaciation, about 15,000-13,000 yr B.P., a lobe of the Cordilleran Ice Sheet occupied the Puget lowland of western Washington. At its maximum extent about 14,000 yr ago, the ice sheet extended across the Puget lowland between the Cascade Range and Olympic Mountains and terminated about 80 km south of Seattle. Meltwater streams drained southwest to the Pacific Ocean and built broad outwash trains south of the ice margin. Reconstructed longitudinal profiles for the Puget lobe at its maximum extent are similar to the modern profile of Malaspina Glacier, Alaska, suggesting that the ice sheet may have been in a near-equilibrium state at the glacial maximum. Progressive northward retreat from the terminal zone was accompanied by the development of ice-marginal streams and proglacial lakes that drained southward during initial retreat, but northward during late Vashon time. Relatively rapid retreat of the Juan de Fuca lobe may have contributed to partial stagnation of the northwestern part of the Puget lobe. Final destruction of the Puget lobe occurred when the ice retreated north of Admiralty Inlet. The sea entered the Puget lowland at this time, allowing the deposition of glacial-marine sediments which now occur as high as 50 m altitude. These deposits, together with ice-marginal meltwater channels presumed to have formed above sea level during deglaciation, suggest that a significant amount of postglacial isostatic and(or) tectonic deformation has occurred in the Puget lowland since deglaciation. © 1980.

  6. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
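
    The kernel estimator with an automatic, sample-based scaling factor can be sketched in a few lines of Python. The n^(-1/5) bandwidth rule below is one standard data-driven choice (Scott-style) and stands in for the interactive/automatic procedure discussed above.

      import numpy as np

      # Gaussian kernel density estimate with a data-driven bandwidth.
      def kde(sample, grid):
          n = len(sample)
          h = sample.std(ddof=1) * n ** (-1 / 5)   # automatic scaling factor
          z = (grid[:, None] - sample[None, :]) / h
          return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

      rng = np.random.default_rng(2)
      sample = rng.normal(0.0, 1.0, size=500)
      grid = np.linspace(-4, 4, 9)
      print(np.round(kde(sample, grid), 3))   # roughly tracks the N(0,1) density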

  7. Simulation of LV pacemaker lead in marginal vein: potential risk factors for acute dislodgement.

    PubMed

    Zhao, Xuefeng; Burger, Mike; Liu, Yi; Das, Mithilesh K; Combs, William; Wenk, Jonathan F; Guccione, Julius M; Kassab, Ghassan S

    2011-03-01

    Although left ventricular (LV) coronary sinus lead dislodgement remains a problem, the risk factors for dislodgement have not been clearly defined. In order to identify potential risk factors for acute lead dislodgement, we conducted dynamic finite element simulations of pacemaker lead dislodgement in marginal LV vein. We considered factors such as mismatch in lead and vein diameters, velocity of myocardial motion, branch angle between the insertion vein and the coronary sinus, degree of slack, and depth of insertion. The results show that large lead-to-vein diameter mismatch, rapid myocardial motion, and superficial insertion are potential risk factors for lead dislodgement. In addition, the degree of slack presents either a positive or negative effect on dislodgement risk depending on the branch angle. The prevention of acute lead dislodgment can be enforced by inducing as much static friction force as possible at the lead-vein interface, while reducing the external force. If the latter exceeds the former, dislodgement will occur. The present findings underscore the major risk factors for lead dislodgment, which may improve implantation criterion and future lead design.

  8. Polymerization shrinkage stress of composite resins and resin cements - What do we need to know?

    PubMed

    Soares, Carlos José; Faria-E-Silva, André Luis; Rodrigues, Monise de Paula; Vilela, Andomar Bruno Fernandes; Pfeifer, Carmem Silvia; Tantbirojn, Daranee; Versluis, Antheunis

    2017-08-28

    Polymerization shrinkage stress of resin-based materials has been related to several unwanted clinical consequences, such as enamel crack propagation, cusp deflection, marginal and internal gaps, and decreased bond strength. Despite the absence of strong evidence relating polymerization shrinkage to secondary caries or fracture of posterior teeth, shrinkage stress has been associated with post-operative sensitivity and marginal stain. The latter is often erroneously used as a criterion for replacement of composite restorations. Therefore, an indirect correlation can emerge between shrinkage stress and the longevity of composite restorations or resin-bonded ceramic restorations. The relationship between shrinkage and stress can be best studied in laboratory experiments and a combination of various methodologies. The objective of this review article is to discuss the concept and consequences of polymerization shrinkage and shrinkage stress of composite resins and resin cements. Literature relating to polymerization shrinkage and shrinkage stress generation, research methodologies, and contributing factors are selected and reviewed. Clinical techniques that could reduce shrinkage stress and new developments on low-shrink dental materials are also discussed.

  9. Along-strike supply of volcanic rifted margins: Implications for plume-influenced rifting and sudden along-strike transitions between volcanic and non-volcanic rifted margins

    NASA Astrophysics Data System (ADS)

    Ranero, C. R.; Phipps Morgan, J.

    2006-12-01

    The existence of sudden along-strike transitions between volcanic and non-volcanic rifted margins is an important constraint for conceptual models of rifting and continental breakup. We think there is a promising indirect approach to infer the maximum width of the region of upwelling that exists beneath a rifted margin during the transition from rifting to seafloor-spreading. We infer this width of ~30km from the minimum length of the ridge-offsets that mark the limits of the "region of influence" of on-ridge plumes on the axial relief, axial morphology, and crustal thickness along the ridge and at the terminations of fossil volcanic rifted margins. We adopt Vogt's [1972] hypothesis for along-ridge asthenospheric flow in a narrow vertical slot beneath the axis of plume-influenced "macro-segments" and volcanic rifted margins. We find that: (1) There is a threshold distance to the lateral offsets that bound plume-influenced macrosegments; all such "barrier offsets" are greater than ~30km, while smaller offsets do not appear to be a barrier to along-axis flow. This pattern is seen in the often abrupt transitions between volcanic and non-volcanic rifted margins; these transitions coincide with >30km ridge offsets that mark the boundary between the smooth seafloor morphology and thick crust of a plume-influenced volcanic margin and a neighboring non-volcanic margin, as recorded in the 180Ma rifting of the early N. Atlantic, the 42Ma rifting of the Kerguelen-Broken Ridge, and the 66Ma Seychelles-Indian rifting in the Indian Ocean. (2) A similar pattern is seen in the often abrupt transitions between "normal" and plume-influenced mid-ocean ridge segments, which is discussed in a companion presentation by Phipps Morgan and Ranero (this meeting). (3) The coexistence of adjacent volcanic and non-volcanic rifted margin segments is readily explained in this conceptual framework. If the volcanic margin macrosegment is plume-fed by hot asthenosphere along an axial ridge slot, while adjacent non-volcanic margin segments stretch and upwell ambient cooler subcontinental mantle, then there will be a sudden transition from volcanic to non-volcanic margins across a transform offset. (4) A 30km width for the region of ridge upwelling and melting offers a simple conceptual explanation for the apparent 30km threshold length for the existence of strike-slip transform faults and the occurrence of non-transform offsets at smaller ridge offset-distances. (5) The conceptual model leads to the interpretation of the observed characteristic ~1000km-2000km-width of plume-influenced macro-segments as a measure of the maximum potential plume supply into a subaxial slot of 5-10 cubic km per yr. (6) If asthenosphere consumption by plate-spreading is less than plume-supply into a macro-segment, then the shallow seafloor and excess gravitational spreading stresses associated with a plume-influenced ridge can lead to growth of the axial slot by ridge propagation. We think this is a promising conceptual framework with which to understand the differences between volcanic and non-volcanic rifted margins.

  10. Quantitative measurement of marginal disintegration of ceramic inlays.

    PubMed

    Hayashi, Mikako; Tsubakimoto, Yuko; Takeshige, Fumio; Ebisu, Shigeyuki

    2004-01-01

    The objectives of this study include establishing a method for quantitative measurement of marginal change in ceramic inlays and clarifying their marginal disintegration in vivo. An accurate CCD optical laser scanner system was used for morphological measurement of the marginal change of ceramic inlays. The accuracy of the CCD measurement was assessed by comparing it with microscopic measurement. Replicas of 15 premolars restored with Class II ceramic inlays at the time of placement and eight years after restoration were used for morphological measurement by means of the CCD laser scanner system. Occlusal surfaces of the restored teeth were scanned and cross-sections of marginal areas were computed with software. Marginal change was defined as the area enclosed by two profiles obtained by superimposing two cross-sections of the same location at two different times, and was expressed as the maximum depth and mean area of the enclosed region. The accuracy of this method of measurement was 4.3 ± 3.2 μm in distance and 2.0 ± 0.6% in area. Quantitative marginal changes for the eight-year period were 10 × 10 μm in depth and 50 × 10³ μm² in area at the functional cusp area and 7 × 10 μm in depth and 28 × 10³ μm² in area at the non-functional cusp area. Marginal disintegration at the functional cusp area was significantly greater than at the non-functional cusp area (Wilcoxon signed-ranks test, p < 0.05). This study constitutes a quantitative measurement of in vivo deterioration in marginal adaptation of ceramic inlays and indicates that occlusal force may accelerate marginal disintegration.

  11. Variational approach to stability boundary for the Taylor-Goldstein equation

    NASA Astrophysics Data System (ADS)

    Hirota, Makoto; Morrison, Philip J.

    2015-11-01

    Linear stability of inviscid stratified shear flow is studied by developing an efficient method for finding neutral (i.e., marginally stable) solutions of the Taylor-Goldstein equation. The classical Miles-Howard criterion states that stratified shear flow is stable if the local Richardson number J_R is greater than 1/4 everywhere. In this work, the case of J_R > 0 everywhere is considered by assuming strictly monotonic and smooth profiles of the ambient shear flow and density. It is shown that singular neutral modes that are embedded in the continuous spectrum can be found by solving one-parameter families of self-adjoint eigenvalue problems. The unstable ranges of wavenumber are searched for accurately and efficiently by adopting this method in a numerical algorithm. Because the problems are self-adjoint, the variational method can be applied to ascertain the existence of singular neutral modes. For certain shear flow and density profiles, linear stability can be proven by showing the non-existence of a singular neutral mode. New sufficient conditions, extensions of the Rayleigh-Fjortoft stability criterion for unstratified shear flows, are derived in this manner. This work was supported by JSPS Strategic Young Researcher Overseas Visits Program for Accelerating Brain Circulation # 55053270.
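
    The quantity at the center of the Miles-Howard criterion is easy to compute for given profiles. The Python sketch below evaluates the local Richardson number J_R = N²/(dU/dz)², with N² = -(g/ρ₀) dρ/dz, for illustrative hyperbolic-tangent profiles (not those of the paper) and checks it against the 1/4 threshold.

      import numpy as np

      g, rho0 = 9.81, 1000.0
      z = np.linspace(-1.0, 1.0, 201)
      U = np.tanh(5 * z)                           # ambient shear flow (example)
      rho = rho0 * (1.0 - 0.01 * np.tanh(3 * z))   # stable stratification (example)

      dU = np.gradient(U, z)
      drho = np.gradient(rho, z)
      N2 = -(g / rho0) * drho                      # buoyancy frequency squared
      J = N2 / dU**2                               # local Richardson number
      print(f"min J_R = {J.min():.3f}")   # > 1/4 everywhere would imply stability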

  12. Sediment Flux, East Greenland Margin

    DTIC Science & Technology

    1991-09-17

    Final report, October 1988 - September 1991. We investigated sediment flux across an ice-dominated, high-latitude margin. We investigated an area off the East Greenland margin where the world's second largest ice sheet still exists and where information on the extent of glaciation

  13. Genomic selection in a commercial winter wheat population.

    PubMed

    He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong

    2016-03-01

    Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection in a large commercial population comprising 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis besides additive genetic effects enhances the prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5% increase in prediction ability when shifting from additive to additive-plus-epistatic-effects models. In addition, only a marginal loss in accuracy, from 0.65 to 0.50, was observed when using the data collected in one year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should consider all the selected genotypes except those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We empirically conclude that there is a high potential for assisting commercial wheat breeding programs with genomic selection approaches.

  14. Maximum margin semi-supervised learning with irrelevant data.

    PubMed

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier that relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguishable. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S³VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program into a semi-definite programming relaxation, and finally into a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S³VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Is the profitability of Canadian tiestall farms associated with their performance on an animal welfare assessment?

    PubMed

    Villettaz Robichaud, M; Rushen, J; de Passillé, A M; Vasseur, E; Haley, D B; Pellerin, D

    2018-03-01

    In order for dairy producers to comply with animal welfare recommendations, financial investments may be required. In Canada, a new dairy animal care assessment program is currently being implemented under the proAction Initiative to determine the extent to which certain aspects of the Code of Practice are being followed and to assess the care and well-being of dairy cattle on farm. The aim of the current study was to evaluate the association between meeting the proAction animal-based and the electric trainer placement criteria and certain aspects of productivity and profitability on tiestall dairy farms. The results of a previous on-farm cow comfort assessment conducted on 100 Canadian tiestall farms were used to simulate the results of a part of the proAction Animal Care assessment on these farms. Each farm's productivity and profitability data were retrieved from the regional dairy herd improvement associations. Univariable and multivariable linear regressions were used to evaluate the associations between meeting these proAction criteria and the farms' average yearly: corrected milk production, somatic cell count (SCC), calving interval, number of breedings/cow, culling rate, prevalence of cows in third or higher lactation, and margins per cow and per kilogram of quota calculated over replacement costs. The association between milk production and the proAction lameness criterion was moderated through an interaction with the milk production genetic index which resulted in an increase in milk production per year with increasing genetic index that was steeper in farms that met the proAction lameness criterion compared with farms that did not. Meeting the proAction body condition score criterion was associated with reduced SCC and meeting the proAction electric trainer placement criterion was associated with SCC through an interaction with the farms' average SCC genetic index. The increase in SCC with increasing SCC genetic index was milder in farms that met this criterion compared with farms that did not. Farms that met the proAction electric trainer placement criterion had 4.6% more cows in their third or greater lactation. These results suggest that some associations exist between the productivity of Canadian tiestall farms and meeting several parameters of the proAction Animal Care assessment. Meeting these criteria is unlikely to impose any economic burden to the dairy industry as a whole. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Application of the quantum spin glass theory to image restoration.

    PubMed

    Inoue, J I

    2001-04-01

    Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of a black and white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.

  17. Detection and recognition of targets by using signal polarization properties

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Peralta-Fabi, Ricardo; Popov, Anatoly V.; Babakov, Mikhail F.

    1999-08-01

    The quality of radar target recognition can be enhanced by exploiting the polarization signatures of targets. A specialized X-band polarimetric radar was used for target recognition in experimental investigations. The following polarization characteristics connected to the object's geometrical properties were investigated: the amplitudes of the polarization matrix elements; an anisotropy coefficient; a depolarization coefficient; an asymmetry coefficient; the energy of the backscattered signal; and an object shape factor. A large quantity of polarimetric radar data was measured and processed to form a database of different objects under different weather conditions. The histograms of the polarization signatures were approximated by a Nakagami distribution and then used for real-time target recognition. The Neyman-Pearson criterion was used for target detection, and the criterion of maximum a posteriori probability was used for the recognition problem. Some results of experimental verification of the recognition and detection of objects with different electrophysical and geometrical characteristics in urban clutter are presented in this paper.
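
    The recognition step can be sketched as a maximum a posteriori decision between classes whose amplitude feature follows fitted Nakagami distributions. The class parameters and priors below are invented for illustration.

      import numpy as np
      from math import gamma

      # Nakagami-m probability density with shape m and spread omega.
      def nakagami_pdf(x, m, omega):
          return (2.0 * m**m / (gamma(m) * omega**m)
                  * x**(2 * m - 1) * np.exp(-m * x**2 / omega))

      classes = {                       # class: (m, omega, prior), all assumed
          "vehicle":    (2.0, 1.5, 0.3),
          "vegetation": (1.0, 0.8, 0.7),
      }

      def classify(x):
          post = {c: p * nakagami_pdf(x, m, w)
                  for c, (m, w, p) in classes.items()}
          return max(post, key=post.get)    # maximum a posteriori class

      print(classify(1.4), classify(0.5))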

  18. On the complexity of search for keys in quantum cryptography

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2016-03-01

    The trace distance is used as a security criterion in proofs of security of keys in quantum cryptography. Some authors doubted that this criterion can be reduced to criteria used in classical cryptography. The following question has been answered in this work. Let a quantum cryptography system provide an ε-secure key such that (1/2)‖ρ_XE − ρ_U ⊗ ρ_E‖₁ < ε, which will be repeatedly used in classical encryption algorithms. To what extent does the ε-secure key reduce the number of search steps (guesswork) compared to the use of ideal keys? A direct relation has been demonstrated between the complexity of an exhaustive search over keys, which is one of the main security criteria in classical systems, and the trace distance used in quantum cryptography. Bounds for the minimum and maximum numbers of search steps for the determination of the actual key have been presented.

  19. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    NASA Technical Reports Server (NTRS)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

    The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal-mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.

  20. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
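
    The brute-force enumeration used above as a baseline is easy to sketch. The snippet below, a simplified stand-in rather than the article's MINLP solvers, grid-searches ARMA(p, q) orders with statsmodels, whose ARIMA implementation evaluates the likelihood via Kalman filter recursions and enforces stationarity and invertibility by default; the simulated series and order bounds are assumptions.

```python
# Brute-force order selection sketch: enumerate ARMA(p, q) and keep the
# minimum-BIC fit (statsmodels uses a Kalman-filter likelihood and enforces
# stationarity/invertibility by default).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

np.random.seed(0)
# AR polynomial convention: [1, -phi1]; MA polynomial: [1, theta1].
y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3], nsample=400)

best = None
for p in range(4):
    for q in range(4):
        try:
            res = ARIMA(y, order=(p, 0, q)).fit()
        except Exception:
            continue                 # skip orders that fail to estimate
        if best is None or res.bic < best[0]:
            best = (res.bic, p, q)

bic, p, q = best
print(f"selected ARMA({p},{q}) with BIC = {bic:.1f}")
```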

  1. Wave instabilities in the presence of non vanishing background in nonlinear Schrödinger systems

    PubMed Central

    Trillo, S.; Gongora, J. S. Totero; Fratalocchi, A.

    2014-01-01

    We investigate wave collapse ruled by the generalized nonlinear Schrödinger (NLS) equation in 1+1 dimensions, for localized excitations with non-zero background, establishing through virial identities a new criterion for blow-up. When collapse is arrested, a semiclassical approach allows us to show that the system can favor the formation of dispersive shock waves. The general findings are illustrated with a model of interest to both classical and quantum physics (cubic-quintic NLS equation), demonstrating a radically novel scenario of instability, where solitons identify a marginal condition between blow-up and occurrence of shock waves, triggered by arbitrarily small mass perturbations of different sign. PMID:25468032

  2. Statistical evaluation of the Local Lymph Node Assay.

    PubMed

    Hothorn, Ludwig A; Vohr, Hans-Werner

    2010-04-01

    In the Local Lymph Node Assay, measured endpoints for each animal, such as cell proliferation, cell counts and/or lymph node weight, should be evaluated separately. The primary criterion for a positive response is that the estimated stimulation index is larger than a specified relative threshold that is endpoint- and strain-specific. When the lower confidence limit for ratio-to-control comparisons is larger than a relevance threshold, a biologically relevant increase can be concluded according to the proof of hazard. Alternatively, when the upper confidence limit for ratio-to-control comparisons is smaller than a tolerable margin, harmlessness can be concluded according to a proof of safety. Copyright 2009 Elsevier Inc. All rights reserved.
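
    A minimal numerical sketch of the two decision rules follows, assuming approximately lognormal endpoints so that the ratio-to-control confidence limits can be formed on the log scale. The measurements and both thresholds are placeholders, not the endpoint- and strain-specific values the assay prescribes.

```python
# Sketch (placeholder data and thresholds): ratio-to-control confidence limits
# for the stimulation index, with proof-of-hazard and proof-of-safety checks.
import numpy as np
from scipy import stats

treated = np.array([3.1, 4.0, 2.7, 3.6, 3.3])    # e.g. proliferation readings
control = np.array([1.0, 1.3, 0.9, 1.1, 1.2])

lt, lc = np.log(treated), np.log(control)
diff = lt.mean() - lc.mean()
se = np.sqrt(lt.var(ddof=1) / lt.size + lc.var(ddof=1) / lc.size)
df = lt.size + lc.size - 2                       # simple pooled-df choice
tcrit = stats.t.ppf(0.95, df)

lower, upper = np.exp(diff - tcrit * se), np.exp(diff + tcrit * se)
print(f"90% CI for the stimulation index: ({lower:.2f}, {upper:.2f})")

RELEVANCE, TOLERABLE = 3.0, 1.5                  # endpoint-specific placeholders
print("biologically relevant increase:", lower > RELEVANCE)  # proof of hazard
print("harmlessness:", upper < TOLERABLE)                     # proof of safety
```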

  3. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    Active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past estimates of the error signal over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
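
    The core of the correntropy idea can be sketched outside the full filtered-x loop. The snippet below, plain adaptive system identification rather than the proposed FxRMC controller, shows how the Gaussian kernel weight exp(−e²/2σ²) damps updates driven by impulsive outliers, in contrast to LMS; the step size, the fixed kernel size (the paper adapts it online) and the impulse model are assumptions.

```python
# Minimal sketch of the maximum-correntropy idea (system identification, not
# the full filtered-x ANC loop): the kernel weight shrinks outlier updates.
import numpy as np

rng = np.random.default_rng(2)
n, taps, mu, sigma = 5000, 4, 0.01, 1.0
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(n)
noise = rng.standard_normal(n) * 0.05
impulses = rng.random(n) < 0.01                 # 1% impulsive bursts
noise[impulses] += rng.standard_normal(impulses.sum()) * 20.0

w_lms, w_mcc = np.zeros(taps), np.zeros(taps)
for k in range(taps, n):
    u = x[k - taps:k][::-1]                     # regressor, most recent first
    d = w_true @ u + noise[k]
    e_lms = d - w_lms @ u
    w_lms += mu * e_lms * u                     # LMS: linear in the error
    e_mcc = d - w_mcc @ u
    kernel = np.exp(-e_mcc**2 / (2 * sigma**2)) # correntropy weight
    w_mcc += mu * kernel * e_mcc * u            # MCC: outliers are damped

print("LMS coefficient error:", np.linalg.norm(w_lms - w_true))
print("MCC coefficient error:", np.linalg.norm(w_mcc - w_true))
```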

  4. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. Firstly, a bee colony algorithm is used to optimize the parameters of the wavelet neural network (network structure, initial weights, threshold values, and so on), so that training converges quickly to high precision and avoids falling into local extrema; then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm reduces sample training time effectively and improves convergence precision, and its segmentation is more accurate and effective than a traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained according to the error back-propagation algorithm).
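
    For reference, the maximum entropy criterion can be illustrated on its own as a Kapur-style histogram threshold; the sketch below omits the wavelet neural network and bee colony optimization entirely and runs on a synthetic bimodal image.

```python
# Sketch of the maximum-entropy criterion alone (Kapur-style thresholding on
# a grayscale histogram), separate from the paper's wavelet-network pipeline.
import numpy as np

def max_entropy_threshold(image):
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    P = np.cumsum(p)                             # cumulative class mass
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = P[t], 1.0 - P[t]
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t + 1] / p0                      # class-conditional histograms
        q1 = p[t + 1:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:                           # maximise total entropy
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(70, 10, 5000), rng.normal(180, 12, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(100, 100)
print("max-entropy threshold:", max_entropy_threshold(img))
```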

  5. Differences and similarities in fatigue behaviour and its influences on critical current and residual strength between Ti-Nb and Nb3Al superconducting composite wires

    NASA Astrophysics Data System (ADS)

    Ochiai, Shojiro; Oki, Yuichiro; Sekino, Fumiaki; Ohno, Hiroaki; Hojo, Masaki; Moriai, Hidezumi; Sakai, Shuji; Koganeya, Masanobu; Hayashi, Kazuhiko; Yamada, Yuichi; Ayai, Naoki; Watanabe, Kazuo

    2000-04-01

    The influences of fatigue damage introduced at room temperature on critical current at 4.2 K and residual strength at room temperature of a Ti-Nb superconducting composite wire with a low copper ratio (1.04) were studied. The experimental results were compared with those of an Nb3Al composite. The following differences between the composites were found: the fracture surface of the Ti-Nb filaments in the composite varies from a ductile pattern under static loading to a brittle one under cyclic loading, while the Nb3Al compound always shows a brittle pattern under both loadings; the fracture strength of the Ti-Nb composite is given by the net stress criterion but that of Nb3Al by the stress intensity factor criterion; in the Ti-Nb composite the critical current Ic decreases with increasing number of stress cycles simultaneously with the residual strength σc,r, while in the Nb3Al composite Ic decreases later than σc,r. On the other hand, both composites have the following similarities: the filaments are fractured due to the propagation of the fatigue crack nucleated in the copper; with increasing number of stress cycles, the damage progresses in the order of stage I (formation of cracks in the clad copper), stage II (stable propagation of the fatigue crack into the inner core) and stage III (overall fracture), among which stage II occurs in the late stage beyond 85 to 90% of the fatigue life; at intermediate maximum stress, many large cracks grow into the core portion at different cross sections but not at high and low maximum stresses; accordingly, the critical current and residual strength of the portion apart from the main crack are low for the intermediate maximum stress but not for low and high maximum stresses.

  6. Digital image analysis supports a nuclear-to-cytoplasmic ratio cutoff value of 0.5 for atypical urothelial cells.

    PubMed

    Hang, Jen-Fan; Charu, Vivek; Zhang, M Lisa; VandenBussche, Christopher J

    2017-09-01

    An elevated nuclear-to-cytoplasmic (N:C) ratio of ≥0.5 is a required criterion for the diagnosis of atypical urothelial cells (AUC) in The Paris System for Reporting Urinary Cytology. To validate the N:C ratio cutoff value and its predictive power for high-grade urothelial carcinoma (HGUC), the authors retrospectively reviewed the urinary tract cytology specimens of 15 cases of AUC with HGUC on follow-up (AUC-HGUC) and 33 cases of AUC without HGUC on follow-up (AUC-N-HGUC). The number of atypical cells in each case was recorded, and each atypical cell was photographed and digitally examined to calculate the nuclear size and N:C ratio. On average, the maximum N:C ratios of atypical cells were significantly different between the AUC-HGUC and AUC-N-HGUC cohorts (0.53 vs 0.43; P = .00009), whereas the maximum nuclear sizes of atypical cells (153.43 μm² vs 201.47 μm²; P = .69) and the number of atypical cells per case (10.13 vs 7.88; P = .12) were not found to be significantly different. Receiver operating characteristic analysis demonstrated that the maximum N:C ratio alone had high discriminatory capacity (area under the curve, 79.19%; 95% confidence interval, 64.19%-94.19%). The optimal maximum N:C ratio threshold was 0.486, giving a sensitivity of 73.3% and a specificity of 84.8% for predicting HGUC on follow-up. The identification of AUC with an N:C ratio >0.486 has a high predictive power for HGUC on follow-up in AUC specimens. This justifies using the N:C ratio as a required criterion for the AUC category. Individual laboratories using different cytopreparation methods may require independent validation of the N:C ratio cutoff value. Cancer Cytopathol 2017;125:710-6. © 2017 American Cancer Society.
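
    The sketch below shows how such a cutoff is typically derived: synthetic maximum N:C values centered on the cohort means reported above, a scikit-learn ROC curve, and Youden's J to pick the operating point. The simulated spreads are assumptions, so the numbers will not reproduce the study's exact cutoff.

```python
# Sketch with synthetic values (not the study's measurements): AUC and an
# optimal maximum-N:C cutoff from follow-up labels via ROC and Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
nc_hguc = rng.normal(0.53, 0.06, 15)     # cases with HGUC on follow-up
nc_other = rng.normal(0.43, 0.06, 33)    # cases without
scores = np.concatenate([nc_hguc, nc_other])
labels = np.r_[np.ones(15), np.zeros(33)]

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                            # Youden's J statistic
best = np.argmax(j)
print(f"AUC = {roc_auc_score(labels, scores):.3f}")
print(f"optimal cutoff = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```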

  7. [Dependent individuals classification based on the 1999 Disabilities, Impairments and Health Status Survey].

    PubMed

    Albarrán Lozano, Irene; Alonso González, Pablo

    2006-01-01

    In order to move to a system in which formal medical-social care is a priority and in which the institutions play a greater role, it is advisable to analyze the different actual situations of those individuals who are dependent. The objective of this study consists of classifying the dependent Spanish population into different groups, each with its own distinguishing characteristics. The data from the Disabilities, Deficiencies and Health Status Survey (Spanish National Institute of Statistics, 1999) relate to the non-institutionalized population over 6 years of age. Groups are configured using multivariate analysis techniques in terms of age, sex, daily living activity-related disabilities, the severity thereof and the number of hours of care per week. Nine statistically heterogeneous profiles were obtained. In the first three, with 341,262 individuals (24.5% of the total dependent population), the individuals in question were under 60 years of age. With the exception of one group (61% males, within the 13-36 age range), females were predominant. In the group least affected by lack of self-sufficiency (136,240 individuals), only 26% had problems getting around (maximum degree of severity in 7%), 25% had problems with care (maximum degree of severity in 10%), and only 32% needed more than 15 hours of care per week. Most of them (74%) are females within the 82-90 age range. The dependent Spanish population is revealed to be in different situations related to different categories in terms of age, sex, and the number and severity of the daily living activities affected. Different assessments of severity related to said categories, employing both the Spanish National Institute of Statistics maximum-severity criterion and an alternative criterion that incorporates the degree of severity of all daily living activities rather than only the maximum, confirm that the nine groups differ from one another.

  8. Suitability of ponds formed by strip mining in eastern Oklahoma for public water supply, aquatic life, waterfowl habitat, livestock watering, irrigation, and recreation

    USGS Publications Warehouse

    Parkhurst, Renee S.

    1994-01-01

    A study of coal ponds formed by strip mining in eastern Oklahoma included 25 ponds formed by strip mining from the Croweburg, McAlester, and Iron Post coal seams and 6 noncoal-mine ponds in the coal-mining area. Water-quality samples were collected in the spring and summer of 1985 to determine the suitability of the ponds for public water supply, aquatic life, waterfowl habitat, livestock watering, irrigation, and recreation. The rationale for water-quality criteria and the criteria used for each proposed use are discussed. The ponds were grouped by the coal seam mined or as noncoal-mine ponds, and the number of ponds from each group containing water that exceeded a given criterion is noted. Water in many of the ponds can be used for public water supplies if other sources are not available. Water in most of these ponds exceeds one or more secondary standards, but meets all primary standards. Water samples from the epilimnion (shallow strata as determined by temperature) of six ponds exceeded one or more primary standards, which are criteria protective of human health. Water samples from five of eight Iron Post ponds exceeded the selenium criterion. Water samples from all 31 ponds exceeded one or more secondary standards, which are for the protection of human welfare. The criteria most often exceeded were iron, manganese, dissolved solids, and sulfate, which are secondary standards. The criteria for iron and manganese were exceeded more frequently in the noncoal-mine ponds, whereas ponds formed by strip mining were more likely to exceed the criteria for dissolved solids and sulfate. The ponds are marginally suited for aquatic life. Water samples from the epilimnion of 18 ponds exceeded criteria protective of aquatic life. The criteria for mercury and iron were exceeded most often. Little difference was detected between mine ponds and noncoal-mine ponds. Dissolved oxygen concentrations in the hypolimnion (deepest strata) of all the ponds were less than the minimum criterion during the summer. This decreases available fish habitat and affects the type and number of benthic invertebrates. The ponds are generally well suited for use by wintering and migrating waterfowl. Thirteen of the ponds contained water that exceeded the pH, alkalinity, and selenium criteria. The noncoal-mine ponds had the largest percentage of ponds exceeding pH and alkalinity criteria. Water samples from five of eight Iron Post ponds exceeded the selenium criterion. All ponds are generally unsuitable as waterfowl habitat during the summer because of high temperatures and low dissolved oxygen. Most of the ponds are well suited for livestock watering. Water samples from the epilimnion of 29 ponds met all chemical and physical criteria. Water samples from five ponds exceeded the criteria in the hypolimnion. Mine ponds exceeded chemical and physical criteria more often than noncoal-mine ponds. All the ponds contained phytoplankton species potentially toxic to livestock. Water from most of the ponds is marginally suitable for irrigation of sensitive crops, but is more suitable for irrigation of semitolerant and tolerant crops. Most major cash crops grown in eastern Oklahoma are semitolerant and tolerant crops. Water from the epilimnion of 14 ponds was suitable for irrigation under almost all conditions. Water from the epilimnion of 20 ponds was suitable for irrigation of semitolerant crops, and water from the epilimnion of 25 ponds was suitable for irrigation of tolerant crops. The dissolved solids criterion was exceeded most often. Most of the ponds would not be suitable for swimming. The pH criterion was exceeded in 17 ponds, and turbidity restricts the visibility needed for diving in 23 ponds. Little difference was detected between mine ponds and noncoal-mine ponds. Many of the ponds formed by strip mining have steep banks that may be dangerous to swimmers.

  9. A search for evidence of solar rotation in Super-Kamiokande solar neutrino dataset

    NASA Astrophysics Data System (ADS)

    Desai, Shantanu; Liu, Dawei W.

    2016-09-01

    We apply the generalized Lomb-Scargle (LS) periodogram, proposed by Zechmeister and Kürster, to the solar neutrino data from Super-Kamiokande (Super-K), using data from its first five years. For each peak in the LS periodogram, we evaluate the statistical significance in two different ways. The first method involves calculating the False Alarm Probability (FAP) using non-parametric bootstrap resampling, and the second is calculating the difference in Bayesian Information Criterion (BIC) between the null hypothesis, viz. that the data contain only noise, and the hypothesis that the data contain a peak at a given frequency. Using these methods, we scan the frequency range of 7-14 cycles per year to look for any peaks caused by solar rotation, since this is the proposed explanation for the statistically significant peaks found by Sturrock and collaborators in the Super-K dataset. From our analysis, we confirm that, similar to Sturrock et al., the maximum peak occurs at a frequency of 9.42/year, corresponding to a period of 38.75 days. The FAP for this peak is about 1.5%, and the difference in BIC (between pure white noise and this peak) is about 4.8. We note that the significance depends on the frequency band used to search for peaks, and hence it is important to use a search band appropriate for solar rotation. However, the significance of this peak based on the BIC value is marginal, and more data are needed to confirm whether the peak persists and is real.
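
    The scan is straightforward to reproduce in outline with astropy, whose LombScargle defaults to the floating-mean (generalized) form of Zechmeister and Kürster and offers a bootstrap false alarm probability. The sketch below runs on a synthetic, unevenly sampled series, not Super-K data.

```python
# Sketch with synthetic data: generalized Lomb-Scargle scan over the
# 7-14 cycles/yr solar-rotation band with a bootstrap false-alarm probability.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 5, 300))                  # years, uneven sampling
y = 1.0 + 0.05 * np.sin(2 * np.pi * 9.42 * t) + rng.normal(0, 0.2, t.size)

ls = LombScargle(t, y, fit_mean=True)                # floating-mean form
freq = np.linspace(7, 14, 2000)                      # search band (cycles/yr)
power = ls.power(freq)
f_peak = freq[np.argmax(power)]
fap = ls.false_alarm_probability(power.max(), method='bootstrap',
                                 minimum_frequency=7, maximum_frequency=14)
print(f"peak at {f_peak:.2f} cycles/yr, bootstrap FAP = {fap:.3f}")
```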

  10. Salt geometry influence on present-day stress orientations in the Nile Delta: Insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Eckert, Andreas; Zhang, Weicheng

    2016-02-01

    The offshore Nile Delta displays sharply contrasting orientations of the maximum horizontal stress, SH, in regions above Messinian evaporites (suprasalt) and regions below Messinian evaporites (subsalt). Published stress orientation data predominantly show margin-normal suprasalt SH orientations but a margin-parallel subsalt SH orientation. While these data sets provide the first major evidence that evaporite sequences can act as mechanical detachment horizons, the cause for the stress orientation contrast remains unclear. In this study, 3D finite element analysis is used to investigate the causes for stress re-orientation based on two different hypotheses. The modeling study evaluates the influence of different likely salt geometries and whether stress reorientations are the result of basal drag forces induced by gravitational gliding or whether they represent localized variations due to mechanical property contrasts. The modeling results show that when salt is present as a continuous layer, gravitational gliding occurs and basal drag forces induced in the suprasalt layers result in the margin-normal principal stress becoming the maximum horizontal stress. With the margin-normal stress increase being confined to the suprasalt layers, the salt acts as a mechanical detachment horizon, resulting in different SH orientations in the suprasalt compared to the subsalt layers. When salt is present as isolated bodies localized stress variations occur due to the mechanical property contrasts imposed by the salt, also resulting in different SH orientations in the suprasalt compared to the subsalt layers. The modeling results provide additional quantitative evidence to confirm the role of evaporite sequences as mechanical detachment horizons.

  11. Challenges in process marginality for advanced technology nodes and tackling its contributors

    NASA Astrophysics Data System (ADS)

    Narayana Samy, Aravind; Schiwon, Roberto; Seltmann, Rolf; Kahlenberg, Frank; Katakamsetty, Ushasree

    2013-10-01

    Process margin is becoming critical as nodes shrink, because ArF lithography tools are reaching their physical limits (Rayleigh's criterion). The k1 factor is pushed as low as practical for better resolution while preserving process margin (k1 = 0.31 for 28 nm metal patterning). In this paper, we give an overview of the various contributors that limit process margins in advanced technology nodes and how the challenges have been tackled in a modern foundry model. Advanced OPC algorithms are used to make the design content on the mask optimal for patterning. However, as we work at the physical limit, critical features (hot-spots) are very susceptible to litho process variations. Furthermore, etch can have a significant impact as well: a pattern that still looks healthy at litho can fail due to etch interactions. As a consequence, the traditional 2D contour output from ORC tools cannot accurately predict all defects and hence cannot fully correct them in the early mask tapeout phase. This makes a huge difference to fast ramp-up and high yield in a competitive foundry market. We explain in this paper how the early introduction of 3D resist-model-based simulation of resist profiles (resist top-loss, bottom bridging, top-rounding, etc.) helped in our prediction and correction of hot-spots in the early 28 nm process development phase. The paper also discusses the other contributors to overall process window reduction, such as mask 3D effects and wafer topography (focus shifts/variations), and how these have been addressed with different simulation efforts in a fast and timely manner.

  12. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have been proposed so far. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
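
    The computational identity at the heart of the method can be illustrated generically: a T-dimensional integral over correlated Gaussian process values collapses to a single multivariate normal CDF evaluation. The covariance kernel and thresholds below are assumptions for illustration, not the paper's model.

```python
# Generic sketch of the computational idea: P(Z_1 <= a_1, ..., Z_T <= a_T)
# over a correlated Gaussian process reduces to one multivariate normal CDF,
# replacing brute-force T-dimensional quadrature.
import numpy as np
from scipy.stats import multivariate_normal

T = 8
times = np.arange(T)
cov = np.exp(-0.5 * np.abs(times[:, None] - times[None, :]))  # OU-type kernel
a = np.full(T, 0.5)                                           # thresholds

# One CDF call replaces an 8-dimensional integration over the latent process.
prob = multivariate_normal(mean=np.zeros(T), cov=cov).cdf(a)
print(f"P(Z_t <= 0.5 for all t) = {prob:.4f}")
```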

  13. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology for estimating a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations, and hence log-return marginal distributions with heavy tails. We consider two routes for choosing the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
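
    A minimal MAP sketch in this spirit assumes Gaussian returns with a double exponential (Laplace) prior on log-volatility increments, smoothed slightly so a quasi-Newton solver applies; it is not the paper's HMM formulation or its regularization-selection routes.

```python
# Minimal sketch (assumed Gaussian returns; smoothed L1 prior on increments):
# MAP log-volatility with a double-exponential prior, which permits jumps.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
T = 400
true_logvar = np.where(np.arange(T) < 200, -2.0, 0.0)   # one volatility jump
r = rng.standard_normal(T) * np.exp(true_logvar / 2)

lam, eps = 5.0, 1e-6
def objective(x):
    nll = 0.5 * np.sum(x + r**2 * np.exp(-x))           # Gaussian likelihood
    dx = np.diff(x)
    return nll + lam * np.sum(np.sqrt(dx**2 + eps))     # smoothed Laplace prior

def gradient(x):
    g = 0.5 * (1.0 - r**2 * np.exp(-x))
    dx = np.diff(x)
    w = lam * dx / np.sqrt(dx**2 + eps)
    g[:-1] -= w                                         # left end of each step
    g[1:] += w                                          # right end of each step
    return g

res = minimize(objective, np.full(T, np.log(r.var())), jac=gradient,
               method="L-BFGS-B")
print("converged:", res.success)
print("mean log-variance, first/second half:",
      res.x[:200].mean().round(2), res.x[200:].mean().round(2))
```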

  14. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region, and a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions is so time demanding that in practice we cannot really find the G-optimal design with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  15. A Very Stable High Throughput Taylor Cone-jet in Electrohydrodynamics

    PubMed Central

    Morad, M. R.; Rajabi, A.; Razavi, M.; Sereshkeh, S. R. Pejman

    2016-01-01

    A stable capillary liquid jet formed by an electric field is an important physical phenomenon for formation of controllable small droplets, power generation and chemical reactions, printing and patterning, and chemical-biological investigations. In electrohydrodynamics, the well-known Taylor cone-jet has a stability margin within a certain range of the liquid flow rate (Q) and the applied voltage (V). Here, we introduce a simple mechanism to greatly extend the Taylor cone-jet stability margin and produce a very high throughput. For an ethanol cone-jet emitting from a simple nozzle, the stability margin is obtained within 1 kV for low flow rates, decaying with flow rate up to 2 ml/h. By installing a hemispherical cap above the nozzle, we demonstrate that the stability margin could increase to 5 kV for low flow rates, decaying to zero for a maximum flow rate of 65 ml/h. The governing borders of stability margins are discussed and obtained for three other liquids: methanol, 1-propanol and 1-butanol. For a gravity-directed nozzle, the produced cone-jet is more stable against perturbations and the axis of the spray remains in the same direction through the whole stability margin, unlike the cone-jet of conventional simple nozzles. PMID:27917956

  16. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    PubMed

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
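
    The numerical averaging step can be sketched with Gauss-Hermite quadrature: subject-specific probabilities from an assumed random-intercepts logistic model are integrated over the intercept distribution to recover a population-averaged (marginal) curve. The coefficients below are illustrative, not estimates from the article's offending data.

```python
# Sketch of the numerical-averaging idea (illustrative coefficients): average
# conditional logistic probabilities over the random-intercept distribution
# via Gauss-Hermite quadrature to obtain the marginal (group-level) curve.
import numpy as np

b0, b1, sigma = -1.5, 0.12, 1.0          # assumed conditional-model estimates
nodes, weights = np.polynomial.hermite.hermgauss(40)

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def marginal_prob(age):
    # E_u[expit(b0 + b1*age + u)] with u ~ N(0, sigma^2): substitute
    # u = sigma*sqrt(2)*t so the Gauss-Hermite weight exp(-t^2) matches.
    z = b0 + b1 * age + sigma * np.sqrt(2.0) * nodes
    return (weights * expit(z)).sum() / np.sqrt(np.pi)

ages = np.arange(10, 33)
conditional = expit(b0 + b1 * ages)      # curve for a typical (u = 0) subject
marginal = np.array([marginal_prob(a) for a in ages])
print("conditional vs marginal probability at age 20:",
      round(expit(b0 + b1 * 20), 3), round(marginal_prob(20), 3))
```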

  17. Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling

    PubMed Central

    Barnhart, Paul R.; Gillam, Erin H.

    2016-01-01

    Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modeling has become widely used as a substitute for simplistic distribution mapping, giving regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature, not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modeling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936

  18. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  19. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  20. Anthropometry as a predictor of bench press performance done at different loads.

    PubMed

    Caruso, John F; Taylor, Skyler T; Lutz, Brant M; Olson, Nathan M; Mason, Melissa L; Borgsmiller, Jake A; Riner, Rebekah D

    2012-09-01

    The purpose of our study was to examine the ability of anthropometric variables (body mass, total arm length, biacromial width) to predict bench press performance at both maximal and submaximal loads. Our methods required 36 men to visit our laboratory and submit to anthropometric measurements, followed by lifting as much weight as possible in good form one time (1 repetition maximum, 1RM) in the exercise. They made 3 more visits in which they performed 4 sets of bench presses to volitional failure at 1 of 3 (40, 55, or 75% 1RM) submaximal loads. An accelerometer (Myotest Inc., Royal Oak MI) measured peak force, velocity, and power after each submaximal load set. With stepwise multivariate regression, our 3 anthropometric variables attempted to explain significant amounts of variance for 13 bench press performance indices. For criterion measures that reached significance, separate Pearson product moment correlation coefficients further assessed if the strength of association each anthropometric variable had with the criterion was also significant. Our analyses showed that anthropometry explained significant amounts (p < 0.05) of variance for 8 criterion measures. It was concluded that body mass had strong univariate correlations with 1RM and force-related measures, total arm length was moderately associated with 1RM and criterion variables at the lightest load, whereas biacromial width had an inverse relationship with the peak number of repetitions performed per set at the 2 lighter loads. Practical applications suggest results may help coaches and practitioners identify anthropometric features that may best predict various measures of bench press prowess in athletes.

  1. Optimizing LED lighting for space plant growth unit: Joint effects of photon flux density, red to white ratios and intermittent light pulses.

    PubMed

    Avercheva, O V; Berkovich, Yu A; Konovalova, I O; Radchenko, S G; Lapach, S N; Bassarskaya, E M; Kochetova, G V; Zhigalova, T V; Yakovleva, O S; Tarakanov, I G

    2016-11-01

    The aim of this work was to choose a quantitative optimality criterion for estimating the quality of plant LED lighting regimes inside space greenhouses and to construct regression models of crop productivity and the optimality criterion depending on the level of photosynthetic photon flux density (PPFD), the proportion of the red component in the light spectrum and the duration of the duty cycle (Chinese cabbage Brassica chinensis L. as an example). The properties of the obtained models were described in the context of predicting crop dry weight and the behavior of the optimality criterion when varying plant lighting parameters. Results of the fractional 3-factor experiment demonstrated that the share of the PPFD level's participation in crop dry weight accumulation was 84.4% at almost any combination of the other lighting parameters, but when the PPFD value increased up to 500 µmol m⁻² s⁻¹, pulsed light and supplemental light from red LEDs could additionally increase crop productivity. Analysis of the optimality criterion response to variation of the lighting parameters showed that the maximum coordinates were the following: PPFD = 500 µmol m⁻² s⁻¹, about a 70% proportion of the red component of the light spectrum (PPFD_LEDred/PPFD_LEDwhite = 1.5) and a duty cycle with a period of 501 µs. Thus, LED crop lighting with these parameters was optimal for achieving high crop productivity and for efficient use of energy in the given range of lighting parameter values. Copyright © 2016 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  2. An In Vitro Comparison of the Marginal Adaptation Accuracy of CAD/CAM Restorations Using Different Impression Systems.

    PubMed

    Shembesh, Marwa; Ali, Ala; Finkelman, Matthew; Weber, Hans-Peter; Zandparsa, Roya

    2017-10-01

    To compare the marginal adaptation of 3-unit zirconia fixed dental prostheses (FDPs) obtained from intraoral digital scanners (Lava True Definition, Cadent iTero), scanning of a conventional silicone impression, and the resulting master cast with an extraoral scanner (3Shape lab scanner). One reference model was fabricated from intact, non-carious, unrestored human mandibular left first premolar and first molar teeth (teeth #19 and 21), prepared for a three-unit all-ceramic FDP. Impressions of the reference model were obtained using four impression systems (n = 10): group 1 (PVS impression scan), group 2 (stone cast scan), group 3 (Cadent iTero), and group 4 (Lava True Definition). Then the three-unit zirconia FDPs were milled. Marginal adaptation of the zirconia FDPs was evaluated using an optical comparator at four points on each abutment. The mean (SD) was reported for each group. One-way ANOVA was used to assess the statistical significance of the results, with post hoc tests conducted via Tukey's HSD. p < 0.05 was considered statistically significant. All analyses were done using SPSS 22.0. The mean (SD) marginal gaps for the recorded data from highest to lowest were silicone impression scan 81.4 μm (6.8), Cadent iTero scan 62.4 μm (5.0), master cast scan 50.2 μm (6.1), and Lava True Definition scan 26.6 μm (4.7). One-way ANOVA revealed significant differences (p < 0.001) in the mean marginal gap among the groups. The Tukey's HSD tests demonstrated that the differences between all groups (silicone impression scan, master cast scan, Lava True Definition scan, Cadent iTero scan) were statistically significant (all p < 0.001). On the basis of the criterion of 120 μm as the limit of clinical acceptance, the marginal discrepancy values of all groups were clinically acceptable. Within the confines of this in vitro study, it can be concluded that the marginal gap of all impression techniques was within the acceptable clinical limit (120 μm). Group 4 (Lava True Definition) showed the lowest average gap among all groups, followed by group 2 (stone cast scan), group 3 (Cadent iTero), and group 1 (PVS impression scan); these differences were statistically significant. © 2016 by the American College of Prosthodontists.

  3. Maximum number of live births per donor in artificial insemination.

    PubMed

    Wang, Charlotte; Tsai, Miao-Yu; Lee, Mei-Hsien; Huang, Su-Yun; Kao, Chen-Hung; Ho, Hong-Nerng; Hsiao, Chuhsing Kate

    2007-05-01

    The maximal number of live births (k) per donor has usually been determined from a cultural and social perspective. It has rarely been decided on the basis of scientific evidence or discussed from a mathematical or probabilistic viewpoint. To recommend a value for k, we propose three criteria to evaluate its impact on consanguinity and disease incidence due to artificial insemination by donor (AID). The first approach considers the optimization of k under the criterion of a fixed tolerable number of consanguineous matings due to AID. The second approach optimizes k under a fixed allowable average coefficient of inbreeding. This approach is particularly helpful when assessing the impact on the public is of interest. The third criterion considers specific inherited diseases. This approach is useful when evaluating an individual's risk of genetic disease, and when different diseases are considered, this criterion can be easily adapted. All these derivations are based on the assumption of a shortage of gamete donors due to great demand and insufficient supply. Our results indicate that a strong degree of assortative mating, small population size and insufficient supply of gamete donors will lead to greater risk of consanguinity. Recommendations under other settings are also tabulated for reference. A web site for calculating the limit for live births per donor is available.

  4. Hygrothermal Performance of West Coast Wood Deck Roofing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallin, Simon B.; Kehrer, Manfred; Desjarlais, Andre Omer

    2014-02-01

    Simulations of roofing assemblies are necessary in order to understand and adequately predict the actual hygrothermal performance. At the request of GAF, simulations have been set up to verify the difference in performance between white and black roofing membrane colors in relation to critical moisture accumulation for traditional low-slope wood deck roofing systems typically deployed in various western U.S. climate zones. The performance of these roof assemblies has been simulated in the hygrothermal calculation tool WUFI, and the results were evaluated based on a defined criterion for moisture safety. The criterion was defined as the maximum accepted water content for wood materials and the highest acceptable moisture accumulation rate in relation to the risk of rot. Based on the criterion, the roof assemblies were classified as either safe, risky or assumed to fail. The roof assemblies were simulated in different western climates, with varying insulation thicknesses, two different types of wooden decking, varying interior moisture loads and either a high or low solar absorptivity at the roof surface (black or white surface color). The results show that the performance of the studied roof assemblies differs with regard to all of the varied parameters, especially the climate and the indoor moisture load.

  5. Two-dimensional mapping of needle visibility with linear and curved array for ultrasound-guided interventional procedure

    NASA Astrophysics Data System (ADS)

    Susanti, Hesty; Suprijanto, Kurniadi, Deddy

    2018-02-01

    Needle visibility in ultrasound-guided techniques is a crucial factor for a successful interventional procedure. It is affected by several factors: puncture depth, insertion angle, needle size and material, and imaging technology. Because of these factors, the needle is not always clearly visible. 20 G needles of 15 cm length (Nano Line, facet) were inserted into a water bath with varying insertion angles and depths. Ultrasound measurements were performed with a BK-Medical Flex Focus 800 using a 12 MHz linear array and a 5 MHz curved array in Ultrasound Guided Regional Anesthesia mode. We propose 3 criteria to evaluate needle visibility: maximum intensity, mean intensity, and the ratio between minimum and maximum intensity. Those criteria were then depicted in representative maps for practical purposes. The best criterion candidate for representing needle visibility was criterion 1. Generally, the appearance pattern of the needle under this criterion was relatively consistent: for the linear array, visibility was relatively poor in the middle part of the shaft, while for the curved array, the needle was relatively better visible toward the end of the shaft. With further investigation, for example with the use of a tissue-mimicking phantom, representative maps can be built for future practical purposes, i.e. as a tool for clinicians to ensure better needle placement in clinical applications. They will help clinicians avoid the "dead" areas where the needle is not well visible, reducing the risk of traversing vital structures and the number of required insertions, resulting in less patient morbidity. These simple criteria and representative maps can be used to evaluate general visibility patterns for a wide range of needle types and sizes in different insertion media. This information is also important as an early investigation for future research on needle visibility improvement, i.e. the development of beamforming strategies and ultrasound-enhanced (echogenic) needles.
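
    The three proposed criteria are simple to compute once an intensity profile has been extracted along the needle axis from a B-mode image; the sketch below uses a synthetic profile with the mid-shaft dimming described for the linear array.

```python
# Sketch of the three visibility criteria computed from a needle-axis
# intensity profile (synthetic profile; real values come from B-mode images).
import numpy as np

rng = np.random.default_rng(7)
profile = np.clip(180 - 60 * np.sin(np.linspace(0, np.pi, 50))
                  + rng.normal(0, 5, 50), 0, 255)   # dimmer mid-shaft

criterion_1 = profile.max()                  # maximum intensity
criterion_2 = profile.mean()                 # mean intensity
criterion_3 = profile.min() / profile.max()  # min-to-max intensity ratio

print(f"max={criterion_1:.0f}, mean={criterion_2:.0f}, ratio={criterion_3:.2f}")
```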

  6. Water Quality Criteria for Colored Smokes: 1,4-Diamino-2,3- Dihydroanthraquinone

    DTIC Science & Technology

    1988-01-01

    overprotection or underprotection. It is not enough that a criterion be the best estimate obtainable using available data; it is equally important that a... acceptable BAF can be used in place of a BCF. 3. If a maximum permissible tissue concentration is available for a substance (e.g., parent material or... parent material plus metabolite), the tissue concentration used in BCF calculations should be for the same substance. Otherwise the tissue concentration...

  7. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    NASA Astrophysics Data System (ADS)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.

  8. Magnetic states of linear defects in graphene monolayers: Effects of strain and interaction

    NASA Astrophysics Data System (ADS)

    Alexandre, Simone S.; Nunes, R. W.

    2017-08-01

    The combined effects of defect-defect interaction and strains of up to 10% on the onset of magnetic states in the quasi-one-dimensional electronic states generated by the so-called 558 linear defect in graphene monolayers are investigated by means of ab initio calculations. Results are analyzed on the basis of the heuristics of the Stoner criterion. We find that conditions for the emergence of magnetic states on the 558 defect can be tuned by uniaxial tensile parallel strains (along the defect direction) as well as by uniaxial compressive perpendicular strains, at both limits of isolated and interacting 558 defects. Parallel tensile strains and perpendicular compressive strains are shown to give rise to two cooperative effects that favor the emergence of itinerant magnetism on the 558 defect in graphene: enhancement of the density of states (DOS) of the resonant defect states in the region of the Fermi level and tuning of the Fermi level to the maximum of the related DOS peak. On the other hand, parallel compressive strains and perpendicular tensile strains are shown to be detrimental to the development of magnetic states in the 558 defect, because in these cases the Fermi level is found to shift away from the maximum of the DOS of the defect states. Effects of isotropic and anisotropic biaxial strains are also analyzed in terms of the conditions encoded in the Stoner criterion.

  9. Investigation of acceleration characteristics of a single-spool turbojet engine

    NASA Technical Reports Server (NTRS)

    Oppenheimer, Frank L; Pack, George J

    1953-01-01

    Operation of a single-spool turbojet engine with constant exhaust-nozzle area was investigated at one flight condition. Data were obtained by subjecting the engine to approximate-step changes in fuel flow, and the information necessary to show the relations of acceleration to the sensed engine variables was obtained. These data show that maximum acceleration occurred prior to stall and surge. In the low end of the engine-speed range the margin was appreciable; in the high-speed end the margin was smaller but had not been completely defined by these data. Data involving acceleration as a function of speed, fuel flow, turbine-discharge temperature, compressor-discharge pressure, and thrust have been presented and an effort has been made to show how a basic control system could be improved by addition of an override in which the acceleration characteristic is used not only to prevent the engine from entering the surge region but also to obtain acceleration along the maximum acceleration line during throttle bursts.

  10. Extreme warming, photic zone euxinia and sea level rise during the Paleocene/Eocene Thermal Maximum on the Gulf of Mexico Coastal Plain; connecting marginal marine biotic signals, nutrient cycling and ocean deoxygenation

    NASA Astrophysics Data System (ADS)

    Sluijs, A.; van Roij, L.; Harrington, G. J.; Schouten, S.; Sessa, J. A.; LeVay, L. J.; Reichart, G.-J.; Slomp, C. P.

    2013-12-01

    The Paleocene/Eocene Thermal Maximum (PETM, ~56 Ma) was a ~200 kyr episode of global warming, associated with massive injections of 13C-depleted carbon into the ocean-atmosphere system. Although climate change during the PETM is relatively well constrained, effects on marine oxygen and nutrient cycling remain largely unclear. We identify the PETM in a sediment core from the US margin of the Gulf of Mexico. Biomarker-based paleotemperature proxies (MBT/CBT and TEX86) indicate that continental air and sea surface temperatures warmed from 27-29 °C to ~35 °C, although variations in the relative abundances of terrestrial and marine biomarkers may have influenced the record. Vegetation changes as recorded from pollen assemblages supports profound warming. Lithology, relative abundances of terrestrial vs. marine palynomorphs as well as dinoflagellate cyst and biomarker assemblages indicate sea level rise during the PETM, consistent with previously recognized eustatic rise. The recognition of a maximum flooding surface during the PETM changes regional sequence stratigraphic interpretations, which allows us to exclude the previously posed hypothesis that a nearby fossil found in PETM-deposits represents the first North American primate. Within the PETM we record the biomarker isorenieratane, diagnostic of euxinic photic zone conditions. A global data compilation indicates that deoxygenation occurred in large regions of the global ocean in response to warming, hydrological change, and carbon cycle feedbacks, particularly along continental margins, analogous to modern trends. Seafloor deoxygenation and widespread anoxia likely caused phosphorus regeneration from suboxic and anoxic sediments. We argue that this fuelled shelf eutrophication, as widely recorded from microfossil studies, increasing organic carbon burial along continental margins as a negative feedback to carbon input and global warming. If properly quantified with future work, the PETM offers the opportunity to assess the biogeochemical effects of enhanced phosphorus regeneration, as well as the time-scales on which this feedback operates in view of modern and future ocean deoxygenation.

  11. Using multi-objective robust decision making to support seasonal water management in the Chao Phraya River basin, Thailand

    NASA Astrophysics Data System (ADS)

    Riegels, Niels; Jessen, Oluf; Madsen, Henrik

    2016-04-01

    A multi-objective robust decision making approach is demonstrated that supports seasonal water management in the Chao Phraya River basin in Thailand. The approach uses multi-objective optimization to identify a Pareto-optimal set of management alternatives. Ensemble simulation is used to evaluate how each member of the Pareto set performs under a range of uncertain future conditions, and a robustness criterion is used to select a preferred alternative. Data mining tools are then used to identify ranges of uncertain factor values that lead to unacceptable performance for the preferred alternative. The approach is compared to a multi-criteria scenario analysis approach to estimate whether the introduction of additional complexity has the potential to improve decision making. Dry season irrigation in Thailand is managed through non-binding recommendations about the maximum extent of rice cultivation along with incentives for less water-intensive crops. Management authorities lack the authority to prevent river withdrawals for irrigation when rice cultivation exceeds recommendations. In practice, this means that water must be provided to irrigate the actual planted area because of downstream municipal water supply requirements and water quality constraints. This results in dry season reservoir withdrawals that exceed planned withdrawals, reducing carryover storage to hedge against insufficient wet season runoff. The dry season planning problem in Thailand can therefore be framed in terms of decisions, objectives, constraints, and uncertainties. Decisions include recommendations about the maximum extent of rice cultivation and incentives for growing less water-intensive crops. Objectives are to maximize benefits to farmers, minimize the risk of inadequate carryover storage, and minimize incentives. Constraints include downstream municipal demands and water quality requirements. Uncertainties include the actual extent of rice cultivation, dry season precipitation, and precipitation in the following wet season. The multi-objective robust decision making approach is implemented as follows. First, three baseline simulation models are developed: a crop water demand model, a river basin simulation model, and a model of the impact of incentives on cropping patterns. The crop water demand model estimates irrigation water demands; the river basin simulation model estimates reservoir drawdown required to meet demands given forecasts of precipitation, evaporation, and runoff; the model of incentive impacts estimates the cost of incentives as a function of marginal changes in rice yields. Optimization is used to find a set of non-dominated alternatives as a function of rice area and incentive decisions. An ensemble of uncertain model inputs is generated to represent uncertain hydrological and crop area forecasts. An ensemble of indicator values is then generated for each of the decision objectives: farmer benefits, end-of-wet-season reservoir storage, and the cost of incentives. A single alternative is selected from the Pareto set using a robustness criterion. Threshold values are defined for each of the objectives to identify ensemble members for which objective values are unacceptable, and the PRIM data mining algorithm is then used to identify input values associated with unacceptable model outcomes.
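
    The robustness-selection step can be sketched on a synthetic Pareto set: each alternative is scored by its worst case across ensemble members (a maximin criterion, one of several reasonable robustness choices), and unacceptable futures for the chosen alternative are then flagged for scenario discovery. The numbers below, and the crude flagging used as a stand-in for PRIM, are assumptions.

```python
# Sketch of the robustness step (synthetic Pareto set and ensemble): maximin
# scoring of alternatives, then flagging unacceptable futures for the winner.
import numpy as np

rng = np.random.default_rng(8)
n_alternatives, n_members = 12, 500
# benefit[i, j]: aggregate objective value of alternative i under ensemble
# member j (uncertain rainfall, runoff and planted-area realisations).
benefit = rng.normal(loc=rng.uniform(0.4, 0.9, (n_alternatives, 1)),
                     scale=rng.uniform(0.05, 0.3, (n_alternatives, 1)),
                     size=(n_alternatives, n_members))

worst_case = benefit.min(axis=1)             # maximin robustness criterion
chosen = int(np.argmax(worst_case))
print(f"preferred alternative: {chosen}, "
      f"worst-case benefit {worst_case[chosen]:.2f}")

# Scenario-discovery step (a rough stand-in for PRIM): flag ensemble members
# where the chosen alternative performs unacceptably.
unacceptable = benefit[chosen] < 0.5
print("fraction of unacceptable futures:", unacceptable.mean().round(2))
```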

  12. Performance of an airborne imaging 92/183 GHz radiometer during the Bering Sea Marginal Ice Zone Experiment (MIZEX-WEST)

    NASA Technical Reports Server (NTRS)

    Gagliano, J. A.; Mcsheehy, J. J.; Cavalieri, D. J.

    1983-01-01

    An airborne imaging 92/183 GHz radiometer was recently flown onboard NASA's Convair 990 research aircraft during the February 1983 Bering Sea Marginal Ice Zone Experiment (MIZEX-WEST). The 92 GHz portion of the radiometer was used to gather ice signature data and to generate real-time millimeter wave images of the marginal ice zone. Dry atmospheric conditions in the Arctic resulted in good surface ice signature data for the 183 GHz double sideband (DSB) channel situated ±8.75 GHz from the water vapor absorption line. The radiometer's beam scanner imaged the marginal ice zone over a ±45° swath angle about the aircraft nadir position. The aircraft altitude was 30,000 feet (9.20 km) maximum and 3,000 feet (0.92 km) minimum during the various data runs. Calculations of the minimum detectable target (ice) size for the radiometer as a function of aircraft altitude were performed. In addition, the change in atmospheric attenuation at 92 GHz under varying weather conditions was incorporated into the target size calculations. A radiometric image of surface ice at 92 GHz in the marginal ice zone is included.

  13. Thermal margin protection system for a nuclear reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musick, C.R.

    1974-02-12

    A thermal margin protection system for a nuclear reactor is described in which the coolant flow trip point and the calculated thermal margin trip point are switched simultaneously, and the thermal limit locus is made more restrictive as the allowable flow rate is decreased. The invention is characterized by calculation of the thermal limit locus in response to applied signals that accurately represent reactor cold leg temperature and core power; the cold leg temperature is corrected for stratification before being utilized, and reactor power signals commensurate with power as a function of measured neutron flux and of thermal energy added to the coolant are auctioneered to select the more conservative measure of power. The invention further comprises compensation of the selected core power signal for the effects of the core radial peaking factor under maximum coolant flow conditions. (Official Gazette)
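
    A one-line illustration (not from the patent text) of the "auctioneering" step: of the two power signals, the larger, i.e. more conservative, measure is selected for the thermal margin calculation.

    ```python
    def auctioneer_power(flux_power: float, thermal_power: float) -> float:
        """Select the more conservative (larger) of two reactor power signals."""
        return max(flux_power, thermal_power)

    assert auctioneer_power(98.5, 101.2) == 101.2  # thermal power governs here
    ```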

  14. Three-year clinical performance of cast gold vs ceramic partial crowns.

    PubMed

    Federlin, M; Wagner, J; Männer, T; Hiller, K-A; Schmalz, G

    2007-12-01

    Cast gold partial crowns (CGPC) and partial ceramic crowns (PCC) are both accepted for restoring posterior teeth with extended lesions today. However, as esthetics in dentistry becomes increasingly important, CGPC are being progressively replaced by PCC. The aim of the present prospective split-mouth study was the comparison of the clinical performance of PCC and CGPC after 3 years of clinical service. Twenty-eight patients (11 men and 17 women) participated in the 3-year recall with a total of 56 restorations. In each patient, one CGPC (Degulor C) and one PCC (Vita Mark II ceramic/Cerec III) had been inserted at baseline. CGPC were placed using a zinc phosphate cement (Harvard); PCC were adhesively luted (Variolink II/Excite). All restorations were clinically assessed using modified US Public Health Service (USPHS) criteria at baseline, 1 year, 2 years, and 3 years after insertion. Twenty-eight CGPC and 14 PCC were placed in molars, and 14 PCC were placed in premolars. Early data were reported previously under the same study design. After 3 years, the evaluation according to USPHS criteria revealed no statistically significant differences between the two types of restorations, with the exception of marginal adaptation and marginal discoloration: a statistically significant difference within the PCC group (baseline/3 years) was determined for the criterion marginal adaptation. For the 3-year recall period, the overall failure rate was 0% for CGPC and 6.9% for PCC. At 3 years, PCC meet American Dental Association Acceptance Guidelines criteria for tooth-colored restorative materials for posterior teeth.

  15. Liver transplantation utilizing old donor organs: a German single-center experience.

    PubMed

    Rauchfuss, F; Voigt, R; Dittmar, Y; Heise, M; Settmacher, U

    2010-01-01

    Due to the current profound lack of suitable donor organs, transplant centers are increasingly forced to accept so-called marginal organs. One criterion for a marginal donor is donor age >65 years. We present herein the impact of higher donor age on graft and patient survival. Since 2004, 230 liver transplantations have been performed at our center, including 54 donor organs (23.5%) from individuals >65 years of age. We performed a retrospective analysis of recipient and graft survivals. The overall 1-year mortality was 22.2% (12/54) among recipients of organs from older donors versus 19.5% among recipients whose donors were <65 years. When donor organs were grouped according to age, the 1-year mortality in patients receiving organs from donors aged 65-69 years was 30% (6/20); 70-74 years, 29.4% (5/17); and donors >75 years, 5.9% (1/17). There was no significant correlation between mortality rate and the number of additional criteria of a marginal donor organ. The current lack of donor organs forces transplant centers to accept organs from older individuals, and increasingly older individuals are being recruited into the donor pool. Our results showed that older organs may be transplanted with acceptable outcomes. This observation was consistent with data from the current literature. It should be emphasized, however, that caution is advised when considering the acceptance of older organs for patients with hepatitis C-related cirrhosis.

  16. Global surface-based cloud observation for ISCCP

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Visual observations of cloud cover are hindered at night due to inadequate illumination of the clouds. This usually leads to an underestimation of the average cloud cover at night, especially for the amounts of middle and high clouds, in climatologies based on surface observations. The diurnal cycles of cloud amounts, if based on all the surface observations, are therefore in error, but they can be obtained more accurately if the nighttime observations are screened to select those made under sufficient moonlight. Ten years of nighttime weather observations from the northern hemisphere in December were classified according to the illuminance of moonlight or twilight on the cloud tops, and a threshold level of illuminance was determined, above which the clouds are apparently detected adequately. This threshold corresponds to light from a full moon at an elevation angle of 6 degrees or from a partial moon at a higher elevation, or twilight from the sun less than 9 degrees below the horizon. It permits the use of about 38% of the observations made with the sun below the horizon. The computed diurnal cycles of total cloud cover are altered considerably when this moonlight criterion is imposed. Maximum cloud cover over much of the ocean is now found to be at night or in the morning, whereas computations obtained without the benefit of the moonlight criterion, as in our published atlases, showed the time of maximum to be noon or early afternoon in many regions. Cloud cover is greater at night than during the day over the open oceans far from the continents, particularly in summer. However, near-noon maxima are still evident in the coastal regions, so the global annual average oceanic cloud cover is still slightly greater during the day than at night, by 0.3%. Over land, where daytime maxima are still obtained but with reduced amplitude, average cloud cover is 3.3% greater during the daytime. The diurnal cycles of total cloud cover we obtain are compared with those of ISCCP for a few regions; they are generally in better agreement if the moonlight criterion is imposed on the surface observations. Using the moonlight criterion, we have analyzed ten years (1982-1991) of surface weather observations over land and ocean, worldwide, for total cloud cover and for the frequency of occurrence of clear sky, fog, and precipitation. The global average cloud cover (average of day and night) is about 2% higher if we impose the moonlight criterion than if we use all observations. The difference is greater in winter than in summer because of the fewer hours of darkness in summer. The amplitude of the annual cycle of total cloud cover over the Arctic Ocean and at the South Pole is diminished by a few percent when the moonlight criterion is imposed. The average cloud cover for 1982-1991 is found to be 55% for northern hemisphere land, 53% for southern hemisphere land, 66% for northern hemisphere ocean, and 70% for southern hemisphere ocean, giving a global average of 64%. The global average is 64.6% for daytime and 63.3% for nighttime.
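
    A heavily simplified sketch of the moonlight screening rule described above. The illuminance proxy (phase fraction times the sine of the lunar elevation) and its calibration are assumptions made here for illustration; the study used a proper lunar illuminance model rather than this proxy.

    ```python
    from math import radians, sin

    def passes_moonlight_criterion(sun_elev_deg, moon_elev_deg, moon_fraction):
        """Return True if nighttime cloud tops are judged adequately illuminated."""
        if sun_elev_deg > -9.0:      # twilight or daylight suffices
            return True
        if moon_elev_deg <= 0.0:     # moon below the horizon
            return False
        # Threshold equivalent to a full moon (fraction 1.0) at 6 degrees elevation
        return moon_fraction * sin(radians(moon_elev_deg)) >= sin(radians(6.0))

    print(passes_moonlight_criterion(-20.0, 10.0, 0.9))  # partial moon, higher elevation -> True
    ```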

  17. Differential diagnosis of idiopathic granulomatous mastitis and breast cancer using acoustic radiation force impulse imaging.

    PubMed

    Teke, Memik; Teke, Fatma; Alan, Bircan; Türkoğlu, Ahmet; Hamidi, Cihad; Göya, Cemil; Hattapoğlu, Salih; Gumus, Metehan

    2017-01-01

    Differentiation of idiopathic granulomatous mastitis (IGM) from carcinoma with routine imaging methods, such as ultrasonography (US) and mammography, is difficult. Therefore, we evaluated the value of a newly developed noninvasive technique called acoustic radiation force impulse imaging in differentiating IGM from malignant lesions of the breast. Four hundred and eighty-six patients, who were referred to us with a presumptive diagnosis of a mass, underwent Virtual Touch tissue imaging (VTI; Siemens) and Virtual Touch tissue quantification (VTQ; Siemens) after conventional gray-scale US. US-guided percutaneous needle biopsy was then performed on 276 lesions with clinically and radiologically suspicious features. Malignant lesions (n = 122) and IGM (n = 48) were included in the final study group. There was a statistically significant difference in marginal and internal shear wave velocity values between the IGM and malignant lesions. The median marginal velocity for IGM and malignant lesions was 3.19 m/s (minimum-maximum 2.49-5.82) and 5.05 m/s (minimum-maximum 2.09-8.46), respectively (p < 0.001). The median internal velocity for IGM and malignant lesions was 2.76 m/s (minimum-maximum 1.14-4.12) and 4.79 m/s (minimum-maximum 2.12-8.02), respectively (p < 0.001). The combination of VTI and VTQ as a complement to conventional US provides viscoelastic properties of tissues and thus has the potential to increase the specificity of US.

  18. 14 CFR 23.1011 - General.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... General. (a) For oil systems and components that have been approved under the engine airworthiness...) Each engine must have an independent oil system that can supply it with an appropriate quantity of oil... the maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure...

  19. 14 CFR 23.1011 - General.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... General. (a) For oil systems and components that have been approved under the engine airworthiness...) Each engine must have an independent oil system that can supply it with an appropriate quantity of oil... the maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure...

  20. 14 CFR 23.1011 - General.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... General. (a) For oil systems and components that have been approved under the engine airworthiness...) Each engine must have an independent oil system that can supply it with an appropriate quantity of oil... the maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure...

  1. 14 CFR 23.1011 - General.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... General. (a) For oil systems and components that have been approved under the engine airworthiness...) Each engine must have an independent oil system that can supply it with an appropriate quantity of oil... the maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure...

  2. 14 CFR 23.1011 - General.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... General. (a) For oil systems and components that have been approved under the engine airworthiness...) Each engine must have an independent oil system that can supply it with an appropriate quantity of oil... the maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure...

  3. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include marginal maximum likelihood estimation, fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and Metropolis-Hastings Robbins-Monro estimation. With each…
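
    For reference, a minimal sketch of the item response function whose parameters all three estimation methods target; the item parameter values are hypothetical.

    ```python
    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        """3PL model: P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # discrimination a, difficulty b, pseudo-guessing c
    print(p_correct_3pl(theta=0.0, a=1.2, b=0.0, c=0.2))  # 0.6 when theta == b
    ```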

  4. Hydrology and chemistry of groundwater and seasonal ponds in the Atlantic Coastal Plain in Delaware, USA

    USGS Publications Warehouse

    Phillips, P.J.; Shedlock, R.J.

    1993-01-01

    The hydrochemistry of small seasonal ponds was investigated by studying relations between ground-water and surface water in a forested Coastal Plain drainage basin. Observation of changes in the water table in a series of wells equipped with automatic water-level recorders showed that the relation between water-table configuration and basin topography changes seasonally, particularly in response to spring recharge. Furthermore, in this study area the water table is not a subdued expression of the land surface topography, as is commonly assumed. During the summer and fall months, a water-table trough underlies sandy ridges separating the seasonal ponds, and maximum water-table altitudes prevail in the sediments beneath the dry pond bottoms. As the ponds fill with water during the winter, maximum water-table altitudes shift to the upland-margin zone adjacent to the seasonal ponds. Increases in pond stage are associated with the development of transient water-table mounds at the upland-margin wells during the spring. The importance of small local-flow systems adjacent to the seasonal ponds also is shown by the similarities in the chemistry of the shallow groundwater in the upland margin and water in the seasonal ponds. The upland margin and surface water samples have low pH (generally less than 5.0) and contain large concentrations of dissolved aluminum (generally more than 100 µg l-1) and low bicarbonate concentrations (2 mg l-1 or less). In contrast, the parts of the surficial aquifer that do not experience transient mounding have higher pH and larger concentrations of bicarbonate. These results suggest that an understanding of the hydrochemistry of seasonally ponded wetlands requires intensive study of the adjacent shallow groundwater-flow system. © 1993.

  5. Metal Deposition Along the Peru Margin Since the Last Glacial Maximum: Evidence for Regime Change at ~6 ka

    NASA Astrophysics Data System (ADS)

    Tierney, J.; Cleaveland, L.; Herbert, T.; Altabet, M.

    2004-12-01

    The Peru Margin upwelling zone plays a key role in regulating marine biogeochemical cycles, particularly the fate of nitrate. High biological productivity and low oxygen waters fed into the oxygen minimum zone result in intense denitrification in the modern system, the consequences of which are global in nature. It has been very difficult, however, to study the paleoclimatic history of this region because of the poor preservation of carbonate in Peru Margin sediments. Here we present records of trace metal accumulation from two cores located in the heart of the suboxic zone off the central Peru coast. Chronology comes from multiple AMS 14C dates on the alkenone fraction of the sediment, as well as correlation using major features of the δ15N record in each core. ODP Site 1228 provides a high resolution, continuous sediment record from the Recent to about 14 ka, while gravity core W7706-41k extends the record to the Last Glacial Maximum. Both cores were sampled at 100 yr resolution, then analyzed for %N, δ15N, alkenones, and trace metal concentration. Analysis of redox-sensitive metals (Mo and V) alongside metals associated with changes in productivity (Ni and Zn) provides perspective on the evolution of the upwelling system and distinguishes the two major factors controlling the intensity of the oxygen minimum zone. The trace metal record exhibits a notable increase in the intensity and variability of low oxygen waters and productivity beginning around 6 ka and extending to the present. Within this most recent 6 ka interval, the data suggest that fluctuations in oxygenation and productivity occur on 1000 yr timescales. Our core records, therefore, suggest that the Peru Margin upwelling system strengthened significantly during the mid to late Holocene.

  6. Max-margin weight learning for medical knowledge network.

    PubMed

    Jiang, Jingchi; Xie, Jing; Zhao, Chao; Su, Jia; Guan, Yi; Yu, Qiubin

    2018-03-01

    The application of medical knowledge strongly affects the performance of intelligent diagnosis, and the method of learning the weights of medical knowledge plays a substantial role in probabilistic graphical models (PGMs). The purpose of this study is to investigate a discriminative weight-learning method based on a medical knowledge network (MKN). We propose a training model called the maximum margin medical knowledge network (M3KN), which is strictly derived for calculating the weight of medical knowledge. Using the definition of a reasonable margin, the weight learning can be transformed into a margin optimization problem. To solve the optimization problem, we adopt a sequential minimal optimization (SMO) algorithm and the clique property of a Markov network. Ultimately, M3KN not only incorporates the inference ability of PGMs but also deals with high-dimensional logic knowledge. The experimental results indicate that M3KN obtains a higher F-measure score than the maximum likelihood learning algorithm of MKN for both Chinese Electronic Medical Records (CEMRs) and Blood Examination Records (BERs). Furthermore, the proposed approach is clearly superior to some classical machine learning algorithms for medical diagnosis. To adequately manifest the importance of domain knowledge, we numerically verify that the diagnostic accuracy of M3KN gradually improves as the number of learned CEMRs, which contain important medical knowledge, increases. Our experimental results show that the proposed method performs reliably for learning the weights of medical knowledge. M3KN outperforms other existing methods by achieving an F-measure of 0.731 for CEMRs and 0.4538 for BERs. This further illustrates that M3KN can facilitate investigations of intelligent healthcare. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Dynamical resonance shift and unification of resonances in short-pulse laser-cluster interaction

    NASA Astrophysics Data System (ADS)

    Mahalik, S. S.; Kundu, M.

    2018-06-01

    Pronounced maximum absorption of laser light irradiating a rare-gas or metal cluster is widely expected during the linear resonance (LR), when the Mie-plasma wavelength λM of the electrons equals the laser wavelength λ. On the contrary, molecular dynamics (MD) simulations of an argon cluster irradiated by short 5-fs (FWHM) laser pulses reveal that, for a given laser pulse energy and cluster, at each peak intensity there exists a λ (shifted from the expected λM) that corresponds to a unified dynamical LR. At this wavelength the evolution of the cluster proceeds through a very efficient unification of the possible resonances in various stages, including (i) the LR at the initial time of plasma creation, (ii) the LR in the later Coulomb expansion phase, and (iii) anharmonic resonance in the marginally overdense regime for relatively longer pulse durations, leading to maximum laser absorption accompanied by maximum removal of electrons from the cluster and the maximum allowed average charge states for the argon cluster. With increasing laser intensity, the absorption maximum is found to shift to a higher wavelength in the band λ ≈ (1-1.5)λM rather than remaining fixed at the expected λM. A naive rigid-sphere model also corroborates the wavelength shift of the absorption peak found in MD and shows that maximum laser absorption in a cluster occurs at a shifted λ in the marginally overdense regime, λ ≈ (1-1.5)λM, instead of at the λM of the LR. The present study is important for guiding laser-cluster interaction experiments toward optimal conditions in the short-pulse regime.

  8. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
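
    The marginalization step is simple once the joint distribution has been recovered; a sketch with a synthetic joint array (not recovered data) follows.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    joint = rng.random((50, 40))      # rows: photon-count bins, cols: apparent FRET efficiency bins
    joint /= joint.sum()              # normalize to a joint probability distribution

    p_photons = joint.sum(axis=1)     # marginal distribution of fluorescence photons
    p_efficiency = joint.sum(axis=0)  # marginal distribution of apparent FRET efficiency
    assert np.isclose(p_photons.sum(), 1.0) and np.isclose(p_efficiency.sum(), 1.0)
    ```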

  9. Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data

    DTIC Science & Technology

    2007-04-01

    [Equations (10)-(11), defining an inner product of the magnetic-field components Bx, By, Bz, are not recoverable from the scanned source.] The operation in (10) may be viewed as a vector matched filter for estimating B(t). Features chosen to maximize the classification information in Y are described in Section 3.2. 3.2. Maximum mutual information (MMI) features. We begin with a review of several desirable properties of features that maximize a mutual information (MMI) criterion. Then we review a particular algorithm [2

  10. Space charge effects for multipactor in coaxial lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorolla, E., E-mail: eden.sorolla@xlim.fr; Sounas, A.; Mattes, M.

    2015-03-15

    Multipactor is a hazardous vacuum discharge produced by secondary electron emission within microwave devices of particle accelerators and telecommunication satellites. This work analyzes the dynamics of the multipactor discharge within a coaxial line for the mono-energetic electron emission model, taking into account space charge effects. The steady state is predicted by the proposed model, and an analytical expression for the maximum number of electrons released by the discharge is presented. This could help to link simulations to experiments and to define a multipactor onset criterion.

  11. Assessing the formability of metallic sheets by means of localized and diffuse necking models

    NASA Astrophysics Data System (ADS)

    Comşa, Dan-Sorin; Lǎzǎrescu, Lucian; Banabic, Dorel

    2016-10-01

    The main objective of the paper is to elaborate a unified framework for the theoretical assessment of sheet metal formability. Hill's localized necking model and the Extended Maximum Force Criterion proposed by Mattiasson, Sigvant, and Larsson have been selected for this purpose. Both models are thoroughly described together with their solution procedures. A comparison of the theoretical predictions with experimental data on the formability of a DP600 steel sheet is also presented by the authors.
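
    As background to the maximum-force family of criteria, here is a sketch of the classical Considère condition, d(sigma)/d(eps) = sigma, for Hollomon hardening sigma = K * eps**n, where the diffuse necking strain equals n. This is the textbook special case underlying the extended criterion, not the EMFC implementation itself; the values of K and n are hypothetical.

    ```python
    import numpy as np

    K, n = 600.0, 0.2                      # assumed hardening parameters (MPa, -)
    eps = np.linspace(1e-4, 1.0, 100_000)
    sigma = K * eps**n
    dsigma = np.gradient(sigma, eps)

    i_star = np.argmin(np.abs(dsigma - sigma))   # where hardening rate equals stress
    print(f"necking strain ~ {eps[i_star]:.3f} (analytic value: n = {n})")
    ```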

  12. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. For each candidate translation, the percentage of edge pixels in a subset of the second image that are also edges in the shifted first image is calculated. The best registration point is the translation that maximizes the percentage of matched edges. In a predefined search region, all registration points other than the best registration point are then identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
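
    A conceptual sketch of the edge-matching search described above (the best-registration step only, not the patented uncertainty analysis); the edge maps and the shift search range are synthetic.

    ```python
    import numpy as np

    def best_registration(edges_a, edges_b, max_shift=5):
        """Return the (dy, dx) translation maximizing the edge-match percentage."""
        best, best_shift = -1.0, (0, 0)
        n_b = edges_b.sum()
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(edges_a, dy, axis=0), dx, axis=1)
                match = (shifted & edges_b).sum() / n_b if n_b else 0.0
                if match > best:
                    best, best_shift = match, (dy, dx)
        return best_shift, best

    rng = np.random.default_rng(2)
    a = rng.random((64, 64)) > 0.9                   # synthetic boolean edge map
    b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)   # ground-truth shift (3, -2)
    print(best_registration(a, b))                   # expect ((3, -2), 1.0)
    ```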

  13. Microleakage of Four Dental Cements in Metal Ceramic Restorations With Open Margins.

    PubMed

    Eftekhar Ashtiani, Reza; Farzaneh, Babak; Azarsina, Mohadese; Aghdashi, Farzad; Dehghani, Nima; Afshari, Aisooda; Mahshid, Minu

    2015-11-01

    Fixed prosthodontics is a routine dental treatment and microleakage is a major cause of its failure. The aim of this study was to assess the marginal microleakage of four cements in metal ceramic restorations with adapted and open margins. Sixty sound human premolars were selected for this experimental study performed in Tehran, Iran and prepared for full-crown restorations. Wax patterns were formed leaving a 300 µm gap on one of the proximal margins. The crowns were cast and the samples were randomly divided into four groups based on the cement used. Copings were cemented using zinc phosphate cement (Fleck), Fuji Plus resin-modified glass ionomer, Panavia F2.0 resin cement, or G-Cem resin cement, according to the manufacturers' instructions. Samples were immersed in 2% methylene blue solution. After 24 hours, dye penetration was assessed under a stereomicroscope and analyzed using the respective software. Data were analyzed using ANOVA, paired t-tests, and Kruskal-Wallis, Wilcoxon, and Mann-Whitney tests. The least microleakage occurred in the Panavia F2.0 group (closed margin, 0.18 mm; open margin, 0.64 mm) and the maximum was observed in the Fleck group (closed margin, 1.92 mm; open margin, 3.32 mm). The Fleck group displayed significantly more microleakage compared to the Fuji Plus and Panavia F2.0 groups (P < 0.001) in both closed and open margins. In open margins, differences in microleakage between the Fuji Plus and G-Cem as well as between the G-Cem and Panavia F2.0 groups were significant (P < 0.001). In closed margins, only the G-Cem group displayed significantly more microleakage as compared to the Panavia F2.0 group (P < 0.05). Paired t-test results showed significantly more microleakage in open margins compared to closed margins, except in the Fuji Plus group (P = 0.539). Fuji Plus cement exhibited better sealing ability in closed and open margins compared to G-Cem and Fleck cements. When using G-Cem and Fleck cements for full metal ceramic restorations, clinicians should try to minimize marginal gaps in order to reduce restoration failure. In situations where there are doubts about perfect marginal adaptation, the use of Fuji Plus cement may be helpful.

  14. 14 CFR 29.1011 - Engines: general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Engines: general. 29.1011 Section 29.1011... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Powerplant Oil System § 29.1011 Engines: general. (a) Each engine... the maximum allowable oil consumption of the engine under the same conditions, plus a suitable margin...

  15. 14 CFR 27.1011 - Engines: General.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Engines: General. 27.1011 Section 27.1011... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Oil System § 27.1011 Engines: General. (a) Each engine must... maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure adequate...

  16. Estimating the Parameters of the Beta-Binomial Distribution.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1979-01-01

    For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissible…
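
    A hedged sketch of beta-binomial maximum likelihood, using a generic optimizer in place of the Newton-Raphson scheme discussed above; the data are simulated and the log-parameterization is an assumption made for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import betabinom

    rng = np.random.default_rng(3)
    n_items = 20
    scores = betabinom.rvs(n_items, 2.0, 5.0, size=500, random_state=rng)

    def neg_log_lik(log_params):
        a, b = np.exp(log_params)   # log-parameterization enforces positivity
        return -betabinom.logpmf(scores, n_items, a, b).sum()

    res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    print("MLE (a, b):", np.exp(res.x))   # should be near the true (2, 5)
    ```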

  17. Allowing for Correlations between Correlations in Random-Effects Meta-Analysis of Correlation Matrices

    ERIC Educational Resources Information Center

    Prevost, A. Toby; Mason, Dan; Griffin, Simon; Kinmonth, Ann-Louise; Sutton, Stephen; Spiegelhalter, David

    2007-01-01

    Practical meta-analysis of correlation matrices generally ignores covariances (and hence correlations) between correlation estimates. The authors consider various methods for allowing for covariances, including generalized least squares, maximum marginal likelihood, and Bayesian approaches, illustrated using a 6-dimensional response in a series of…

  18. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  19. 14 CFR 27.1011 - Engines: General.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Engines: General. 27.1011 Section 27.1011... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Oil System § 27.1011 Engines: General. (a) Each engine must... maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure adequate...

  20. 14 CFR 27.1011 - Engines: General.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Engines: General. 27.1011 Section 27.1011... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Oil System § 27.1011 Engines: General. (a) Each engine must... maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure adequate...

  1. 14 CFR 27.1011 - Engines: General.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Engines: General. 27.1011 Section 27.1011... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Oil System § 27.1011 Engines: General. (a) Each engine must... maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure adequate...

  2. 14 CFR 29.1011 - Engines: general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Engines: general. 29.1011 Section 29.1011... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Powerplant Oil System § 29.1011 Engines: general. (a) Each engine... the maximum allowable oil consumption of the engine under the same conditions, plus a suitable margin...

  3. 14 CFR 29.1011 - Engines: general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Engines: general. 29.1011 Section 29.1011... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Powerplant Oil System § 29.1011 Engines: general. (a) Each engine... the maximum allowable oil consumption of the engine under the same conditions, plus a suitable margin...

  4. 14 CFR 27.1011 - Engines: General.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Engines: General. 27.1011 Section 27.1011... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Oil System § 27.1011 Engines: General. (a) Each engine must... maximum oil consumption of the engine under the same conditions, plus a suitable margin to ensure adequate...

  5. 14 CFR 29.1011 - Engines: general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Engines: general. 29.1011 Section 29.1011... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Powerplant Oil System § 29.1011 Engines: general. (a) Each engine... the maximum allowable oil consumption of the engine under the same conditions, plus a suitable margin...

  6. 14 CFR 29.1011 - Engines: general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Engines: general. 29.1011 Section 29.1011... STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Powerplant Oil System § 29.1011 Engines: general. (a) Each engine... the maximum allowable oil consumption of the engine under the same conditions, plus a suitable margin...

  7. The CO2 laser frequency stability measurements

    NASA Technical Reports Server (NTRS)

    Johnson, E. H., Jr.

    1973-01-01

    Carbon dioxide laser frequency stability data are considered for a receiver design that relates to maximum Doppler frequency and its rate of change. Results show that an adequate margin exists in terms of data acquisition, Doppler tracking, and bit error rate as they relate to laser stability and transmitter power.

  8. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework for hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, the determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME, leading to a coupled optimal moment-POME-copula framework for modelling hydrometeorological multivariate events. In this framework, margins (marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and that the corresponding copulas show good statistical performance in simulating correlation. Moreover, the derived copulas capture patterns that traditional correlation coefficients cannot reflect, providing an efficient approach for other applied scenarios involving hydrometeorological multivariate modelling.
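
    A minimal numerical sketch of the POME margin-fitting step only (the copula construction is not shown): the maximum entropy density on [0, 1] subject to first- and second-moment constraints, obtained by minimizing its convex dual. The support and the moment values are assumed for illustration.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.optimize import minimize

    x = np.linspace(0.0, 1.0, 2001)
    moments = np.array([0.4, 0.2])            # assumed target E[X] and E[X^2]

    def dual(lam):
        # Maxent density has the form p(x) = exp(lam1*x + lam2*x^2) / Z(lam);
        # minimizing log Z(lam) - lam . m recovers the Lagrange multipliers.
        z = trapezoid(np.exp(lam[0] * x + lam[1] * x**2), x)
        return np.log(z) - lam @ moments

    lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
    p = np.exp(lam[0] * x + lam[1] * x**2)
    p /= trapezoid(p, x)
    print("fitted moments:", trapezoid(p * x, x), trapezoid(p * x**2, x))
    ```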

  9. Design and analytical study of a rotor airfoil

    NASA Technical Reports Server (NTRS)

    Dadone, L. U.

    1978-01-01

    An airfoil section for use on helicopter rotor blades was defined and analyzed by means of potential flow/boundary layer interaction and viscous transonic flow methods to meet as closely as possible a set of advanced airfoil design objectives. The design efforts showed that the first priority objectives, including selected low speed pitching moment, maximum lift and drag divergence requirements can be met, though marginally. The maximum lift requirement at M = 0.5 and most of the profile drag objectives cannot be met without some compromise of at least one of the higher order priorities.

  10. Surface crack analysis applied to impact damage in a thick graphite-epoxy composite

    NASA Technical Reports Server (NTRS)

    Poe, C. C., Jr.; Harris, C. E.; Morris, D. H.

    1988-01-01

    The residual tensile strength of a thick graphite/epoxy composite with impact damage was predicted using surface crack analysis. The damage was localized to a region directly beneath the impact site and extended only part way through the laminate. The damaged region contained broken fibers, and the locus of breaks in each layer resembled a crack perpendicular to the direction of the fibers. In some cases, the impacts broke fibers without making a visible crater. The impact damage was represented as a semi-elliptical surface crack with length and depth equal to that of the impact damage. The maximum length and depth of the damage were predicted with a stress analysis and a maximum shear stress criterion. The predictions and measurements of strength were in good agreement.

  11. Surface crack analysis applied to impact damage in a thick graphite/epoxy composite

    NASA Technical Reports Server (NTRS)

    Poe, Clarence C., Jr.; Harris, Charles E.; Morris, Don H.

    1990-01-01

    The residual tensile strength of a thick graphite/epoxy composite with impact damage was predicted using surface crack analysis. The damage was localized to a region directly beneath the impact site and extended only part way through the laminate. The damaged region contained broken fibers, and the locus of breaks in each layer resembled a crack perpendicular to the direction of the fibers. In some cases, the impacts broke fibers without making a visible crater. The impact damage was represented as a semi-elliptical surface crack with length and depth equal to that of the impact damage. The maximum length and depth of the damage were predicted with a stress analysis and a maximum shear stress criterion. The predictions and measurements of strength were in good agreement.

  12. A cloud physics investigation utilizing Skylab data

    NASA Technical Reports Server (NTRS)

    Alishouse, J.; Jacobowitz, H.; Wark, D. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The Lowtran 2 program, S191 spectral response, and solar spectrum were used to compute the expected absorption by the 2.0 micron band for a variety of cloud pressure levels and solar zenith angles. Analysis of the three long wavelength data channels continued, in which it was found necessary to impose a minimum radiance criterion. It was also found necessary to modify the computer program to permit the computation of mean values and standard deviations for selected subsets of data on a given tape. A technique for computing the integrated absorption in the A band was devised. The technique normalizes the relative maximum at approximately 0.78 micron to the solar irradiance curve and then adjusts the relative maximum at approximately 0.74 micron to fit the solar curve.

  13. Setting Priorities in Global Child Health Research Investments: Addressing Values of Stakeholders

    PubMed Central

    Kapiriri, Lydia; Tomlinson, Mark; Gibson, Jennifer; Chopra, Mickey; El Arifeen, Shams; Black, Robert E.; Rudan, Igor

    2007-01-01

    Aim To identify the main groups of stakeholders in the process of health research priority setting and propose strategies for addressing their systems of values. Methods In three separate exercises that took place between March and June 2006, we interviewed three different groups of stakeholders: 1) members of the global research priority setting network; 2) a diverse group of national-level stakeholders from South Africa; and 3) participants at a conference on international child health held in Washington, DC, USA. Each group was administered a different version of the questionnaire, in which they were asked to assign weights to criteria (and minimum required thresholds, where applicable) that were a priori defined as relevant to health research priority setting by the consultants of the Child Health and Nutrition Research Initiative (CHNRI). Results At the global level, the wide and diverse group of respondents placed the greatest importance (weight) on the criterion of maximum potential for disease burden reduction, while the most stringent threshold was placed on the criterion of answerability in an ethical way. Among the stakeholders' representatives attending the international conference, the criterion of deliverability, answerability, and sustainability of health research results was proposed as the most important one. At the national level in South Africa, the greatest weight was placed on the criterion addressing the predicted impact on equity of the proposed health research. Conclusions Involving a large group of stakeholders when setting priorities in health research investments is important because the criteria of relevance to scientists and technical experts, whose knowledge and technical expertise is usually central to the process, may not be appropriate to specific contexts and in accordance with the views and values of those who invest in health research, those who benefit from it, or wider society as a whole. PMID:17948948

  14. Empirical extensions of the lasso penalty to reduce the false discovery rate in high-dimensional Cox regression models.

    PubMed

    Ternès, Nils; Rotolo, Federico; Michiels, Stefan

    2016-07-10

    Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using the maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to a reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited raise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.
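
    A conceptual sketch (not the authors' implementation) of the proposed rule: instead of maximizing the cross-validated log-likelihood alone, maximize it minus a parsimony penalty on the number of selected biomarkers. All numeric values below are hypothetical.

    ```python
    import numpy as np

    lambdas = np.array([0.01, 0.05, 0.10, 0.20, 0.50])
    cvl = np.array([-520.0, -518.5, -518.9, -521.0, -526.0])  # hypothetical cvl values
    n_selected = np.array([40, 18, 9, 4, 1])                  # biomarkers kept per lambda

    penalty_weight = 0.5                 # trades goodness-of-fit against parsimony
    score = cvl - penalty_weight * n_selected

    i_maxcvl, i_pen = int(np.argmax(cvl)), int(np.argmax(score))
    print(f"max-cvl picks lambda={lambdas[i_maxcvl]}; penalized rule picks lambda={lambdas[i_pen]}")
    ```

    With these illustrative numbers, the penalized rule keeps 4 biomarkers instead of 18, mirroring the FDR reduction described above.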

  15. Criterion for excipients screening in the development of nanoemulsion formulation of three anti-inflammatory drugs.

    PubMed

    Shakeel, Faiyaz

    2010-01-01

    The present study was undertaken to screen different excipients for the development of nanoemulsion formulations of three anti-inflammatory drugs, namely ketoprofen, celecoxib (CXB) and meloxicam. Based on the solubility profiles of each drug in oil, Triacetin (ketoprofen and CXB) and Labrafil (meloxicam) were selected as the oil phase. Based on the maximum solubilization potential of the oil in different surfactants, Cremophor-EL (ketoprofen and CXB) and Tween-80 (meloxicam) were selected as surfactants. Based on the maximum nanoemulsion region in the pseudoternary phase diagrams, Transcutol-HP was selected as cosurfactant for all three drugs. Surfactant-to-cosurfactant mass ratios of 1:1 (ketoprofen and CXB) and 2:1 (meloxicam) were used to select different nanoemulsions on the basis of the maximum nanoemulsion region in the phase diagrams. All selected nanoemulsion formulations were found to be thermodynamically stable. The results of these studies showed that all excipients were properly optimized for the development of nanoemulsion formulations of ketoprofen, CXB and meloxicam.

  16. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-09-20

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving the non-linear state estimation problem. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.

  17. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    PubMed Central

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-01-01

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving the non-linear state estimation problem. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm. PMID:27657069
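
    The core of the maximum correntropy criterion is a bounded, kernel-based similarity used in place of the quadratic loss; a minimal sketch (parameter values are illustrative) shows why heavy-tailed outliers are down-weighted.

    ```python
    import numpy as np

    def correntropy(residual, sigma=1.0):
        """Gaussian-kernel correntropy of a residual; MCC maximizes its mean."""
        return np.exp(-residual**2 / (2.0 * sigma**2))

    residuals = np.array([0.1, 0.5, 8.0])   # last value mimics an impulsive outlier
    print(correntropy(residuals))           # outlier contributes almost nothing
    print(0.5 * residuals**2)               # the quadratic loss is dominated by it
    ```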

  18. Marginalized zero-inflated negative binomial regression with application to dental caries

    PubMed Central

    Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon

    2015-01-01

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study, with counts generated from a negative binomial distribution, and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well suited for estimating overall exposure effects, specifically for quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034
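
    A sketch of the zero-inflated negative binomial likelihood shared by ZINB and MZINB (the two models differ in which mean, latent-class or marginal, is linked to covariates); the parameter values below are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import nbinom

    def zinb_logpmf(y, pi, mu, size):
        """ZINB log pmf: mixing probability pi, NB mean mu, dispersion `size`."""
        p = size / (size + mu)               # NB success probability for mean mu
        nb = nbinom.pmf(y, size, p)
        pmf = np.where(y == 0, pi + (1 - pi) * nb, (1 - pi) * nb)
        return np.log(pmf)

    y = np.array([0, 0, 1, 3, 7])
    print(zinb_logpmf(y, pi=0.3, mu=2.0, size=1.5))
    print("overall (marginal) mean:", (1 - 0.3) * 2.0)  # what MZINB models directly
    ```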

  19. Study on casing treatment and stator matching on multistage fan

    NASA Astrophysics Data System (ADS)

    Wu, Chuangliang; Yuan, Wei; Deng, Zhe

    2017-10-01

    Casing treatments are required for expanding the stall margin of multi-stage high-load turbofans designed with high blade-tip Mach numbers and high leakage flow. In the case of a low mass flow, the casing treatment effectively reduces the blockage caused by the leakage flow and enlarges the stall margin. However, in the case of a high mass flow, the casing treatment affects the overall flow capacity of the fan, and hence the thrust, when operating at the high speeds usually required by design-point specifications. Herein, we study a two-stage high-load fan with three-dimensional numerical simulations. We use the simulation results to propose a scheme that enlarges the stall margin of multistage high-load fans without sacrificing the flow capacity when operating with a large mass flow. Furthermore, a circumferential groove casing treatment is used and adjustments are made to the upstream stator angle to match the casing treatment. The stall margin is thus increased to 16.3%, with no reduction in the maximum mass flow rate or the design thrust performance.

  20. Fracture Resistance of Implant Abutments Following Abutment Alterations by Milling the Margins: An In Vitro Study.

    PubMed

    Patankar, Anuya; Kheur, Mohit; Kheur, Supriya; Lakha, Tabrez; Burhanpurwala, Murtuza

    2016-12-01

    This in vitro study evaluated the effect of different levels of preparation of an implant abutment on its fracture resistance. The study evaluated abutments that incorporated a platform switch (Myriad Plus Abutments, Morse Taper Connection) and standard abutments (BioHorizons Standard Abutment, BioHorizons Inc). Each abutment was connected to an appropriate implant and mounted in a self-cured resin base. Based on the abutment preparation depths, three groups were created for each abutment type: as manufactured, abutment prepared 1 mm apical to the original margin, and abutment prepared 1.5 mm apical to the original margin. All the abutments were prepared in a standardized manner to incorporate a 0.5 mm chamfer margin uniformly. All the abutments were torqued to 30 Ncm on their respective implants. They were then subjected to loading until failure in a universal testing machine. Abutments with no preparation showed the maximum resistance to fracture in both groups. As the preparation depth increased, the fracture resistance decreased. The fracture resistance of the implant-abutment junction decreases as the preparation depth increases.

  1. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    PubMed

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

    Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have similar survival rates as lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. In consequence, today segmentectomy is only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions from 11 thin-slice CT scans of bronchopulmonary segments were generated, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using a three-dimensional reconstruction of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.

  2. Study of the influence of the parameters of an experiment on the simulation of pole figures of polycrystalline materials using electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.

    2016-05-15

    A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and of the samples in the model experiment are compared using the χ² homogeneity criterion, an estimate of the pole figure maximum and its coordinate, the deviation of the pole figures of the model experiment from those of the sample in the space of L1 measurable functions, and the RP-criterion for estimating pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and that their determination is not robust with respect to the measurement parameters.

  3. Waste Load Allocation for Conservative Substances to Protect Aquatic Organisms

    NASA Astrophysics Data System (ADS)

    Hutcheson, M. R.

    1992-01-01

    A waste load allocation process is developed to determine the maximum effluent concentration of a conservative substance that will not harm fish and wildlife propagation. If this concentration is not exceeded in the effluent, the acute toxicity criterion will not be violated in the receiving stream, and the chronic criterion will not be exceeded in the zone of passage, defined in many state water quality standards to allow the movement of aquatic organisms past a discharge. Considerable simplification of the concentration equation, which is the heart of any waste load allocation, is achieved because it is based on the concentration in the receiving stream when the concentration gradient on the zone of passage boundary is zero. Consequently, the expression obtained for effluent concentration is independent of source location or stream morphology. Only five independent variables, which are routinely available to regulatory agencies, are required to perform this allocation. It aids in developing permit limits which are protective without being unduly restrictive or requiring large expenditures of money and manpower on field investigations.

  4. Effect of geometric and process variables on the performance of inclined plate settlers in treating aquacultural waste.

    PubMed

    Sarkar, Sudipto; Kamilya, Dibyendu; Mal, B C

    2007-03-01

    Inclined plate settlers are used in treating wastewater due to their low space requirement and high removal rates. The prediction of sedimentation efficiency of these settlers is essential for their performance evaluation. In the present study, the technique of dimensional analysis was applied to predict the sedimentation efficiency of these inclined plate settlers. The effect of various geometric parameters namely, distance between plates (w(p)), plate angle (alpha), length of plate (l(p)), plate roughness (epsilon(p)), number of plates (n(p)) and particle diameter (d(s)) on the dynamic conditions, influencing the sedimentation process was studied. From the study it was established that neither the Reynolds criterion nor the Froude criterion was singularly valid to simulate the sedimentation efficiency (E) for different values of w(p) and flow velocity (v(f)). Considering the prevalent scale effect, simulation equations were developed to predict E at different dynamic conditions. The optimum dynamic condition producing the maximum E is also discussed.

  5. Combined Optimal Control System for excavator electric drive

    NASA Astrophysics Data System (ADS)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

    The article presents a synthesis of combined optimal control algorithms for the AC drive of the excavator rotation mechanism. The synthesis consists in regulating the external coordinates based on the theory of optimal systems, and in correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality-criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. The system can be tuned by choosing the optimality-criterion coefficients, which allows one to select the required characteristics of the drive: the dynamic moment (M) and the transient-process time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, to reduce the transient time (tpp).

  6. A new failure mechanism in thin film by collaborative fracture and delamination: Interacting duos of cracks

    NASA Astrophysics Data System (ADS)

    Marthelot, Joël; Bico, José; Melo, Francisco; Roman, Benoît

    2015-11-01

    When a thin film moderately adherent to a substrate is subjected to residual stress, the cooperation between fracture and delamination leads to unusual fracture patterns, such as spirals, alleys of crescents, and various types of strips, all characterized by a robust characteristic length scale. We focus on the propagation of a duo of cracks: two fractures in the film connected by a delamination front and progressively detaching a strip. We show experimentally that the system selects an equilibrium width on the order of 25 times the thickness of the coating, independent of both fracture and adhesion energies. We investigate numerically the selection of the width and the condition for propagation by considering Griffith's criterion and the principle of local symmetry. In addition, we propose a simplified model based on the criterion of maximum energy release rate, which provides insight into the physical mechanisms leading to these regular patterns and predicts the effect of material properties on the selected width of the detaching strip.
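
    For reference, Griffith's criterion invoked above, in its standard form (the paper's coupled fracture/delamination formulation is more involved and is not reproduced here):

```latex
% A crack advances when the energy release rate G, the potential energy
% \Pi released per unit of newly created crack area A, reaches the
% fracture energy G_c (or the adhesion energy, for the delamination front):
G = -\frac{\partial \Pi}{\partial A} \ge G_c
```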

  7. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage, since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
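
    For BPSK on an AWGN channel, the transfer function bound reduces to a weighted sum of Gaussian tail probabilities over the code's distance spectrum. A minimal Python sketch under that assumption; the bit-weight coefficients are hypothetical inputs, and the paper's recursive computation of the transfer function itself is not reproduced:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_union_bound(bit_weights, d_free, rate, ebno_db):
    """Union (transfer-function) bound on Viterbi-decoded BER:
    Pb <= sum over d >= d_free of B_d * Q(sqrt(2 * d * R * Eb/N0)),
    where B_d is the bit-weight spectrum and R the code rate."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(b_d * q_func(math.sqrt(2.0 * (d_free + i) * rate * ebno))
               for i, b_d in enumerate(bit_weights))

# e.g. first few spectrum terms of a hypothetical rate-1/2 code:
# ber_union_bound([1, 4, 12, 32], d_free=10, rate=0.5, ebno_db=4.0)
```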

  8. The Northern Appalachian Anomaly: A modern asthenospheric upwelling

    NASA Astrophysics Data System (ADS)

    Menke, William; Skryzalin, Peter; Levin, Vadim; Harper, Thomas; Darbyshire, Fiona; Dong, Ted

    2016-10-01

    The Northern Appalachian Anomaly (NAA) is an intense, laterally localized (400 km diameter) low-velocity anomaly centered in the asthenosphere beneath southern New England. Its maximum shear velocity contrast, at 200 km depth, is about 10%, and its compressional-to-shear velocity perturbation ratio is about unity, values compatible with it being a modern thermal anomaly. Although centered close to the track of the Great Meteor hot spot, it is not elongated parallel to it and does not crosscut the cratonic margin. In contrast to previous explanations, we argue that the NAA's spatial association with the hot spot track is coincidental and that it is caused by small-scale upwelling associated with an eddy in the asthenospheric flow field at the continental margin. That the NAA is just one of several low-velocity features along the eastern margin of North America suggests that this process may be globally ubiquitous.

  9. The Kinematics of Central American Fore-Arc Motion in Nicaragua: Geodetic, Geophysical and Geologic Study of Magma-Tectonic Interactions

    NASA Astrophysics Data System (ADS)

    La Femina, P. C.; Geirsson, H.; Saballos, A.; Mattioli, G. S.

    2017-12-01

    A long-standing paradigm in plate tectonics is that oblique convergence results in strain partitioning and the formation of migrating fore-arc terranes accommodated on margin-parallel strike-slip faults within or in close proximity to active volcanic arcs (e.g., the Sumatran fault). Some convergent margins, however, are segmented by margin-normal faults, and margin-parallel shear is accommodated by motion on these faults and by vertical-axis block rotation. Furthermore, geologic and geophysical observations of active and extinct margins where strain partitioning has occurred indicate the emplacement of magmas within the shear zones or extensional step-overs. Characterizing the mechanism of accommodation is important for understanding short-term (decadal) seismogenesis, long-term (millions of years) fore-arc migration, and the formation of continental lithosphere. We investigate the geometry and kinematics of Quaternary faulting and magmatism along the Nicaraguan convergent margin, where historical upper crustal earthquakes have been located on margin-normal, strike-slip faults within the fore-arc and arc. Using new GPS time series and other geophysical and geologic data, we: 1) determine the location of the maximum gradient in fore-arc motion; 2) estimate displacement rates on margin-normal faults; and 3) constrain the geometric moment rate for the fault system. We find that: 1) fore-arc motion is 11 mm a⁻¹; 2) deformation is accommodated within the active volcanic arc; and 3) margin-normal faults can have rates of 10 mm a⁻¹, in agreement with geologic estimates from paleoseismology. The minimum geometric moment rate for the margin-normal fault system is 2.62 × 10⁷ m³ a⁻¹, whereas the geometric moment rate for historical (1931-2006) earthquakes is 1.01 × 10⁷ m³ a⁻¹. The discrepancy between fore-arc migration and historical seismicity may be due to aseismic accommodation of fore-arc motion by magmatic intrusion along north-trending volcanic alignments within the volcanic arc.

  10. Comparing Factor, Class, and Mixture Models of Cannabis Initiation and DSM Cannabis Use Disorder Criteria, Including Craving, in the Brisbane Longitudinal Twin Study

    PubMed Central

    Kubarych, Thomas S.; Kendler, Kenneth S.; Aggen, Steven H.; Estabrook, Ryne; Edwards, Alexis C.; Clark, Shaunna L.; Martin, Nicholas G.; Hickie, Ian B.; Neale, Michael C.; Gillespie, Nathan A.

    2014-01-01

    Accumulating evidence suggests that the Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic criteria for cannabis abuse and dependence are best represented by a single underlying factor. However, it remains possible that models with additional factors, or latent class models or hybrid models, may better explain the data. Using structured interviews, 626 adult male and female twins provided complete data on symptoms of cannabis abuse and dependence, plus a craving criterion. We compared latent factor analysis, latent class analysis, and factor mixture modeling using normal theory marginal maximum likelihood for ordinal data. Our aim was to derive a parsimonious, best-fitting cannabis use disorder (CUD) phenotype based on DSM-IV criteria and determine whether DSM-5 craving loads onto a general factor. When compared with latent class and mixture models, factor models provided a better fit to the data. When conditioned on initiation and cannabis use, the association between criteria for abuse, dependence, withdrawal, and craving were best explained by two correlated latent factors for males and females: a general risk factor to CUD and a factor capturing the symptoms of social and occupational impairment as a consequence of frequent use. Secondary analyses revealed a modest increase in the prevalence of DSM-5 CUD compared with DSM-IV cannabis abuse or dependence. It is concluded that, in addition to a general factor with loadings on cannabis use and symptoms of abuse, dependence, withdrawal, and craving, a second clinically relevant factor defined by features of social and occupational impairment was also found for frequent cannabis use. PMID:24588857

  11. Generalized Bohm’s criterion and negative anode voltage fall in electric discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Londer, Ya. I.; Ul’yanov, K. N., E-mail: kulyanov@vei.ru

    2013-10-15

    The value of the voltage fall across the anode sheath is found as a function of the current density. Analytic solutions are obtained over a wide range of the ratio of the directed velocity of plasma electrons v_0 to their thermal velocity v_T. It is shown that the voltage fall in a one-dimensional collisionless anode sheath is always negative. At small values of v_0/v_T, the obtained expression asymptotically transforms into the Langmuir formula. A generalized Bohm criterion for an electric discharge is formulated with allowance for the space-charge density ρ(0), the electric field E(0), the ion velocity v_i(0), and the ratio v_0/v_T at the plasma-sheath interface. It is shown that the minimum value of the ion velocity v_i*(0) corresponds to the vanishing of the electric field at one point inside the sheath. The dependence of v_i*(0) on ρ(0), E(0), and v_0/v_T determines the boundary of the existence domain of stationary solutions in the sheath. Using this criterion, the maximum possible degree of contraction of the electron current at the anode is determined for a short high-current vacuum arc discharge.
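
    For reference, the textbook collisionless Bohm criterion that the generalized condition extends; the generalized form additionally involves ρ(0), E(0), and v_0/v_T and is not reproduced here:

```latex
% Ions must enter the sheath at no less than the ion-acoustic (Bohm) speed:
v_i(0) \ge c_s = \sqrt{\frac{k_B T_e}{m_i}}
```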

  12. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals, and mean squared error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
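
    A toy Monte Carlo in the spirit of the simulations described above; a minimal sketch assuming a single standard-normal covariate, with illustrative function names and defaults. Firth's correction is not implemented here, and separation is handled only crudely:

```python
import numpy as np
import statsmodels.api as sm

def epv_bias_sim(n=100, intercept=-2.0, beta=1.0, n_sims=500, seed=0):
    """Estimate the small-sample bias of a maximum-likelihood logit slope.
    EPV = (number of events) / (number of covariates); lowering `intercept`
    or `n` lowers the event count and hence the EPV."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        p = 1.0 / (1.0 + np.exp(-(intercept + beta * x)))
        y = rng.binomial(1, p)
        if y.min() == y.max():          # no events (or all events): skip
            continue
        try:
            fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
            estimates.append(fit.params[1])
        except Exception:               # crude stand-in for separation handling
            continue
    return float(np.mean(estimates)) - beta   # bias of the slope estimate
```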

  13. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  14. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  15. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
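
    A minimal sketch of the simplest candidate model above: fitting a Poisson-process (exponential renewal) model to inter-event times by maximum likelihood and scoring it with AIC. The age-dating uncertainty and open intervals handled by the paper's methods are ignored here:

```python
import numpy as np

def exponential_fit_aic(inter_event_times):
    """MLE of the exponential rate for a Poisson renewal process, plus AIC.
    Returns (rate, mean_return_time, aic)."""
    t = np.asarray(inter_event_times, dtype=float)
    lam = 1.0 / t.mean()                       # MLE of the occurrence rate
    loglik = len(t) * np.log(lam) - lam * t.sum()
    aic = 2 * 1 - 2 * loglik                   # one free parameter
    return lam, 1.0 / lam, aic

# Competing renewal models (e.g., lognormal, Brownian passage time) would be
# fitted likewise and ranked by their AIC values.
```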

  16. Clinical study on natural gingival color.

    PubMed

    Gómez-Polo, Cristina; Montero, Javier; Gómez-Polo, Miguel; Martín Casado, Ana María

    2018-05-29

    The aims of the study were: to describe the gingival color surrounding the upper incisors at three sites in the keratinized gingiva, analyzing the effect of possible modulating factors (socio-demographic and behavioral) on intersubject variability; to study whether the gingival color is the same in all three locations; and to describe intrasubject color differences in the keratinized gingiva band. Using the CIELAB color system, three reference areas (free gingival margin, keratinized gingival body, and birth or upper part of the keratinized gingiva) were studied in 259 individuals, as well as the related socio-demographic factors, oral habits, and chronic intake of medication. A Shadepilot™ spectrophotometer was used. Descriptive and inferential statistical analysis was performed. There are statistically significant differences between males and females for coordinates L* and a* in the middle and free gingival margin. For the b* coordinate, there are differences between males and females in all three locations studied (p < 0.05). The CIELAB natural gingival space is delimited by the coordinates L* minimum 28.3, L* maximum 65.4, a* minimum 11.1, a* maximum 37.2, b* minimum 6.9, and b* maximum 25.2. Age, smoking, and chronic intake of medication had no significant effect on gum color. There are perceptible color differences within the keratinized gingiva band. These chromatic differences must be taken into account if the prosthetic characterization of gingival tissue is to be considered acceptable. There are significant differences between the color coordinates of the three sites studied in the keratinized gingiva of men and women.
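
    Color differences in CIELAB space are conventionally summarized by the Euclidean ΔE*ab metric; a minimal sketch assuming the CIE76 formula, since the abstract does not state which difference formula the study used:

```python
def delta_e_cielab(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) coordinate triples;
    differences of roughly 1-3 units are commonly taken as perceptible."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# e.g. free gingival margin vs gingival body of one subject (made-up values):
# delta_e_cielab((46.1, 22.4, 15.0), (49.8, 20.1, 16.2)) -> ~4.5
```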

  17. Margins of safety provided by COSHH Essentials and the ILO Chemical Control Toolkit.

    PubMed

    Jones, Rachael M; Nicas, Mark

    2006-03-01

    COSHH Essentials, developed by the UK Health and Safety Executive, and the Chemical Control Toolkit (Toolkit), proposed by the International Labour Organization, are 'control banding' approaches to workplace risk management intended for use by proprietors of small and medium-sized businesses. Both systems group chemical substances into hazard bands based on toxicological endpoint and potency. COSHH Essentials uses the European Union's Risk-phrases (R-phrases), whereas the Toolkit uses R-phrases and the Globally Harmonized System (GHS) of Classification and Labeling of Chemicals. Each hazard band is associated with a range of airborne concentrations, termed exposure bands, which are to be attained by the implementation of recommended control technologies. Here we analyze the margin of safety afforded by the systems and, for each hazard band, define the minimal margin as the ratio of the minimum airborne concentration that produced the toxicological endpoint of interest in experimental animals to the maximum concentration in workplace air permitted by the exposure band. We found that the minimal margins were always <100, with some ranging to <1, and inversely related to molecular weight. The Toolkit-GHS system generally produced margins equal to or larger than COSHH Essentials, suggesting that the Toolkit-GHS system is more protective of worker health. Although these systems predict exposures comparable with current occupational exposure limits, we argue that the minimal margins are better indicators of health protection. Further, given the small margins observed, we feel it is important that revisions of these systems provide the exposure bands to users, so as to permit evaluation of control technology capture efficiency.
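
    The minimal margin defined above is a simple ratio; a one-line sketch for concreteness (argument names and example values are illustrative):

```python
def minimal_margin(min_effect_conc, exposure_band_max):
    """Minimal margin of safety: lowest airborne concentration producing the
    toxicological endpoint in animals, divided by the highest concentration
    the exposure band permits. Values below 100 were common in the study."""
    return min_effect_conc / exposure_band_max

# e.g. an effect at 50 mg/m^3 against a band ceiling of 1 mg/m^3 gives
# minimal_margin(50.0, 1.0) -> 50.0, i.e. a margin well under 100.
```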

  18. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factors were computed, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provides a useful and complementary alternative to existing species delimitation methods.
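
    Once log marginal likelihoods are in hand (e.g., from stepping-stone or path-sampling runs), the Bayes factor comparison itself is a subtraction; a minimal sketch using the common 2 ln BF reading, which the paper may or may not apply verbatim:

```python
def log_bayes_factor(log_ml_a, log_ml_b):
    """Compare two species-delimitation models from their log marginal
    likelihoods: returns (ln BF, 2 ln BF); by common convention,
    2 ln BF > 10 is taken as decisive support for model A over model B."""
    ln_bf = log_ml_a - log_ml_b
    return ln_bf, 2.0 * ln_bf

# e.g. log_bayes_factor(-10254.3, -10261.8) -> (7.5, 15.0): decisive support.
```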

  19. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  20. 26 CFR 1.994-2 - Marginal costing rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... labor 20.00 (iii) Total deductions 60.00 (c) Maximum combined taxable income 25.00 (4) Overall profit... qualify as export promotion expenses may be so claimed as export promotion expenses. (3) Overall profit... (determined under § 1.993-6) of the DISC derived from such sales, multiplied by the overall profit percentage...

  1. A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment

    ERIC Educational Resources Information Center

    Finch, Holmes; Monahan, Patrick

    2008-01-01

    This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…

  2. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  3. Self-Reported Well-Being of Women and Men with Intellectual Disabilities in England

    ERIC Educational Resources Information Center

    Emerson, Eric; Hatton, Chris

    2008-01-01

    We investigated the association between indicators of subjective well-being and the personal characteristics, socioeconomic position, and social relationships of a sample of 1,273 English adults with intellectual disabilities. Mean overall happiness with life was 71% of the scale maximum, a figure only marginally lower than typically reported…

  4. Semiparametric Item Response Functions in the Context of Guessing

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2016-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  5. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC, and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations, and the acceptance probability reduces exponentially with the number of iterations. R is memory-hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under a GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
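
    A bare-bones sketch of spatial simulated annealing for the MSSD criterion described above, in Python rather than R; the candidate grid doubles as the prediction grid, the cooling schedule is a simple exponential, and none of spsann's actual interfaces or internals are reproduced:

```python
import numpy as np

def anneal_mssd(candidates, n_points, n_iter=20_000, t0=1.0, cooling=0.9995, seed=0):
    """Optimize a sample pattern by spatial simulated annealing, minimizing the
    mean squared shortest distance (MSSD) from every candidate location to the
    nearest sample point. `candidates` is an (N, 2) array of x, y coordinates."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), size=n_points, replace=False)

    def mssd(sample_idx):
        d = np.linalg.norm(candidates[:, None, :] - candidates[sample_idx][None, :, :],
                           axis=2)
        return float(np.mean(d.min(axis=1) ** 2))

    energy, temp = mssd(idx), t0
    for _ in range(n_iter):
        trial = idx.copy()
        trial[rng.integers(n_points)] = rng.integers(len(candidates))  # move one point
        e_new = mssd(trial)
        # Metropolis acceptance: always take improvements, sometimes accept worse moves
        if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
            idx, energy = trial, e_new
        temp *= cooling
    return candidates[idx], energy
```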

  6. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
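
    The linearization step described above exploits the standard relation ln P(k) ≈ c − 2kh for sources at depth h; a least-squares Python sketch of that step only (the maximum entropy spectrum computation itself is not reproduced):

```python
import numpy as np

def depth_from_spectrum(wavenumbers, power):
    """Depth-to-source from the slope of the linearized power spectrum,
    ln P(k) ~ c - 2*k*h, with k in radians per unit length; h is returned
    in the same length unit. A profile would be processed like this window
    by window to build a depth profile along the track."""
    k = np.asarray(wavenumbers, dtype=float)
    slope, _intercept = np.polyfit(k, np.log(np.asarray(power, dtype=float)), 1)
    return -slope / 2.0
```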

  7. Climatic significance of the ostracode fauna from the Pliocene Kap Kobenhavn Formation, north Greenland

    USGS Publications Warehouse

    Brouwers, E.M.; Jorgensen, N.O.; Cronin, T. M.

    1991-01-01

    The Kap Kobenhavn Formation crops out in Greenland at 80°N latitude and marks the most northerly onshore Pliocene locality known. The sands and silts that comprise the formation were deposited in marginal marine and shallow marine environments. An abundant and diverse vertebrate and invertebrate fauna and plant megafossil flora provide age and paleoclimatic constraints. The age estimated for the Kap Kobenhavn ranges from 2.0 to 3.0 million years. Winter and summer bottom-water paleotemperatures were estimated on the basis of the ostracode assemblages. The marine ostracode fauna in units B1 and B2 indicates a subfrigid to frigid marine climate, with estimated minimum sea bottom temperatures (SBT) of -2°C and estimated maximum SBT of 6-8°C. Sediments assigned to unit B2 at locality 72 contain a higher proportion of warm-water genera, and the maximum SBT is estimated at 9-10°C. The marginal marine fauna in the uppermost unit B3 (locality 68) indicates a cold temperate to subfrigid marine climate, with an estimated minimum SBT of -2°C and an estimated maximum SBT ranging as high as 12-14°C. These temperatures indicate that, on average, the Kap Kobenhavn winters in the late Pliocene were similar to or perhaps 1-2°C warmer than winters today, and that summer temperatures were 7-8°C warmer than today. -from Authors

  8. Failure Assessment of Stainless Steel and Titanium Brazed Joints

    NASA Technical Reports Server (NTRS)

    Flom, Yury A.

    2012-01-01

    Following successful application of Coulomb-Mohr and interaction equations for evaluation of safety margins in Albemet 162 brazed joints, two additional base metal/filler metal systems were investigated. Specimens consisting of stainless steel brazed with a silver-base filler metal and titanium brazed with 1100 Al alloy were tested to failure under the combined action of tensile, shear, bending, and torsion loads. Finite element analysis (FEA), hand calculations, and digital image comparison (DIC) techniques were used to estimate failure stresses and construct failure assessment diagrams (FAD). This study confirms that the interaction equation R_σ + R_τ = 1, where R_σ and R_τ are the normal and shear stress ratios, can be used as a conservative lower-bound estimate of the failure criterion in stainless steel and titanium brazed joints.
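
    The interaction equation above gives a direct margin check; a minimal sketch (argument names and example values are illustrative):

```python
def interaction_margin(sigma, tau, sigma_allow, tau_allow):
    """Margin against the R_sigma + R_tau = 1 failure envelope, where
    R_sigma = sigma / sigma_allow and R_tau = tau / tau_allow.
    Positive return values indicate a margin under the combined load;
    zero or negative values indicate predicted failure."""
    return 1.0 - (sigma / sigma_allow + tau / tau_allow)

# e.g. interaction_margin(60.0, 25.0, 120.0, 100.0) -> 0.25 (25% margin)
```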

  9. CT differentiation of 1-2-cm gallbladder polyps: benign vs malignant.

    PubMed

    Song, E Rang; Chung, Woo-Suk; Jang, Hye Young; Yoon, Minjae; Cha, Eun Jung

    2014-04-01

    To evaluate MDCT findings of 1-2-cm gallbladder (GB) polyps for differentiation between benign and malignant polyps. Institutional review board approval was obtained, and informed consent was waived. Portal venous phase CT scans of 1-2-cm GB polyps caused by various pathologic conditions were retrospectively reviewed by two blinded observers. Among the 36 patients identified, 21 had benign polyps and the remaining 15 had malignant polyps. Size, margin, and shape of the GB polyps were evaluated. Attenuation values of the polyps, including mean attenuation, maximum attenuation, and standard deviation, were recorded. The degree of polyp enhancement was evaluated by visual inspection. Using these CT findings, each of the two radiologists assessed and recorded individual diagnostic confidence for differentiating benign versus malignant polyps on a 5-point scale. The diagnostic performance of CT was evaluated using receiver operating characteristic (ROC) curve analysis. There was no significant difference in size between benign and malignant GB polyps. An ill-defined margin and sessile morphology were significantly associated with malignancy. There was a significant difference in mean and maximum attenuation values between benign and malignant GB polyps. The mean standard deviation value of malignant polyps was significantly higher than that of benign polyps. All malignant polyps showed either hyperenhancement or marked hyperenhancement. The Az value for the diagnosis of malignant GB polyps was 0.905. Margin, shape, and enhancement degree are helpful in differentiating between benign and malignant polyps of 1-2 cm in size.

  10. Gross mismatch between thermal tolerances and environmental temperatures in a tropical freshwater snail: climate warming and evolutionary implications.

    PubMed

    Polgar, Gianluca; Khang, Tsung Fei; Chua, Teddy; Marshall, David J

    2015-01-01

    The relationship between acute thermal tolerance and habitat temperature in ectotherm animals informs about their thermal adaptation and is used to assess thermal safety margins and sensitivity to climate warming. We studied this relationship in an equatorial freshwater snail (Clea nigricans) belonging to a predominantly marine gastropod lineage (Neogastropoda, Buccinidae). We found that tolerance of heating and cooling exceeded average daily maximum and minimum temperatures by roughly 20°C in each case. Because habitat temperature is generally assumed to be the main selective factor acting on the fundamental thermal niche, the discordance between thermal tolerance and environmental temperature implies trait conservation following 'in situ' environmental change, or following novel colonisation of a thermally less-variable habitat. Whereas heat tolerance could relate to an historical association with the thermally variable and extreme marine intertidal fringe zone, cold tolerance could associate with either an ancestral life at higher latitudes, or represent adaptation to cooler, higher-altitudinal, tropical lotic systems. The broad upper thermal safety margin (the difference between heat tolerance and maximum environmental temperature) observed in this snail is grossly incompatible with the very narrow safety margins typically found in most terrestrial tropical ectotherms (insects and lizards), and hence with the emerging prediction that tropical ectotherms are especially vulnerable to environmental warming. A more comprehensive understanding of the climatic vulnerability of animal ectotherms thus requires greater consideration of taxonomic diversity, ecological transition, and evolutionary history.

  11. Dynamic Response and Residual Helmet Liner Crush Using Cadaver Heads and Standard Headforms.

    PubMed

    Bonin, S J; Luck, J F; Bass, C R; Gardiner, J C; Onar-Thomas, A; Asfour, S S; Siegmund, G P

    2017-03-01

    Biomechanical headforms are used for helmet certification testing and for reconstructing helmeted head impacts; however, their biofidelity and direct applicability to human head and helmet responses remain unclear. Dynamic responses of cadaver heads and three headforms, and residual foam liner deformations, were compared during motorcycle helmet impacts. Instrumented, helmeted heads/headforms were dropped onto the forehead region against an instrumented flat anvil at 75, 150, and 195 J. Helmets were CT scanned to quantify maximum liner crush depth and crush volume. General linear models were used to quantify the effect of head type and impact energy on linear acceleration, head injury criterion (HIC), force, maximum liner crush depth, and liner crush volume, and regression models were used to quantify the relationship between acceleration and both maximum crush depth and crush volume. The cadaver heads generated larger peak accelerations than all three headforms, larger HICs than the International Organization for Standardization (ISO) headform, larger forces than the Hybrid III and ISO headforms, larger maximum crush depths than the ISO headform, and larger crush volumes than the DOT headform. These significant differences between the cadaver heads and headforms need to be accounted for when attempting to estimate an impact exposure from a helmet's residual crush depth or volume.
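
    For reference, the head injury criterion named above; a brute-force Python sketch assuming the HIC15 variant (15 ms maximum window) over a resultant-acceleration trace in g, since the abstract does not state which window the study used:

```python
import numpy as np

def hic(time_s, accel_g, max_window_s=0.015):
    """Head Injury Criterion:
    HIC = max over [t1, t2] of (t2 - t1) * (average acceleration)^2.5,
    with the average taken over [t1, t2] and (t2 - t1) <= max_window_s.
    `accel_g` is the (non-negative) resultant head acceleration in g,
    sampled at strictly increasing times `time_s` (seconds)."""
    t = np.asarray(time_s, dtype=float)
    a = np.asarray(accel_g, dtype=float)
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window_s:
                break
            avg = np.trapz(a[i:j + 1], t[i:j + 1]) / dt
            best = max(best, dt * avg ** 2.5)
    return best
```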

  12. Inferring Phylogenetic Networks Using PhyloNet.

    PubMed

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  13. Analysis and Evaluation of Parameters Determining Maximum Efficiency of Fish Protection

    NASA Astrophysics Data System (ADS)

    Khetsuriani, E. D.; Kostyukov, V. P.; Khetsuriani, T. E.

    2017-11-01

    The article is concerned with experimental research findings. The efficiency of fish fry protection from entering water inlets is the main criterion of any fish protection facility or device. The research aimed to determine an adequate mathematical model E = f(PCT, Vp, α), where PCT, Vp, and α are controlled factors influencing the process of fish fry protection. The processing of the experimental data yielded an adequate regression model. We determined the maximum fish protection efficiency Emax = 94.21 and the minimum of the optimization function Emin = 44.41. As a result of the statistical processing of the experimental data, we obtained adequate dependences for determining the optimal rotational speed of the tip and the fish protection efficiency. The analysis of the fish protection efficiency dependence E% = f(PCT, Vp, α) allowed the authors to recommend the following optimized operating modes: the maximum fish protection efficiency is achieved at a process pressure PCT = 3 atm, stream velocity Vp = 0.42 m/s, and nozzle inclination angle α = 47°49'. The stream velocity Vp has the most critical influence on fish protection efficiency. The maximum efficiency of fish protection is obtained at a tip rotational speed of 70.92 rpm.

  14. The Seismicity of Two Hyperextended Margins

    NASA Astrophysics Data System (ADS)

    Redfield, Tim; Terje Osmundsen, Per

    2013-04-01

    A seismic belt marks the outermost edge of Scandinavia's proximal margin, inboard of and roughly parallel to the Taper Break. A similar near- to onshore seismic belt runs along its inner edge, roughly parallel to and outboard of the asymmetric, seaward-facing escarpment. The belts converge at both the northern and southern ends of Scandinavia, where crustal taper is sharp and the proximal margin is narrow. Very few seismic events have been recorded on the intervening, gently-tapering Trøndelag Platform. Norway's distribution of seismicity is systematically ordered with respect to 1) the structural templates of high-beta extension that shaped the thinning gradient during Late Jurassic or Early Cretaceous time, and 2) the topographically resurgent Cretaceous-Cenozoic "accommodation phase" family of escarpments that approximate the innermost limit of crustal thinning [see Redfield and Osmundsen (2012) for diagrams, definitions, discussion, and supporting citations]. Landwards from the belt of earthquake epicenters that marks the Taper Break, the crust consistently thickens, and large fault arrays tend to sole out at mid-crustal levels. Towards the sea, the crystalline continental crust is hyperextended, pervasively faulted, and generally very thin. Also, faulting and serpentinization may have affected the uppermost parts of the distal margin's lithospheric mantle. Such contrasting structural conditions may generate a contrasting stiffness: for a given stress, more strain can be accommodated in the distal margin than in the less faulted proximal margin. By way of comparison, inboard of the Taper Break on the gently-tapered Trøndelag Platform, faulting was not penetrative. There, similar structural conditions prevail and proximal margin seismicity is negligible. Because stress concentration can occur where material properties undergo significant contrast, the necking zone may constitute a natural localization point for post-thinning-phase earthquakes. In Scandinavia, loads generated by escarpment erosion, offshore sedimentary deposition, and post-glacial rebound have been periodically superimposed throughout the Neogene. Their vertical stress patterns are mutually reinforcing during deglaciation. However, compared to the post-glacial dome, the pattern of maximum uplift/unloading generated by escarpment erosion will be longer, more linear, and located atop the emergent proximal margin. The pattern of offshore maximum deposition/loading will be similar. This may help explain the asymmetric expenditure of Fennoscandia's annual seismic energy budget. It may also help explain the obvious conundrum: if stress generated by erosion and deposition is sufficiently great, fault reactivation and consequent seismicity can occur at any hyperextended passive margin sector regardless of its glacial history. Onshore Scandinavia, episodic footwall uplift and escarpment rejuvenation may have been driven by just such a mechanism throughout much of the later Cretaceous and Cenozoic. SE Brasil offers a glimpse of how Norway's hyperextended margin might manifest itself seismically in the absence of post-glacial rebound. Compilations suggest two seismic belts may exist. One, offshore, follows the thinned crust of the ultra-deep, hyperextended Campos and Santos basins. Onshore, earthquakes occur more commonly in the elevated highlands of the escarpments, and track especially the long, linear ranges such as the Serra da Mantiqueira and Serra do Espinhaço. Seismicity is rarer in the coastal lowlands, and largely absent in the Brasilian hinterland. Although never glaciated since the time of hyperextension and characterized by significantly fewer earthquakes in toto, SE Brasil's pattern of seismicity closely mimics Scandinavia's. Commencing after perhaps just a few tens of millions of years of 'sag' basin infill, accommodation-phase fault reactivation and footwall uplift at passive margins is the inexorable product of hyperextension. CITATIONS: Redfield, T.F., and Osmundsen, P.T., 2012, GSA Bulletin, doi:10.1130/B30691.1

  15. Algorithm for the Evaluation of Imperfections in Auto Bodywork Using Profiles from a Retroreflective Image

    PubMed Central

    Barber, Ramon; Zwilling, Valerie; Salichs, Miguel A.

    2014-01-01

    Nowadays the automobile industry is becoming more and more demanding as far as quality is concerned. Within the wide variety of processes in which this quality must be ensured, those regarding the squeezing of the auto bodywork are especially important due to the fact that the quality of the resulting product is tested manually by experts, leading to inaccuracies of all types. In this paper, an algorithm is proposed for the automated evaluation of the imperfections in the sheets of the bodywork after the squeezing process. The algorithm processes the profile signals from a retroreflective image and characterizes an imperfection. It is based on a convergence criterion that follows the line of the maximum gradient of the imperfection and gives its geometrical characteristics as a result: maximum gradient, length, width, and area. PMID:24504105
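
    A minimal sketch of the gradient-following idea described above, operating on a 2-D profile map; the stopping rule, sub-pixel handling, and retroreflective-image preprocessing of the actual algorithm are not reproduced:

```python
import numpy as np

def follow_max_gradient(z, seed, max_steps=500):
    """Trace the line of maximum gradient on a 2-D profile map `z` (numpy
    array) from a seed pixel, in the spirit of the convergence criterion
    described above; geometric characteristics such as the length of the
    imperfection would then follow from the returned path."""
    rows, cols = z.shape
    r, c = seed
    path, visited = [(r, c)], {(r, c)}
    for _ in range(max_steps):
        best_gain, best_step = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (rr, cc) in visited or not (0 <= rr < rows and 0 <= cc < cols):
                    continue
                gain = abs(z[rr, cc] - z[r, c])   # steepest local change
                if gain > best_gain:
                    best_gain, best_step = gain, (rr, cc)
        if best_step is None:        # converged: no unvisited steeper neighbour
            break
        r, c = best_step
        path.append(best_step)
        visited.add(best_step)
    return np.array(path)
```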

  16. A Relationship Between Constraint and the Critical Crack Tip Opening Angle

    NASA Technical Reports Server (NTRS)

    Johnston, William M.; James, Mark A.

    2009-01-01

    Of the various approaches used to model and predict fracture, the Crack Tip Opening Angle (CTOA) fracture criterion has been used successfully for a wide range of two-dimensional thin-sheet and thin-plate applications. As thicker structure is considered, modeling the full three-dimensional fracture process will become essential. This paper investigates relationships between the local CTOA evaluated along a three-dimensional crack front and the corresponding local constraint. Previously reported tunneling crack front shapes were measured during fracture by pausing each test and fatigue cycling the specimens to mark the crack surface. Finite element analyses were run to model the tunneling shape during fracture, with the analysis loading conditions duplicating those tests. The results show an inverse relationship between the critical fracture value and constraint, which is valid both before and after maximum load.

  17. Algorithm for the evaluation of imperfections in auto bodywork using profiles from a retroreflective image.

    PubMed

    Barber, Ramon; Zwilling, Valerie; Salichs, Miguel A

    2014-02-05

    Nowadays the automobile industry is becoming more and more demanding as far as quality is concerned. Within the wide variety of processes in which this quality must be ensured, those regarding the squeezing of the auto bodywork are especially important due to the fact that the quality of the resulting product is tested manually by experts, leading to inaccuracies of all types. In this paper, an algorithm is proposed for the automated evaluation of the imperfections in the sheets of the bodywork after the squeezing process. The algorithm processes the profile signals from a retroreflective image and characterizes an imperfection. It is based on a convergence criterion that follows the line of the maximum gradient of the imperfection and gives its geometrical characteristics as a result: maximum gradient, length, width, and area.

  18. SU-F-J-17: Patient Localization Using MRI-Guided Soft Tissue for Head-And-Neck Radiotherapy: Indication for Margin Reduction and Its Feasibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, X; Yang, Y; Jack, N

    Purpose: On-board MRI provides superior soft-tissue contrast, allowing patient alignment using the tumor or nearby critical structures. This study analyzes inter-fraction patient setup variations for H&N MRI-guided IGRT using soft-tissue targets, and derives an appropriate CTV-to-PTV margin and its clinical implication. Methods: 282 MR images for 10 H&N IMRT patients treated on a ViewRay system were retrospectively analyzed. Patients were immobilized using a thermoplastic mask on a customized headrest fitted in a radiofrequency coil and positioned to soft-tissue targets. The inter-fraction patient displacements were recorded to compute the PTV margins using the recipe 2.5Σ + 0.7σ. New IMRT plans optimized on the revised PTVs were generated to evaluate the delivered dose distributions. An in-house dose deformation registration tool was used to assess the resulting dosimetric consequences when margin adaptation is performed based on weekly MR images. The cumulative doses were compared to the reduced-margin plans for targets and critical structures. Results: The inter-fraction displacements (and standard deviations), Σ and σ, were tabulated for MRI and compared to kV-CBCT. The computed CTV-to-PTV margin was 3.5 mm for soft-tissue-based registration. There were minimal differences between the planned and delivered doses when comparing the clinical and the reduced-margin PTV plans: paired t-tests yielded p = 0.38 and 0.66 between the planned and delivered doses for the adapted-margin plans for the maximum cord and mean parotid dose, respectively. Target V95 received comparable doses as planned for the reduced-margin plans. Conclusion: The 0.35 T MRI offers acceptable soft-tissue contrast and good spatial resolution for patient alignment and target visualization. Better tumor conspicuity from MRI allows soft-tissue-based alignments with potentially improved accuracy, suggesting a benefit of margin reduction for H&N radiotherapy. The reduced-margin plans (i.e., 2 mm) resulted in improved normal-structure sparing and accurate dose delivery to achieve the intended treatment goal under MR guidance.
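
    The margin recipe quoted above is a direct linear combination of the systematic (Σ) and random (σ) setup-error components; a one-line sketch with hypothetical example values:

```python
def ctv_to_ptv_margin(big_sigma, small_sigma):
    """CTV-to-PTV margin from the 2.5*Sigma + 0.7*sigma recipe, where Sigma is
    the systematic and sigma the random setup-error SD (same length units)."""
    return 2.5 * big_sigma + 0.7 * small_sigma

# e.g. hypothetical Sigma = 1.2 mm and sigma = 0.7 mm:
# ctv_to_ptv_margin(1.2, 0.7) -> 3.49 mm, consistent with the 3.5 mm reported.
```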

  19. On the computational aspects of comminution in discrete element method

    NASA Astrophysics Data System (ADS)

    Chaudry, Mohsin Ali; Wriggers, Peter

    2018-04-01

    In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a maximum-tensile-stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by using an iterative scheme which, depending on the geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed, which tends to alleviate the local unstable motion of particles and increases the computational efficiency.
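
    A minimal sketch of the two steps just described (the crushing criterion and the mass-recovery bookkeeping); the uniform rescaling rule is an illustrative stand-in for the paper's iterative scheme:

```python
def crush_particle(max_tensile_stress, strength, parent_mass, fragment_masses):
    """Apply the maximum-tensile-stress crushing criterion, then rescale the
    fragment masses so their total matches the parent mass (naive packing of
    fragments leaves geometric voids, i.e. a mass deficit).
    Returns the corrected fragment masses, or None if the particle survives."""
    if max_tensile_stress < strength:
        return None                                   # no crushing event
    scale = parent_mass / sum(fragment_masses)        # recover the lost mass
    return [m * scale for m in fragment_masses]

# e.g. crush_particle(12.0, 10.0, 1.0, [0.4, 0.3, 0.2]) -> masses summing to 1.0
```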

  20. Low thrust spacecraft transfers optimization method with the stepwise control structure in the Earth-Moon system in terms of the L1-L2 transfer

    NASA Astrophysics Data System (ADS)

    Fain, M. K.; Starinova, O. L.

    2016-04-01

    The paper outlines a method for determining the locally optimal stepwise control structure for low thrust spacecraft transfer optimization in the Earth-Moon system, including the L1-L2 transfer. The total flight time is used as the optimization criterion. The optimal control programs were obtained using Pontryagin's maximum principle. As a result of the optimization, the optimal control programs, the corresponding trajectories, and the minimal total flight times were determined.

  1. Reliability analysis of structural ceramics subjected to biaxial flexure

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1991-01-01

    The reliability of alumina disks subjected to biaxial flexure is predicted on the basis of statistical fracture theory using a critical strain energy release rate fracture criterion. Results on a sintered silicon nitride are consistent with reliability predictions based on pore-initiated penny-shaped cracks with preferred orientation normal to the maximum principal stress. Assumptions with regard to flaw types and their orientations in each ceramic can be justified by fractography. It is shown that there are no universal guidelines for selecting fracture criteria or assuming flaw orientations in reliability analyses.

  2. Interface stability in a slowly rotating low-gravity tank Theory

    NASA Technical Reports Server (NTRS)

    Gans, R. F.; Leslie, F. W.

    1986-01-01

    The equilibrium configuration of a bubble in a rotating liquid confined by flat axial boundaries (baffles) is found. The maximum baffle spacing assuring bubble confinement is bounded from above by the natural length of a bubble in an infinite medium under the same conditions. Effects of nonzero contact angle are minimal. The problem of dynamic stability is posed. It can be solved in the limit of rapid rotation, for which the bubble is a long cylinder. Instability is to axisymmetric perturbations; nonaxisymmetric perturbations are stable. The stability criterion agrees with earlier results.

  3. A comparison of pay-as-bid and marginal pricing in electricity markets

    NASA Astrophysics Data System (ADS)

    Ren, Yongjun

    This thesis investigates the behaviour of electricity markets under marginal and pay-as-bid pricing. Marginal pricing is believed to yield the maximum social welfare and is currently implemented by most electricity markets. However, in view of recent electricity market failures, pay-as-bid has been extensively discussed as a possible alternative to marginal pricing. In this research, marginal and pay-as-bid pricing have been analyzed in electricity markets with both perfect and imperfect competition. The perfect competition case is studied under both exact and uncertain system marginal cost prediction. The comparison of the two pricing methods is conducted through two steps: (i) identify the best offer strategy of the generating companies (gencos); (ii) analyze the market performance under these optimum genco strategies. The analysis results together with numerical simulations show that pay-as-bid and marginal pricing are equivalent in a perfect market with exact system marginal cost prediction. In perfect markets with uncertain demand prediction, the two pricing methods are also equivalent but in an expected value sense. If we compare from the perspective of second order statistics, all market performance measures exhibit much lower values under pay-as-bid than under marginal pricing. The risk of deviating from the mean is therefore much higher under marginal pricing than under pay-as-bid. In an imperfect competition market with exact demand prediction, the research shows that pay-as-bid pricing yields lower consumer payments and lower genco profits. This research provides quantitative evidence that challenges some common claims about pay-as-bid pricing. One is that under pay-as-bid, participants would soon learn how to offer so as to obtain the same or higher profits than what they would have obtained under marginal pricing. This research however shows that, under pay-as-bid, participants can at best earn the same profit or expected profit as under marginal pricing. A second common claim refuted by this research is that pay-as-bid does not provide correct price signals if there is a scarcity of generation resources. We show that pay-as-bid does provide a price signal with such characteristics and furthermore argue that the price signal under marginal pricing with gaming may not necessarily be correct since it would then not reflect a lack of generation capacity but a desire to increase profit.
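
    The difference between the two settlement rules is easy to state numerically: under marginal (uniform) pricing every accepted offer is paid the clearing price, whereas under pay-as-bid each accepted offer is paid its own offer price. A toy single-period clearing sketch follows; the offer stack and demand are invented.

```python
# Offers as (quantity_MW, price_$_per_MWh); invented numbers.
offers = [(100, 20.0), (80, 35.0), (60, 50.0), (50, 70.0)]
demand_mw = 210

offers.sort(key=lambda o: o[1])          # merit order: cheapest first
dispatched, remaining = [], demand_mw
for qty, price in offers:
    take = min(qty, remaining)
    if take > 0:
        dispatched.append((take, price))
        remaining -= take

clearing_price = dispatched[-1][1]       # price of the marginal accepted offer
uniform_payment = clearing_price * sum(q for q, _ in dispatched)
pay_as_bid_payment = sum(q * p for q, p in dispatched)

print(f"marginal pricing payment: ${uniform_payment:,.0f}")    # $10,500
print(f"pay-as-bid payment:       ${pay_as_bid_payment:,.0f}") # $6,300
```

    With truthful offers, pay-as-bid payments come out lower, which is precisely why offer strategies differ between the two rules and why the thesis makes the comparison at the gencos' optimal strategies rather than at truthful bids.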

  4. A class of optimum digital phase locked loops

    NASA Technical Reports Server (NTRS)

    Kumar, R.; Hurd, W. J.

    1986-01-01

    This paper presents a class of optimum digital filters for digital phase locked loops, for the important case in which the maximum update rate of the loop filter and numerically controlled oscillator (NCO) is limited. This case is typical when the loop filter is implemented in a microprocessor. In these situations, pure delay is encountered in the loop transfer function, and thus the stability and gain margin of the loop are of crucial interest. The optimum filters designed for such situations are evaluated in terms of their gain margin for stability, dynamic error, and steady-state error performance. For situations involving high phase dynamics, an adaptive and programmable implementation is also proposed to obtain an overall optimum strategy.

  5. Overdentures on natural teeth: a new approach.

    PubMed

    Previgliano, V; Barone Monfrin, S; Santià, G; Preti, G

    2004-01-01

    The study presents a new type of coping for overdentures on natural teeth. Custom-made copings were prepared on 10 extracted teeth, their marginal fit was observed microscopically by means of a mechanical device, and software was employed to measure the gap. The marginal fit evaluation gave satisfactory values, with mean gap measurements below the clinically accepted limits (mean gap: 25.3 µm; minimum 7.3 µm, maximum 56.5 µm). The advantages of these new copings are: the rapidity of their preparation; the protection of the root canal treatment, because with this chair-side method the coping is prepared and cemented in one session; and the low cost.

  6. Towards improving searches for optimal phylogenies.

    PubMed

    Ford, Eric; St John, Katherine; Wheeler, Ward C

    2015-01-01

    Finding the optimal evolutionary history for a set of taxa is a challenging computational problem, even when restricting possible solutions to be "tree-like" and focusing on the maximum-parsimony optimality criterion. This has led to much work on using heuristic tree searches to find approximate solutions. We present an approach for finding exact optimal solutions that employs and complements the current heuristic methods for finding optimal trees. Given a set of taxa and a set of aligned sequences of characters, there may be subsets of characters that are compatible, and for each such subset there is an associated (possibly partially resolved) phylogeny with edges corresponding to each character state change. These perfect phylogenies serve as anchor trees for our constrained search space. We show that, for sequences with compatible sites, the parsimony score of any tree T is at least the parsimony score of the anchor trees plus the number of inferred changes between T and the anchor trees. As the maximum-parsimony optimality score is additive, the sum of the lower bounds on compatible character partitions provides a lower bound on the complete alignment of characters. This yields a region in the space of trees within which the best tree is guaranteed to be found; limiting the search for the optimal tree to this region can significantly reduce the number of trees that must be examined in a search of the space of trees. We analyze this method empirically using four different biological data sets as well as surveying 400 data sets from the TreeBASE repository, demonstrating the effectiveness of our technique in reducing the number of steps in exact heuristic searches for trees under the maximum-parsimony optimality criterion.
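
    For readers unfamiliar with the underlying score, the sketch below implements Fitch's small-parsimony count for a single character on a fixed rooted binary tree; it illustrates what the bounded quantity is, not the paper's anchor-tree search. The tree and the character states are invented.

```python
def fitch_score(tree, leaf_states):
    """Return (state_set, changes) for one character on a rooted binary tree.

    tree: nested tuples of leaf names, e.g. (("A", "B"), ("C", "D")).
    leaf_states: dict mapping leaf name -> character state.
    """
    if isinstance(tree, str):                     # leaf node
        return {leaf_states[tree]}, 0
    left_set, left_cost = fitch_score(tree[0], leaf_states)
    right_set, right_cost = fitch_score(tree[1], leaf_states)
    common = left_set & right_set
    if common:                                    # intersection: no change needed
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1   # union: one change

tree = (("A", "B"), ("C", "D"))
states = {"A": "G", "B": "G", "C": "T", "D": "G"}
_, changes = fitch_score(tree, states)
print(f"parsimony score for this character: {changes}")   # prints 1
```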

  7. Modeling pollution potential input from the drainage basin into Barra Bonita reservoir, São Paulo - Brazil.

    PubMed

    Prado, R B; Novo, E M L M

    2015-05-01

    In this study multi-criteria modeling tools are applied to map the spatial distribution of the drainage basin's potential to pollute Barra Bonita Reservoir, São Paulo State, Brazil. The Barra Bonita Reservoir Basin has undergone intense land use/land cover changes in recent decades, including the fast conversion from pasture to sugarcane. This study addresses the lack of information about the variables (criteria) which affect the pollution potential of the drainage basin by building a Geographic Information System which provides their spatial distribution at sub-basin level. The GIS was fed by several data sets (geomorphology, pedology, geology, drainage network and rainfall) provided by public agencies. Landsat satellite images provided the land use/land cover map for 2002. Ratings and weights of each criterion defined by specialists supported the modeling process. The results showed a wide variability in the pollution potential of different sub-basins according to the criterion applied. If only land use is analyzed, for instance, less than 50% of the basin is classified as highly threatening to water quality, including sub-basins located near the reservoir, indicating the importance of protection areas at the margins. Despite the subjectivity involved in the weighting process, the multi-criteria analysis model allowed the simulation of scenarios which support rational land use policies at sub-basin level regarding the protection of water resources.
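
    At each sub-basin, the multi-criteria step reduces to a weighted sum of criterion ratings. A minimal sketch follows; the criterion names, ratings (1 = low threat, 5 = high), and weights are hypothetical stand-ins for the specialist-defined values.

```python
# Hypothetical ratings per sub-basin (1 = low pollution potential, 5 = high).
ratings = {
    "sub_basin_A": {"land_use": 5, "soil": 3, "slope": 2, "rainfall": 4},
    "sub_basin_B": {"land_use": 2, "soil": 4, "slope": 4, "rainfall": 3},
}
# Hypothetical specialist weights; they must sum to 1.
weights = {"land_use": 0.4, "soil": 0.25, "slope": 0.2, "rainfall": 0.15}

for basin, r in ratings.items():
    score = sum(weights[c] * r[c] for c in weights)   # weighted overlay
    print(f"{basin}: pollution potential score = {score:.2f}")
```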

  8. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  9. Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2015-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  10. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  11. Lord's Wald Test for Detecting DIF in Multidimensional IRT Models: A Comparison of Two Estimation Approaches

    ERIC Educational Resources Information Center

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…

  12. Description and Phylogeny of Urostyla grandis wiackowskii subsp. nov. (Ciliophora, Hypotricha) from an Estuarine Mangrove in Brazil.

    PubMed

    Paiva, Thiago da Silva; Shao, Chen; Fernandes, Noemi Mendes; Borges, Bárbara do Nascimento; da Silva-Neto, Inácio Domingos

    2016-01-01

    Interphase specimens, aspects of physiological reorganization and divisional morphogenesis were investigated in a strain of a hypotrichous ciliate highly similar to Urostyla grandis Ehrenberg (type species of Urostyla), collected from a mangrove area in the estuary of the Paraíba do Sul river (Rio de Janeiro, Brazil). The results revealed that although interphase specimens match the known morphologic variability of U. grandis, the morphogenetic processes show conspicuous differences. The parental adoral zone is entirely renewed during morphogenesis, and marginal cirri exhibit a unique combination of developmental modes, in which left marginal rows originate from multiple anlagen arising from the innermost left marginal cirral row, whereas the right marginal ciliature originates from individual within-row anlagen. Based on these characteristics, a new subspecies, namely U. grandis wiackowskii subsp. nov., is proposed, and consequently U. grandis grandis Ehrenberg, stat. nov. is established. Bayesian and maximum-likelihood analyses of the 18S rDNA unambiguously placed U. grandis wiackowskii as the adelphotaxon of a cluster formed by other U. grandis sequences. The implications of these findings for the systematics of Urostyla are discussed.

  13. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated.

  14. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.

  15. Distillation of secret-key from a class of compound memoryless quantum sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de; Janßen, G., E-mail: gisbert.janssen@tum.de

    We consider secret-key distillation from tripartite compound classical-quantum-quantum (cqq) sources with free forward public communication under a strong security criterion. We design protocols which are universally reliable and secure in this scenario. These are shown to achieve asymptotically optimal rates as long as a certain regularity condition is fulfilled by the set of its generating density matrices. We derive a multi-letter formula which describes the optimal forward secret-key capacity for all compound cqq sources being regular in this sense. We also determine the forward secret-key distillation capacity for situations where the legitimate sending party has perfect knowledge of his/her marginal state deriving from the source statistics. In this case the regularity conditions can be dropped. Our results show that the capacities with and without the mentioned kind of state knowledge are equal as long as the source is generated by a regular set of density matrices. We demonstrate that regularity of cqq sources is not only a technical but also an operational issue. For this reason, we give an example of a source which has zero secret-key distillation capacity without sender knowledge, while achieving positive rates is possible if sender marginal knowledge is provided.

  16. Einstein-Podolsky-Rosen correlations and Bell correlations in the simplest scenario

    NASA Astrophysics Data System (ADS)

    Quan, Quan; Zhu, Huangjun; Fan, Heng; Yang, Wen-Li

    2017-06-01

    Einstein-Podolsky-Rosen (EPR) steering is an intermediate type of quantum nonlocality which sits between entanglement and Bell nonlocality. A set of correlations is Bell nonlocal if it does not admit a local hidden variable (LHV) model, while it is EPR nonlocal if it does not admit a local hidden variable-local hidden state (LHV-LHS) model. It is interesting to know what states can generate EPR-nonlocal correlations in the simplest nontrivial scenario, that is, two projective measurements for each party sharing a two-qubit state. Here we show that a two-qubit state can generate EPR-nonlocal full correlations (excluding marginal statistics) in this scenario if and only if it can generate Bell-nonlocal correlations. If full statistics (including marginal statistics) is taken into account, surprisingly, the same scenario can manifest the simplest one-way steering and the strongest hierarchy between steering and Bell nonlocality. To illustrate these intriguing phenomena in simple setups, several concrete examples are discussed in detail, which facilitates experimental demonstration. In the course of study, we introduce the concept of restricted LHS models and thereby derive a necessary and sufficient semidefinite-programming criterion to determine the steerability of any bipartite state under given measurements. Analytical criteria are further derived in several scenarios of strong theoretical and experimental interest.

  17. Immediate performance of self-etching versus system adhesives with multiple light-activated restoratives.

    PubMed

    Irie, M; Suzuki, K; Watts, D C

    2004-11-01

    The purpose of this study was to evaluate the performance of both single and double applications of a self-etching dental adhesive (Adper Prompt L-Pop), when used with three classes of light-activated restorative materials, in comparison to the performance of each restorative's system adhesive. The evaluation parameters considered for the adhesive systems were (a) immediate marginal adaptation (or gap formation) in tooth cavities, (b) free setting shrinkage-strain determined by the immediate marginal gap-width in a non-bonding Teflon cavity, and (c) their immediate shear bond-strengths to enamel and to dentin. The maximum marginal gap-width and the opposing-width (if any) in the tooth cavities and in the Teflon cavities were measured immediately (3 min) after light-activation. The shear bond-strengths to enamel and to dentin were also measured at 3 min. For light-activated restorative materials during early setting (<3 min), application of Adper Prompt L-Pop exhibited generally superior marginal adaptation to most system adhesives, but there was no additional benefit from double application. The marginal gaps in tooth cavities and in Teflon cavities were highly correlated (r = 0.86-0.89, p < 0.02-0.01). For enamel and dentin shear bond-strengths, there were no significant differences between single and double applications for all materials tested except Toughwell and Z 250 with enamel. Single application of a self-etch adhesive was a feasible and beneficial alternative to system adhesives for several classes of restorative. Marginal gap-widths in tooth cavities correlated more strongly with free shrinkage-strain magnitudes than with bond-strengths to tooth structure.

  18. Crustal geometry of the northeastern Gulf of Aden passive margin: localization of the deformation inferred from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Tiberi, C.; Leroy, S.; d'Acremont, E.; Bellahsen, N.; Ebinger, C.; Al-Lazki, A.; Pointu, A.

    2007-03-01

    Here we use receiver function analysis to retrieve crustal thickness and crustal composition along the 35-My-old passive margin of the eastern Gulf of Aden. Our aims are to use results from the 3-D seismic array to map crustal stretching across and along the Aden margin in southern Oman. The array recorded local and teleseismic events between 2003 March and 2004 March. Seventy-eight events were used in our joint inversions for Vp/Vs ratio and depth. The major results are: (1) Crustal thickness decreases from the uplifted rift flank of the margin towards the Sheba mid-ocean ridge. We found a crustal thickness of about 35 km beneath the northern rift flank. This value decreases sharply to 26 km beneath the post-rift subsidence zone on the Salalah coastal plain. This 10 km of crustal thinning occurs across a horizontal distance of less than 30 km showing a localization of the crustal thinning below the first known rifted block of the margin. (2) A second rift margin transect located about 50 km to the east shows no thinning from the coast to 50 km onshore. The lack of crustal thickness variation indicates that the maximum crustal stretching could be restricted to offshore regions. (3) The along-strike variations in crustal structure demonstrate the scale and longevity of the regular along-axis rift segmentation. (4) Extension is still observed north of the rifted domain, 70 km onshore from the coast, making the width of the margin larger than first expected from geology. (5) The crust has a felsic to normal composition with a probably strong effect of the sedimentary layer on the Vp/Vs ratio (between 1.67 and 1.91).

  19. Controls of tectonics and sediment source locations on along-strike variations in transgressive deposits on the northern California margin

    USGS Publications Warehouse

    Spinelli, G.A.; Field, M.E.

    2003-01-01

    We identify two surfaces in the shallow subsurface on the Eel River margin offshore northern California, a lowstand erosion surface, likely formed during the last glacial maximum, and an overlying surface likely formed during the most recent transgression of the shoreline. The lowstand erosion surface, which extends from the inner shelf to near the shelfbreak and from the Eel River to Trinidad Head (~80 km), truncates underlying strata on the shelf. Above the surface, inferred transgressive coastal and estuarine sedimentary units separate it from the transgressive surface on the shelf. Early in the transgression, Eel River sediment was likely both transported down the Eel Canyon and dispersed on the slope, allowing transgressive coastal sediment from the smaller Mad River to accumulate in a recognizable deposit on the shelf. The location of coastal Mad River sediment accumulation was controlled by the location of the paleo-Mad River. Throughout the remainder of the transgression, dispersed sediment from the Eel River accumulated an average of 20 m of onlapping shelf deposits. The distribution and thickness of these transgressive marine units was strongly modified by northwest-southeast trending folds. Thick sediment packages accumulated over structural lows in the lowstand surface. The thinnest sediment accumulations (0-10 m) were deposited over structural highs along faults and uplifting anticlines. The Eel margin, an active margin with steep, high sediment-load streams, has developed a thick transgressive systems tract. On this margin sediment accumulates as rapidly as the processes of uplift and downwarp locally create and destroy accommodation space. Sequence stratigraphic models of tectonically active margins should account for variations in accommodation space along margins as well as across them.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Brandon T., E-mail: Brandon.Nguyen@act.gov.au; Canberra Hospital, Radiation Oncology Department, Garran, ACT; Deb, Siddhartha

    Purpose: To determine an appropriate clinical target volume for partial breast radiation therapy (PBRT) based on the spatial distribution of residual invasive and in situ carcinoma after wide local excision (WLE) for early breast cancer or ductal carcinoma in situ (DCIS). Methods and Materials: We performed a prospective pathologic study of women potentially eligible for PBRT who had re-excision and/or completion mastectomy after WLE for early breast cancer or DCIS. A pathologic assessment protocol was used to determine the maximum radial extension (MRE) of residual carcinoma from the margin of the initial surgical cavity. Women were stratified by the closest initial radial margin width: negative (>1 mm), close (>0 mm and ≤1 mm), or involved. Results: The study population was composed of 133 women with a median age of 59 years (range, 27-82 years) and the following stage groups: 0 (13.5%), I (40.6%), II (38.3%), and III (7.5%). The histologic subtypes of the primary tumor were invasive ductal carcinoma (74.4%), invasive lobular carcinoma (12.0%), and DCIS alone (13.5%). Residual carcinoma was present in the re-excision and completion mastectomy specimens in 55.4%, 14.3%, and 7.2% of women with an involved, close, and negative margin, respectively. In the 77 women with a noninvolved radial margin, the MRE of residual disease, if present, was ≤10 mm in 97.4% (95% confidence interval 91.6-99.5) of cases. Larger MRE measurements were significantly associated with an involved margin (P<.001), tumor size >30 mm (P=.03), premenopausal status (P=.03), and negative progesterone receptor status (P=.05). Conclusions: A clinical target volume margin of 10 mm would encompass microscopic residual disease in >90% of women potentially eligible for PBRT after WLE with noninvolved resection margins.

  2. Limitations of the planning organ at risk volume (PRV) concept.

    PubMed

    Stroom, Joep C; Heijmen, Ben J M

    2006-09-01

    Previously, we determined a planning target volume (PTV) margin recipe for geometrical errors in radiotherapy equal to M_T = 2Σ + 0.7σ, with Σ and σ the standard deviations describing systematic and random errors, respectively. In this paper, we investigated margins for organs at risk (OAR), yielding the so-called planning organ at risk volume (PRV). For critical organs with a maximum dose (D_max) constraint, we calculated margins such that D_max in the PRV is equal to the motion-averaged D_max in the (moving) clinical target volume (CTV). We studied margins for the spinal cord in 10 head-and-neck cases and 10 lung cases, each with two different clinical plans. For critical organs with a dose-volume constraint, we also investigated whether a margin recipe was feasible. For the 20 spinal cords considered, the average margin recipe found was: M_R = 1.6Σ + 0.2σ, with variations for systematic and random errors of 1.2Σ to 1.8Σ and -0.2σ to 0.6σ, respectively. The variations were due to differences in shape and position of the dose distributions with respect to the cords. The recipe also depended significantly on the volume definition of D_max. For critical organs with a dose-volume constraint, the PRV concept appears even less useful because a margin around, e.g., the rectum changes the volume in such a manner that dose-volume constraints stop making sense. The concept of PRV for planning of radiotherapy is of limited use. Therefore, alternative ways should be developed to include geometric uncertainties of OARs in radiotherapy planning.
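
    Placing the two recipes side by side makes the asymmetry concrete: for the same geometric uncertainties, the average OAR recipe found here yields a much smaller margin than the target recipe. A hedged numeric comparison, with invented Σ and σ values:

```python
sigma_sys = 2.0    # Σ in mm (invented)
sigma_rand = 3.0   # σ in mm (invented)

ptv_margin = 2.0 * sigma_sys + 0.7 * sigma_rand   # target recipe M_T quoted above
prv_margin = 1.6 * sigma_sys + 0.2 * sigma_rand   # average OAR recipe M_R found here

print(f"PTV margin: {ptv_margin:.1f} mm")   # 6.1 mm
print(f"PRV margin: {prv_margin:.1f} mm")   # 3.8 mm
```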

  3. Numerical modelling of edge-driven convection during rift-to-drift transition: application to the Red Sea

    NASA Astrophysics Data System (ADS)

    Fierro, Elisa; Capitanio, Fabio A.; Schettino, Antonio; Morena Salerno, V.

    2017-04-01

    We use numerical modeling to investigate the coupling of mantle instabilities and surface tectonics along lithospheric steps that develop during rifting. We address whether edge-driven convection (EDC) beneath rifted continental margins and shear flow during the rift-drift transition can play a role in the observed post-rift compressive tectonic evolution of the divergent continental margins along the Red Sea. We run a series of 2D simulations to examine the relationship between the maximum compression and key geometrical parameters of the step beneath continental margins, such as the step height due to lithosphere thickness variation and the width of the margins, and test the effect of rheology by varying the temperature- and stress-dependent viscosity in the lithosphere and asthenosphere. The development of instabilities is initially illustrated as a function of these parameters, to show the controls on the lithosphere strain distribution and magnitude. We then address the transient evolution of the instabilities to characterize their duration. In an additional suite of models, we address the development of EDC during plate motions, thus accounting for the mantle shearing due to spreading. Our results show an increase of strain with the step height as well as with the margin width up to 200 km; beyond this value the influence of the ridge margin can be neglected. Strain rates are then quantified for a range of laboratory-constrained constitutive laws for mantle- and lithosphere-forming minerals. These models propose a viable mechanism to explain the post-rift tectonic inversion observed along the Arabian continental margin and the episodic ultra-fast seafloor spreading in the central Red Sea, where the role of EDC has been invoked.

  4. Plate Kinematic model of the NW Indian Ocean and derived regional stress history of the East African Margin

    NASA Astrophysics Data System (ADS)

    Tuck-Martin, Amy; Adam, Jürgen; Eagles, Graeme

    2015-04-01

    Starting with the break-up of Gondwana, the northwest Indian Ocean and its continental margins in Madagascar, East Africa and western India formed by divergence of the African and Indian plates and were shaped by a complicated sequence of plate boundary relocations, ridge propagation events, and the independent movement of the Seychelles microplate. As a result, attempts to reconcile the different plate-tectonic components and processes into a coherent kinematic model have so far been unsatisfactory. A new high-resolution plate kinematic model has been produced in an attempt to solve these problems, using seafloor spreading data and rotation parameters generated by a mixture of visual fitting of magnetic isochron data and iterative joint inversion of magnetic isochron and fracture zone data. Using plate motion vectors and plate boundary geometries derived from this model, the first-order regional stress pattern was modelled for distinct phases of margin formation. The stress pattern is correlated with the tectono-stratigraphic history of related sedimentary basins. The plate kinematic model identifies three phases of spreading, from the Jurassic to the Paleogene, which resulted in the formation of three main oceanic basins. Prior to these phases, intracontinental 'Karoo' rifting episodes in the late Carboniferous to late Triassic had failed to break up Gondwana, but initiated the formation of sedimentary basins along the East African and West Madagascan margins. At the start of the first phase of spreading (183 to 133 Ma) predominantly NW - SE extension caused continental rifting that separated Madagascar/India/Antarctica from Africa. Maximum horizontal stresses trended perpendicular to the local plate-kinematic vector, and parallel to the rift axes. During and after continental break-up and subsequent spreading, the regional stress regime changed drastically. The extensional stress regime became restricted to the active spreading ridges that in turn adopted trends normal to the plate divergence vector. Away from the active ridges, compressional horizontal stresses caused by ridge-push forces were transmitted through the subsiding oceanic lithosphere, with an SH_max orientation parallel to plate divergence vectors. These changes are documented by the lower Bajocian continental breakup unconformity, which can be traced throughout East African basins. At 133 Ma, the plate boundary moved from north to south of Madagascar, incorporating it into the African plate and initiating its separation from Antarctica. The orientation of the plate divergence vector however did not change markedly. The second phase (89 - 61 Ma) led to the separation of India from Madagascar, initiating a new and dramatic change in stress orientation from N-S to ENE-WSW. This led to renewed tectonic activity in the sedimentary basins of western Madagascar. In the third phase (61 Ma to present) asymmetric spreading of the Carlsberg Ridge separated India from the Seychelles and the Mascarene Plateau via the southward propagation of the Carlsberg Ridge to form the Central Indian Ridge. The anti-clockwise rotation of the independent Seychelles microplate between chrons 28n (64.13 Ma) and 26n (58.38 Ma) and the opening of the short-lived Laxmi Basin (67 Ma to abandonment within chron 28n (64.13 - 63.10 Ma)) have been further constrained by the new plate kinematic model.
    Along the East African margin, SH_max remained in a NE - SW orientation and the sedimentary basins experienced continued thick, deep water sediment deposition. Contemporaneously, in the sedimentary basins along the East African passive margin, ridge-push related maximum horizontal stresses became progressively outweighed by local gravity-driven NE-SW maximum horizontal stresses trending parallel to the margin. These stress regimes are caused by sediment loading and extensional collapse of thick sediment wedges, predominantly controlled by margin geometry. Our study successfully integrates an interpretation of paleo-stress regimes constrained by the new high-resolution plate kinematic model and basin history to produce a margin-scale tectono-stratigraphic framework that highlights the important interplay of plate boundary forces and basin formation events along the East African margin.

  5. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could benefit cost-benefit based non-stationary bivariate design flood estimation across the world.
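
    As a concrete illustration of coupling marginals through a copula, the sketch below evaluates a Gumbel copula and the joint exceedance probability of two flood variables. The copula family, the dependence parameter θ, and the quantile levels are invented; in a non-stationary setting such as NSCOBE, θ and the marginal parameters would additionally vary with time.

```python
import math

def gumbel_copula_cdf(u: float, v: float, theta: float) -> float:
    """Gumbel copula C(u, v) = exp(-[(-ln u)^θ + (-ln v)^θ]^(1/θ)), θ >= 1."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

# Invented example: peak discharge and flood volume, both at their 99% quantile.
u = v = 0.99
theta = 2.5   # invented dependence strength (θ = 1 would be independence)

# Joint exceedance P(U > u, V > v) via inclusion-exclusion.
joint_exceedance = 1.0 - u - v + gumbel_copula_cdf(u, v, theta)
print(f"P(both variables exceed their 99% quantiles) = {joint_exceedance:.4f}")
```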

  6. Vertical mercury distributions in the oceans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, G.A.; Fitzgerald, W.F.

    1988-06-01

    The vertical distribution of mercury (Hg) was determined at coastal and open ocean sites in the northwest Atlantic and Pacific Oceans. Reliable and diagnostic Hg distributions were obtained, permitting major processes governing the marine biogeochemistry of Hg to be identified. The northwest Atlantic near Bermuda showed surface water Hg concentrations near 4 pM, a maximum of 10 pM within the main thermocline, and concentrations less than or equal to surface water values below the depth of the maximum. The maximum appears to result from lateral transport of Hg-enriched waters from higher latitudes. In the central North Pacific, surface waters (to 940 m) were slightly elevated (1.9 ± 0.7 pM) compared to deeper waters (1.4 ± 0.4 pM), but no thermocline Hg maximum was observed. At similar depths, Hg concentrations near Bermuda were elevated compared to the central North Pacific Ocean. The authors hypothesize that the source of this Hg comes from diagenetic reactions in oxic margin sediments, releasing dissolved Hg to the overlying water. Geochemical steady-state box modeling arguments predict a relatively short (~350 years) mean residence time for Hg in the oceans, demonstrating the reactive nature of Hg in seawater and precluding significant involvement in nutrient-type recycling. Mercury's distributional features and reactive nature suggest that interactions of Hg with settling particulate matter and margin sediments play important roles in regulating oceanic Hg concentrations. Oceanic Hg distributions are governed by an external cycling process, in which water column distributions reflect a rapid competition between the magnitude of the input source and the intensity of the (water column) removal process.

  7. An application of almost marginal conditional stochastic dominance (AMCSD) on forming efficient portfolios

    NASA Astrophysics Data System (ADS)

    Slamet, Isnandar; Mardiana Putri Carissa, Siska; Pratiwi, Hasih

    2017-10-01

    Investors always seek an efficient portfolio, that is, a portfolio that has the maximum return for a given risk or minimal risk for a given return. The almost marginal conditional stochastic dominance (AMCSD) criterion can be used to form efficient portfolios. The aim of this research is to apply the AMCSD criterion to form an efficient portfolio of bank shares listed in the LQ-45. This criterion is used when there are areas that do not satisfy the marginal conditional stochastic dominance (MCSD) criterion; in other words, it is derived from the ratio of the area that violates the MCSD criterion to the total area that does and does not violate the MCSD criterion. Based on data for the bank stocks listed on the LQ-45, there are 38 efficient portfolios among the 420 portfolios of 4 stocks each, and 315 efficient portfolios among the 1710 portfolios of 3 stocks each.
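
    The paper's AMCSD construction conditions on a common market return; as a simplified stand-in, the sketch below computes a violation-area ratio for "almost" second-order stochastic dominance between two return samples, which conveys the idea of comparing the violating area with the total area. The return samples are synthetic, and this dominance variant is a simplification, not the authors' exact criterion.

```python
import numpy as np

def almost_ssd_violation_ratio(x, y, grid_size=500):
    """Violation-area ratio for 'almost' second-order stochastic dominance of
    sample x over sample y (0 = clean dominance; small values = 'almost')."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    t = np.linspace(min(x[0], y[0]), max(x[-1], y[-1]), grid_size)
    fx = np.searchsorted(x, t, side="right") / len(x)   # empirical CDF of x
    fy = np.searchsorted(y, t, side="right") / len(y)   # empirical CDF of y
    diff = np.cumsum(fx - fy) * (t[1] - t[0])           # ∫ (F_x - F_y) du
    violation = np.sum(np.clip(diff, 0.0, None))        # area where SSD fails
    total = np.sum(np.abs(diff))
    return float(violation / total) if total > 0 else 0.0

rng = np.random.default_rng(1)
a = rng.normal(0.010, 0.04, 1000)   # invented daily returns, portfolio A
b = rng.normal(0.006, 0.05, 1000)   # invented daily returns, portfolio B
print(f"violation ratio of A over B: {almost_ssd_violation_ratio(a, b):.3f}")
```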

  8. SU-F-BRD-09: Is It Sufficient to Use Only Low Density Tissue-Margin to Compensate Inter-Fractionation Setup Uncertainties in Lung Treatment?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Chen, T

    2014-06-15

    Purpose: In lung radiation treatment, the PTV is formed with a margin around the GTV (or CTV/ITV). Although the GTV is most likely of water-equivalent density, the PTV margin may be formed of the surrounding low-density tissues, which may lead to an unrealistic dosimetric plan. This study evaluates whether the concern about dose calculation inside a PTV with only a low-density margin is justified in lung treatment. Methods: Three SBRT cases were analyzed. The PTV in the original plan (Plan-O) was created with a 5-10 mm margin outside the ITV to incorporate setup errors and all mobility from 10 respiratory phases. Test plans were generated with the GTV shifted to the PTV edge to simulate extreme situations with maximum setup uncertainties. Two representative positions, at the very posterior-superior (Plan-PS) and anterior-inferior (Plan-AI) edges, were considered. The virtual GTV was assigned a density of 1.0 g·cm−3 and the surrounding lung, including the PTV margin, was defined as 0.25 g·cm−3. An additional plan with a 1 mm tissue margin instead of a full lung margin was created to evaluate whether a composite margin (Plan-Comp) gives a better approximation for dose calculation. All plans were generated on the average CT using the Analytical Anisotropic Algorithm with heterogeneity correction on, and all planning parameters/monitor units remained unchanged. DVH analyses were performed for comparisons. Results: Despite the non-static dose distribution, the high-dose region synchronized with tumor positions. This might be due to scatter conditions, as greater doses were absorbed in the solid tumor than in the surrounding low-density lung tissue. However, the plans still showed missing target coverage in general. A certain level of composite margin might give a better approximation for the dose calculation. Conclusion: Our exploratory results suggest that with the lung margin only, the planning dose of the PTV might overestimate the coverage of the target during treatment. The significance of this overestimation might warrant further investigation.

  9. Anisotropic toughness and strength in graphene and its atomistic origin

    NASA Astrophysics Data System (ADS)

    Hossain, M. Zubaer; Ahmed, Tousif; Silverman, Benjamin; Khawaja, M. Shehroz; Calderon, Justice; Rutten, Andrew; Tse, Stanley

    2018-01-01

    This paper presents the implications of crystallographic orientation for toughness and ideal strength in graphene under lattice symmetry-preserving and symmetry-breaking deformations. In symmetry-preserving deformation, both toughness and strength are isotropic, regardless of the chirality of the lattice; in symmetry-breaking deformation, they are strongly anisotropic, even in the presence of vacancy defects. The maximum and minimum of toughness or strength occur for loading along the zigzag direction and the armchair direction, respectively. The anisotropic behavior is governed by a complex interplay among bond-stretching deformation, bond-bending deformation, and the chirality of the lattice. Nevertheless, the condition for crack nucleation is dictated by the maximum bond-force required for bond rupture, and it is independent of the chiral angle of the lattice or the loading direction. At the onset of crack nucleation a localized nucleation zone is formed, wherein bonds rupture locally, satisfying the maximum bond-force criterion. The nucleation zone acts as the physical origin in triggering the fracture nucleation process, but its presence is undetectable from macroscopic stress-strain data.

  10. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies.

    PubMed

    Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro

    2014-05-01

    Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
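
    The rescaling implied by such a correction is simple arithmetic once the extravasated activity has been estimated: SUV carries the injected dose in its denominator, so it can be rescaled by the effectively administered activity. A minimal sketch with invented numbers follows; this is the arithmetic only, not the authors' ROI- or threshold-based dose estimation.

```python
def corrected_suv(measured_suv: float, injected_mbq: float,
                  extravasated_mbq: float) -> float:
    """Rescale SUV by the activity that actually entered circulation.

    SUV uses the injected dose in its denominator, so an extravasated
    fraction f makes the reported SUV too low by the factor (1 - f).
    """
    effective_mbq = injected_mbq - extravasated_mbq
    return measured_suv * injected_mbq / effective_mbq

# Invented example: 10% of a 300 MBq injection left at the injection site.
print(f"corrected SUVmax = {corrected_suv(5.0, 300.0, 30.0):.2f}")   # 5.56
```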

  11. Effect of Climate Change on Water Temperature and ...

    EPA Pesticide Factsheets

    There is increasing evidence that our planet is warming, and this warming is also resulting in rising sea levels. Estuaries, which are located at the interface between land and ocean, are impacted by these changes. We used the CE-QUAL-W2 water quality model to predict changes in water temperature as a function of increasing air temperatures and rising sea level for the Yaquina Estuary, Oregon (USA). Annual average air temperature in the Yaquina watershed is expected to increase about 0.3 °C per decade by 2040-2069. An air temperature increase of 3 °C in the Yaquina watershed is likely to result in estuarine water temperature increasing by 0.7 to 1.6 °C. The largest water temperature increases are expected in the upper portion of the estuary, while sea level rise may ameliorate some of the warming in the lower portion of the estuary. The smallest changes in water temperature are predicted to occur in the summer, and the maximum changes during the winter and spring. Increases in air temperature may result in an increase in the number of days per year that the 7-day maximum average temperature exceeds 18 °C (the criterion for protection of rearing and migration of salmonids and trout), as well as other water quality concerns. In the upstream portion of the estuary, a 4 °C increase in air temperature is predicted to cause an increase of 40 days not meeting the temperature criterion, while in the lower estuary the increase will depend upon the rate of sea level rise (rang
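
    One common reading of the criterion named here is the 7-day running average of daily maximum temperatures (7-DADM) compared against 18 °C. The sketch below counts exceedance days on a synthetic daily series; only the 18 °C threshold comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic year of daily maximum water temperatures (°C), invented seasonality.
days = np.arange(365)
daily_max_temp = (13.0 + 7.0 * np.sin(2 * np.pi * (days - 100) / 365)
                  + rng.normal(0.0, 1.0, size=365))

# 7-day running mean of the daily maxima (7-DADM).
window = np.ones(7) / 7.0
seven_day_avg = np.convolve(daily_max_temp, window, mode="valid")

exceedance_days = int(np.sum(seven_day_avg > 18.0))
print(f"days with 7-DADM above 18 °C: {exceedance_days}")
```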

  12. Intrathoracic pressure impulse predicts pulmonary contusion volume in ballistic blunt thoracic trauma.

    PubMed

    Prat, Nicolas; Rongieras, Frédéric; Voiglio, Eric; Magnan, Pascal; Destombe, Casimir; Debord, Eric; Barbillon, Franck; Fusai, Thierry; Sarron, Jean-Claude

    2010-10-01

    Blunt thoracic trauma, including behind-armour blunt trauma or impact from a less lethal kinetic weapon (LLKW) projectile, may cause injuries, including pulmonary contusions that can result in potentially lethal secondary complications. These lung injuries may be caused by intrathoracic pressure waves. The aim of this study was to observe dynamic changes in intrathoracic hydrostatic pressure during ballistic blunt thoracic trauma and to find correlations between these hydrostatic pressure parameters (especially the impulse parameter) and physical damage. Thirty anesthetized pigs sustained a blunt thoracic trauma. In group 1 (n = 20), pigs were protected by a National Institute of Justice class III or IV bulletproof vest and shot with 7.62 NATO bullets. In group 2 (n = 10), pigs were shot by an LLKW. Intrathoracic pressure was recorded with an intraesophageal pressure sensor and three parameters were determined: the maximum intrathoracic pressure, the maximum intrathoracic pressure impulse (PI_max), and (P·dP/dt)_max, derived from Viano's viscous criterion. The relative right lower lung lobe contusion volume was also measured. Different thoracic loading conditions were obtained. PI_max correlated best with relative pulmonary contusion volume (R² = 0.64, p < 0.0001). This result was homogeneous across all experiments and was not related to the type of chest impact (LLKW-induced trauma or behind-armour blunt trauma). PI_max is a good predictor of pulmonary contusion volume after ballistic blunt thoracic trauma. It is a useful criterion when kinetic energy records or thoracic wall displacement data are unavailable, and the recording and calculation of this physical value are quite simple in animals.
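
    The pressure-derived predictors can be written directly from a sampled pressure trace: the peak pressure, the maximum of the running pressure impulse ∫p dt (PI_max), and the maximum of the product P·dP/dt. The pulse shape, sampling rate, and units in the sketch below are invented.

```python
import numpy as np

# Synthetic intrathoracic overpressure pulse (kPa) sampled at 100 kHz (invented).
dt = 1e-5
t = np.arange(0.0, 0.01, dt)
pressure = 80.0 * np.exp(-t / 1e-3) * np.sin(2 * np.pi * t / 4e-3).clip(min=0.0)

p_max = pressure.max()                      # peak pressure
pi_running = np.cumsum(pressure) * dt       # running impulse, ∫ p dt
pi_max = pi_running.max()                   # PI_max
dp_dt = np.gradient(pressure, dt)           # numerical time derivative
p_dpdt_max = (pressure * dp_dt).max()       # (P·dP/dt)_max

print(f"P_max = {p_max:.1f} kPa, PI_max = {pi_max * 1e3:.2f} kPa·ms, "
      f"(P·dP/dt)_max = {p_dpdt_max:.3g} kPa²/s")
```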

  13. Topographic aspects of photic driving in the electroencephalogram of children and adolescents.

    PubMed

    Lazarev, V V; Infantosi, A F C; Valencio-de-Campos, D; deAzevedo, L C

    2004-06-01

    The electroencephalogram amplitude spectra at 11 fixed frequencies of intermittent photic stimulation (3 to 24 Hz) were combined into driving "profiles" for 14 scalp points in 8 male and 7 female normal subjects aged 9 to 17 years. The driving response varied over frequency and was detected in 70 to 100% of cases in the occipital areas (maximum) and in 27 to 77% of cases in the frontal areas (minimum), using as a criterion a peak amplitude 20% higher than those of its neighbors. Each subject responded, on average, to 9.7 ± 1.15 intermittent photic stimulation frequencies in the right occipital area and to 6.8 ± 1.97 frequencies in the right frontal area. Most of the driving responses (relative to the preceding background) were significant according to the spectral F-test (alpha = 0.05), which also detected changes in some cases of low-amplitude responses not revealed by the peak criterion. The profiles had two maxima, in the alpha and theta bands, in all leads. The latter was not present in the background spectra in the posterior areas and was less pronounced in the anterior ones. The weight of the profile theta maximum increased towards the frontal areas, where the two maxima were similar, while the profile amplitudes decreased. The profiles repeated the shape of the background spectra, except for the theta band. The interhemispheric correlation between profiles was high. The theta driving detected in all recorded areas suggests a generalized influence of the theta generators in prepubertal and pubertal subjects.
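
    The peak criterion used here, an amplitude at the stimulation frequency at least 20% above its neighboring profile points, takes only a few lines. The sketch below assumes the driving profile is given as a list of amplitudes at the 11 stimulation frequencies; the numbers are invented.

```python
def driving_detected(amplitudes, idx, ratio=1.2):
    """Peak criterion: the amplitude at the stimulation frequency must exceed
    its neighboring profile points by at least 20%."""
    left = amplitudes[idx - 1] if idx > 0 else float("-inf")
    right = amplitudes[idx + 1] if idx < len(amplitudes) - 1 else float("-inf")
    return amplitudes[idx] >= ratio * max(left, right)

# Invented driving profile (µV) at the 11 stimulation frequencies (3-24 Hz).
profile = [4.1, 4.0, 5.6, 4.2, 4.3, 4.1, 6.0, 4.4, 4.2, 4.0, 3.9]
for i, amp in enumerate(profile):
    if driving_detected(profile, i):
        print(f"driving response at frequency index {i} (amplitude {amp})")
```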

  14. Forming limit prediction by an evolving non-quadratic yield criterion considering the anisotropic hardening and r-value evolution

    NASA Astrophysics Data System (ADS)

    Lian, Junhe; Shen, Fuhui; Liu, Wenqi; Münstermann, Sebastian

    2018-05-01

    Constitutive model development has been driven towards a very accurate, fine-resolution description of material behaviour in response to changes in various environmental variables. The evolving features of anisotropic behaviour during deformation have therefore drawn particular attention due to their possible impact on the sheet metal forming industry. An evolving non-associated Hill48 (enHill48) model was recently proposed and applied to forming limit prediction by coupling it with the modified maximum force criterion. On the one hand, that study showed the significance of including the anisotropic evolution for accurate forming limit prediction. On the other hand, it also illustrated that the enHill48 model introduced an instability region that suddenly decreases the formability. Therefore, in this study, an alternative model that is based on the associated flow rule and provides similar anisotropic predictive capability is extended to capture the evolving effects and is further applied to forming limit prediction. The final results are compared with experimental data as well as with the results of the enHill48 model.

  15. Dimensionality of the 9-item Utrecht Work Engagement Scale revisited: A Bayesian structural equation modeling approach.

    PubMed

    Fong, Ted C T; Ho, Rainbow T H

    2015-01-01

    The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.

  17. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    NASA Astrophysics Data System (ADS)

    Thomas, Abey E.

    2018-05-01

    Bridges with unequal column heights are one of the main irregularities in bridge design, particularly when negotiating steep valleys, and they make the bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they should perform in a regular fashion, i.e. the capacity of each column should be utilized evenly. But this type of behaviour is often missing when the column heights are unequal along the length of the bridge, as the short columns bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest a seismic regularity criterion for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those proposed in the literature, is assessed in the present study for bridges designed as per the Indian Standards.

  18. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it is developed more fully than the algorithms currently used in the literature and can be used to achieve more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
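
    The threshold-driven allocation described above pairs naturally with a classical marginal-analysis bit-allocation loop; the sketch below is a generic textbook version under the high-rate quantizer model D_i(b) = var_i * 2^(-2b), not the dissertation's algorithm.

      import numpy as np

      def greedy_bit_allocation(variances, total_bits):
          """Greedy (marginal-analysis) bit allocation: each bit goes to the
          coefficient whose distortion currently drops the most, assuming
          D_i(b) = var_i * 2**(-2*b)."""
          bits = np.zeros(len(variances), dtype=int)
          dist = np.asarray(variances, dtype=float)  # distortion at 0 bits
          for _ in range(total_bits):
              gain = dist - dist / 4.0      # drop from adding one bit (2**-2 = 1/4)
              i = int(np.argmax(gain))
              bits[i] += 1
              dist[i] /= 4.0
          return bits

      # four coefficient variances (invented), eight bits to spend
      print(greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], total_bits=8))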

  19. Ram Pressure Stripping Made Easy: An Analytical Approach

    NASA Astrophysics Data System (ADS)

    Köppen, J.; Jáchym, P.; Taylor, R.; Palouš, J.

    2018-06-01

    The removal of gas from galaxies by ram pressure stripping is treated with a purely kinematic description. The solution has two asymptotic limits: if the duration of the ram pressure pulse exceeds the period of vertical oscillations perpendicular to the galactic plane, the commonly used quasi-static criterion of Gunn & Gott is obtained, which uses the maximum ram pressure that the galaxy has experienced along its orbit. For shorter pulses the outcome depends on the time-integrated ram pressure. This parameter pair fully describes the gas mass fraction that is stripped from a given galaxy. This approach closely reproduces results from SPH simulations. We show that typical galaxies follow a very tight relation in this parameter space, corresponding to a pressure pulse length of about 300 Myr. Thus, the Gunn & Gott criterion provides a good description for galaxies in larger clusters. Applying the analytic description to a sample of 232 Virgo galaxies from the GoldMine database, we show that the ICM indeed provides the ram pressures needed to explain the observed deficiencies. We can also distinguish current and past strippers, including objects whose stripping state was unknown.
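
    For reference, the quasi-static Gunn & Gott condition mentioned above is commonly written as below; the notation is the standard textbook one and is not taken from this paper.

      % Quasi-static Gunn & Gott (1972) stripping condition (standard form):
      \[
        \rho_{\mathrm{ICM}}\, v^{2} \;>\; 2\pi G\, \Sigma_{\star}\, \Sigma_{\mathrm{gas}},
      \]
      % ram pressure from the intracluster medium (density rho_ICM, orbital speed v)
      % versus the gravitational restoring force per unit area of the disc, set by
      % the stellar and gas surface densities.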

  20. Scaling effect on the fracture toughness of bone materials using MMTS criterion.

    PubMed

    Akbardoost, Javad; Amirafshari, Reza; Mohsenzade, Omid; Berto, Filippo

    2018-05-21

    The aim of this study is to present a stress-based approach for investigating the effect of specimen size on the fracture toughness of bone materials. The proposed approach is a modified form of the classical fracture criterion called maximum tangential stress (MTS). The mechanical properties of bone differ in the longitudinal and transverse directions, and hence the tangential stress component in the proposed approach should be determined for the orthotropic medium. Since only the singular terms of the series expansions were obtained in previous studies, the tangential stress is here computed from finite element analysis. In this study, the critical distance is also assumed to be size dependent, and a semi-empirical formulation is used to describe this size dependency. By comparing the results predicted by the proposed approach with those reported in previous studies, it is shown that the proposed approach can predict the fracture resistance of cracked bone while taking into account the effect of specimen size. Copyright © 2018 Elsevier Ltd. All rights reserved.
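
    For orientation, the classical isotropic MTS relations that the modified criterion builds on can be written as follows; the study itself evaluates the tangential stress in the orthotropic medium by finite elements and treats the critical distance as size dependent.

      % Near-tip tangential stress (mixed mode, isotropic) and the MTS onset condition:
      \[
        \sigma_{\theta\theta}(r,\theta) \;=\; \frac{1}{\sqrt{2\pi r}}\,
        \cos\frac{\theta}{2}\left[K_{I}\cos^{2}\frac{\theta}{2}
        \;-\;\frac{3}{2}\,K_{II}\sin\theta\right],
        \qquad
        \sigma_{\theta\theta}\bigl(r_{c},\theta_{0}\bigr) \;=\; \sigma_{c}
        \;\;\text{at fracture,}
      \]
      % with crack growth along the direction theta_0 that maximizes
      % sigma_thetatheta, evaluated at the critical distance r_c.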

  1. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  2. Recent developments in analysis of crack propagation and fracture of practical materials. [stress analysis in aircraft structures

    NASA Technical Reports Server (NTRS)

    Hardrath, H. F.; Newman, J. C., Jr.; Elber, W.; Poe, C. C., Jr.

    1978-01-01

    The limitations of linear elastic fracture mechanics in aircraft design and in the study of fatigue crack propagation in aircraft structures are discussed. NASA-Langley research to extend the capabilities of fracture mechanics to predict the maximum load that can be carried by a cracked part and to deal with aircraft design problems is reported. Achievements include: (1) improved stress intensity solutions for laboratory specimens; (2) a fracture criterion for practical materials; (3) crack propagation predictions that account for mean stress and high maximum stress effects; (4) crack propagation predictions for variable amplitude loading; and (5) the prediction of crack growth and residual stress in built-up structural assemblies. These capabilities are incorporated into a first-generation computerized analysis that allows for damage tolerance and tradeoffs with other disciplines to produce efficient designs that meet current airworthiness requirements.

  3. Perspectives of different type biological life support systems (BLSS) usage in space missions

    NASA Astrophysics Data System (ADS)

    Bartsev, S. I.; Gitelson, J. I.; Lisovsky, G. M.; Mezhevikin, V. V.; Okhonin, V. A.

    1996-10-01

    In this paper an attempt is made to combine three important criteria for LSS comparison: minimum mass, maximum safety, and maximum quality of life. Well-known types of BLSS were considered: with higher plants; with higher plants and mushrooms; with microalgae; and with hydrogen-oxidizing bacteria. These BLSSs were compared in terms of "integrated" mass for the cases of a vegetarian diet and a "normal" one (with animal proteins and fats). It was shown that the BLSS with higher plants and incineration of wastes becomes the best when the exploitation period exceeds 1 yr. The dependence of the higher-plant LSS structure on operation time was found. Comparison of BLSSs in terms of integral reliability (this criterion includes the mass and quality-of-life criteria) for a lunar base scenario showed that BLSSs with higher plants are advantageous in reliability and comfort. This comparison was made both for the achieved level of closure technology and for a prospective one.

  4. Successful resection of a giant mediastinal non-seminomatous germ cell tumor showing fluorodeoxyglucose accumulation after neoadjuvant chemotherapy: report of a case.

    PubMed

    Takada, Kazuki; Morodomi, Yosuke; Okamoto, Tatsuro; Suzuki, Yuzo; Fujishita, Takatoshi; Kitahara, Hirokazu; Shimamatsu, Shinichiro; Kohno, Mikihiro; Kawano, Daigo; Hidaka, Noriko; Nakanishi, Yoichi; Maehara, Yoshihiko

    2014-05-01

    A 32-year-old man presented with a mediastinal non-seminomatous germ cell tumor showing fluorodeoxyglucose (FDG) accumulation (maximum standardized uptake value = 22.21) and an extremely elevated blood alpha-fetoprotein (AFP) level (9203.0 ng/ml). The patient underwent 4 cycles of neoadjuvant chemotherapy (cisplatin, bleomycin, and etoposide), which normalized the AFP level and reduced the tumor size, allowing complete resection without the support of extracorporeal circulation. Although preoperative positron emission tomography revealed increased FDG uptake in the residual tumor (maximum standardized uptake value = 3.59), the pathologic evaluation revealed that no viable germ cell tumor cells remained. We believe FDG uptake should not be used as a criterion for surgical resection after neoadjuvant chemotherapy. It is appropriate to resect the residual tumor regardless of FDG uptake after induction chemotherapy if the tumor is resectable and the AFP level normalizes.

  5. New algorithms for optimal reduction of technical risks

    NASA Astrophysics Data System (ADS)

    Todinov, M. T.

    2013-06-01

    The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of the risk of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
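
    The last point is easy to check numerically: for a hypothetical bet with positive expected profit, the probability of an overall loss shrinks as identical independent bets accumulate. The win probability and payoffs below are invented for illustration.

      from math import comb

      def prob_net_loss(n, p=0.6, win=1.0, loss=1.0):
          """Probability that n i.i.d. bets (win `win` with prob. p, lose `loss`
          otherwise) end with a negative total."""
          return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(n + 1)
                     if k * win - (n - k) * loss < 0)

      # a single bet loses 40% of the time, yet long sequences rarely end in loss
      for n in (1, 5, 25, 100):
          print(n, round(prob_net_loss(n), 4))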

  6. Deglaciation-induced uplift and seasonal variations patterns of bedrock displacement in Greenland ice sheet margin observed from GPS, GRACE and InSAR

    NASA Astrophysics Data System (ADS)

    Lu, Q.; Amelung, F.; Wdowinski, S.

    2017-12-01

    The Greenland ice sheet is rapidly shrinking, with the fastest retreat and thinning occurring at the ice sheet margin and near the outlet glaciers. The changes in ice mass cause an elastic response of the bedrock. Theoretically, ice mass loss during the summer melting season is associated with bedrock uplift, whereas increasing ice mass during the winter months is associated with bedrock subsidence. Here we examine the annual changes of the vertical displacements measured at 37 GPS stations and compare the results with gravity over the Greenland drainage basins from GRACE. We use both Fourier Series (FS) analysis and the Cubic Smoothing Spline (CSS) method to estimate the phases and amplitudes of the seasonal variations. Both methods show significantly different seasonal behavior in southern and northern Greenland. The average amplitude of bedrock displacement in south Greenland (3.29±0.02 mm) is about twice that in the north (1.65±0.02 mm). The phase of maximum bedrock uplift (November) is largely consistent with the time of minimum ice mass load in south Greenland (October). However, the phase of maximum bedrock uplift in north Greenland (February) is 4 months later than the minimum ice mass load in the north Greenland basins (October). In addition, we present ground deformation near several well-known glaciers in Greenland, such as Petermann glacier and Jakobshavn glacier. We process InSAR data from the TerraSAR-X and Sentinel satellites based on small-baseline interferograms. We observe rapid deglaciation-induced uplift and seasonal variations in the exposed bedrock near the glacier ice margins.
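
    A minimal version of the harmonic fit used for phase and amplitude estimation is sketched below on a synthetic series; the amplitude (3.3 mm) and phase (month 10) are planted to mimic the south-Greenland numbers, not derived from the GPS data.

      import numpy as np

      # Hypothetical monthly vertical-displacement series (mm), five years long.
      t = np.arange(60) / 12.0                      # time in years
      rng = np.random.default_rng(0)
      y = 3.3 * np.cos(2 * np.pi * (t - 10.0 / 12.0)) + 0.3 * rng.standard_normal(t.size)

      # Least-squares fit of one annual harmonic: y ~ a*cos + b*sin + c
      X = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), np.ones_like(t)])
      a, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
      amplitude = np.hypot(a, b)                                     # mm
      phase_months = (np.arctan2(b, a) / (2 * np.pi) * 12.0) % 12.0  # month of maximum
      print(round(amplitude, 2), round(phase_months, 1))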

  7. Qualitative computer aided evaluation of dental impressions in vivo.

    PubMed

    Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H

    2006-01-01

    Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase, and two-step techniques) or on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin and the mantle and occlusal surfaces). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated-measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin was identified as the weak spot of impression taking.

  8. A survey of kernel-type estimators for copula and their applications

    NASA Astrophysics Data System (ADS)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: the parametric, nonparametric, and semiparametric methods. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, beta kernel, transformation method, and local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
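
    As a toy version of the first estimator in the survey, the sketch below folds a plain Gaussian-kernel density estimate back into the unit square by mirror reflection of the pseudo-observations; the bandwidth, sample, and correlation are invented, and the survey's other estimators (beta kernel, transformation methods) are not shown.

      import numpy as np

      def mirror_reflection_copula_density(u, v, grid, h=0.1):
          """Toy mirror-reflection kernel estimator of a bivariate copula density.
          u, v: pseudo-observations in (0,1); grid: 1-D evaluation points.
          Each point is reflected across the boundaries of [0,1]^2 so that mass
          leaking outside the unit square is folded back in."""
          pts = []
          for su, cu in ((1, 0), (-1, 0), (-1, 2)):       # x, -x, 2-x
              for sv, cv in ((1, 0), (-1, 0), (-1, 2)):   # y, -y, 2-y
                  pts.append(np.column_stack([su * u + cu, sv * v + cv]))
          pts = np.vstack(pts)                             # 9 reflected copies
          gx, gy = np.meshgrid(grid, grid)
          dens = np.zeros_like(gx)
          for x, y in pts:
              dens += np.exp(-((gx - x)**2 + (gy - y)**2) / (2 * h**2))
          return dens / (2 * np.pi * h**2 * len(u))

      # pseudo-observations from ranks of hypothetical correlated returns
      rng = np.random.default_rng(1)
      z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
      u = (np.argsort(np.argsort(z[:, 0])) + 1) / (len(z) + 1)
      v = (np.argsort(np.argsort(z[:, 1])) + 1) / (len(z) + 1)
      c = mirror_reflection_copula_density(u, v, np.linspace(0.05, 0.95, 10))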

  9. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683

  10. [Esthetic restoration for anterior teeth with the hot pressed porcelain laminate veneers].

    PubMed

    Xu, Shao-ping; Luo, Xiao-ping; Shi, Yu-juan

    2012-10-01

    To evaluate the esthetic effect of anterior porcelain veneers fabricated with heat-pressed glass ceramic. Thirty-two patients who wanted to receive esthetic restorative treatment for 206 anterior teeth were selected: 20 for dental fluorosis, 8 for lightly tetracycline-stained teeth, and 4 for labial enamel hypoplasia or obvious cracks on the enamel surface. According to the color of the adjacent teeth, skin, and lips, heat-pressed IPS e.max ingots of different colors were chosen to mold the restorations. A special staining technique was then applied to the marginal ridge and incisal ridge of the veneers after they were carefully trimmed in the mouth. Restorations were then bonded with Variolink II resin cement. After 7 years of follow-up, a modified USPHS criterion was used to evaluate the esthetic effect. The translucency of the veneers was superior. Marginal integrity of the veneers was excellent, and they fitted well with the marginal finish line of the abutment. There was no marginal staining after the veneers had been used for 7 years, and the veneers produced an excellent chameleon effect by absorbing the color of the adjacent teeth and gums; with careful carving, the veneers also reproduced the surface morphology of natural enamel. Over the long-term clinical observation, 5 of the 206 veneers fractured or fell off. Porcelain laminate veneers fabricated from heat-pressed IPS e.max Press ingots offer several advantages: a simple operating procedure, high mechanical strength, minimal removal of dental tissue, and a good esthetic result. Ultra-thin veneers are especially suitable for the esthetic treatment of dental fluorosis, light tetracycline staining, and naturally worn teeth.

  11. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768

  12. Lead nitrate induced unallied expression of liver and kidney functions in male albino rats.

    PubMed

    Chougule, Priti; Patil, Bhagyashree; Kanase, Aruna

    2005-06-01

    To determine the effects of lead in the organs where it accumulates most (liver, followed by kidney), liver and kidney functions were studied using a low oral dose of lead nitrate for a prolonged duration. A dose of 20 mg lead nitrate/kg body wt/day was used in male albino rats. AST and ALT levels altered independently: while ALT remained unaltered after 7 and 21 days of treatment, it decreased by 13.21% after 14 days of treatment. AST was marginally lowered after 7 days, increased after 14 days, and increased marginally after 21 days. Bilirubin (conjugated, unconjugated, and total) decreased after 7 and 14 days and increased after 21 days. The increase in urea was directly proportional to treatment duration. Creatinine remained unaltered.

  13. Estimation of Contextual Effects through Nonlinear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Yang, Ji Seung; Cai, Li

    2014-01-01

    The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of a nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…
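
    The Robbins-Monro half of MH-RM is a stochastic-approximation update with decaying gains; the toy below climbs a one-parameter Gaussian log-likelihood from noisy single-observation gradients, which is far simpler than the latent-variable setting of the paper.

      import numpy as np

      # Minimal Robbins-Monro sketch: theta_{k+1} = theta_k + (1/k) * noisy gradient.
      # Toy target: the mean of N(theta_true, 1); the log-likelihood gradient at a
      # single sampled observation x is (x - theta).
      rng = np.random.default_rng(2)
      theta_true, theta = 1.5, 0.0
      for k in range(1, 2001):
          x = rng.normal(theta_true, 1.0)   # stochastic draw (stand-in for MH imputation)
          noisy_grad = x - theta            # unbiased gradient estimate
          theta += (1.0 / k) * noisy_grad   # Robbins-Monro gain sequence
      print(round(theta, 2))                # approaches 1.5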

  14. Accuracy and Variability of Item Parameter Estimates from Marginal Maximum a Posteriori Estimation and Bayesian Inference via Gibbs Samplers

    ERIC Educational Resources Information Center

    Wu, Yi-Fang

    2015-01-01

    Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…

  15. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm. CRESST Report 834

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2013-01-01

    In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…

  16. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2014-01-01

    In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…

  17. Element Load Data Processor (ELDAP) Users Manual

    NASA Technical Reports Server (NTRS)

    Ramsey, John K., Jr.; Ramsey, John K., Sr.

    2015-01-01

    Often, the shear and tensile forces and moments are extracted from finite element analyses to be used in off-line calculations for evaluating the integrity of structural connections involving bolts, rivets, and welds. Usually the maximum forces and moments are desired for use in the calculations. In situations where there are numerous structural connections of interest for numerous load cases, finding the true maximum force and/or moment combinations among all fasteners, welds, and load cases becomes difficult. The Element Load Data Processor (ELDAP) software described herein makes this effort manageable. This software eliminates the possibility of overlooking the worst-case forces and moments that could result in erroneous positive margins of safety and/or selecting inconsistent combinations of forces and moments resulting in false negative margins of safety. In addition to forces and moments, any scalar quantity output in a PATRAN report file may be evaluated with this software. This software was originally written to fill an urgent need during the structural analysis of the Ares I-X Interstage segment. As such, this software was coded in a straightforward manner with no effort made to optimize or minimize code or to develop a graphical user interface.
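
    The core bookkeeping ELDAP automates can be sketched as a scan for per-connection worst cases; the CSV layout and column names below are hypothetical stand-ins, not ELDAP's actual PATRAN report format.

      import csv
      from collections import defaultdict

      def worst_cases(path):
          """Track, per element, the load case giving the largest magnitude of
          each quantity, so no worst-case combination is overlooked.
          Expects hypothetical columns: element, load_case, shear, tension, moment."""
          worst = defaultdict(dict)
          with open(path, newline="") as f:
              for row in csv.DictReader(f):
                  elem = row["element"]
                  for q in ("shear", "tension", "moment"):
                      val = float(row[q])
                      best = worst[elem].get(q)
                      if best is None or abs(val) > abs(best[0]):
                          worst[elem][q] = (val, row["load_case"])
          return worst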

  18. JointMMCC: Joint Maximum-Margin Classification and Clustering of Imaging Data

    PubMed Central

    Filipovych, Roman; Resnick, Susan M.; Davatzikos, Christos

    2012-01-01

    A number of conditions are characterized by pathologies that form continuous or nearly-continuous spectra spanning from the absence of pathology to very pronounced pathological changes (e.g., normal aging, Mild Cognitive Impairment, Alzheimer's). Moreover, diseases are often highly heterogeneous with a number of diagnostic subcategories or subconditions lying within the spectra (e.g., Autism Spectrum Disorder, schizophrenia). Discovering coherent subpopulations of subjects within the spectrum of pathological changes may further our understanding of diseases, and potentially identify subconditions that require alternative or modified treatment options. In this paper, we propose an approach that aims at identifying coherent subpopulations with respect to the underlying MRI in the scenario where the condition is heterogeneous and pathological changes form a continuous spectrum. We describe a Joint Maximum-Margin Classification and Clustering (JointMMCC) approach that jointly detects the pathologic population via semi-supervised classification, as well as disentangles heterogeneity of the pathological cohort by solving a clustering subproblem. We propose an efficient solution to the non-convex optimization problem associated with JointMMCC. We apply our proposed approach to an MRI study of aging, and identify coherent subpopulations (i.e., clusters) of cognitively less stable adults. PMID:22328179
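
    The maximum-margin building block of JointMMCC is an ordinary linear SVM; the sketch below fits one to synthetic 2-D data with scikit-learn, and deliberately omits the paper's joint clustering term and non-convex solver.

      import numpy as np
      from sklearn.svm import LinearSVC

      # Two synthetic groups standing in for "control" and "pathologic" cohorts.
      rng = np.random.default_rng(3)
      healthy = rng.normal([-1.0, -1.0], 0.7, size=(100, 2))
      patho = rng.normal([1.0, 1.0], 0.7, size=(100, 2))
      X = np.vstack([healthy, patho])
      y = np.array([0] * 100 + [1] * 100)

      # Maximum-margin linear classifier (soft margin, C = 1).
      clf = LinearSVC(C=1.0).fit(X, y)
      print(clf.coef_, clf.intercept_)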

  19. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
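
    The weight pathology described above follows directly from how information-criterion weights are computed; in the standard formula below (with invented IC values), a separation of a dozen units already pushes essentially all weight onto one model.

      import numpy as np

      def ic_weights(ic_values):
          """Model-averaging weights from information-criterion values
          (AIC/AICc/BIC/KIC): w_i proportional to exp(-Delta_i / 2),
          where Delta_i = IC_i - min(IC)."""
          ic = np.asarray(ic_values, dtype=float)
          delta = ic - ic.min()
          w = np.exp(-delta / 2.0)
          return w / w.sum()

      # hypothetical KIC values for four alternative conceptual models
      print(ic_weights([130.2, 145.0, 158.5, 172.3]).round(3))  # ~[0.999, 0.001, 0, 0]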

  20. Preclinical evaluation of nuclear morphometry and tissue topology for breast carcinoma detection and margin assessment

    PubMed Central

    Nyirenda, Ndeke; Farkas, Daniel L.

    2010-01-01

    Prevention and early detection of breast cancer are the major prophylactic measures taken to reduce breast cancer-related mortality and morbidity. Clinical management of breast cancer largely relies on the efficacy of breast-conserving surgeries and the subsequent radiation therapy. A key problem that limits the success of these surgeries is the lack of accurate, real-time knowledge about the positive tumor margins in the surgically excised tumors in the operating room. This leads to tumor recurrence and, hence, the need for repeated surgeries. Current intraoperative techniques such as frozen section pathology or touch imprint cytology severely suffer from poor sampling and non-optimal detection sensitivity. Even though histopathology analysis can provide information on positive tumor margins post-operatively (~2–3 days), this information is of no immediate utility in the operating rooms. In this article, we propose a novel image analysis method for tumor margin assessment based on nuclear morphometry and tissue topology and demonstrate its high sensitivity/specificity in a preclinical animal model of breast carcinoma. The method relies on imaging nuclear-specific fluorescence in the excised surgical specimen and on extracting nuclear morphometric parameters (size, number, and area fraction) from the spatial distribution of the observed fluorescence in the tissue. We also report the utility of tissue topology in tumor margin assessment by measuring the fractal dimension in the same set of images. By a systematic analysis of multiple breast tissue specimens, we show here that the proposed method is not only accurate (~97% sensitivity and 96% specificity) in thin sections, but also in three-dimensional (3D) thick tissues that mimic realistic lumpectomy specimens. Our data clearly preclude the utility of nuclear size as a reliable diagnostic criterion for tumor margin assessment. On the other hand, nuclear area fraction addresses this issue very effectively since it is a combination of both nuclear size and count in any given region of the analyzed image, and thus yields high sensitivity and specificity (~97%) in tumor detection. This is further substantiated by an independent parameter, fractal dimension, based on the tissue topology. Although the basic definition of cancer as an uncontrolled cell growth entails a high nuclear density in tumor regions, a simple but systematic exploration of nuclear distribution in thick tissues by nuclear morphometry and tissue topology as performed in this study has never been carried out, to the best of our knowledge. We discuss the practical aspects of implementing this imaging approach in an automated tissue-sampling scenario, where the accuracy of tumor margin assessment can be significantly increased by scanning the entire surgical specimen rather than sampling only a few sections as in current histopathology analysis. PMID:20446030

  1. Energy Approach-Based Simulation of Structural Materials High-Cycle Fatigue

    NASA Astrophysics Data System (ADS)

    Balayev, A. F.; Korolev, A. V.; Kochetkov, A. V.; Sklyarova, A. I.; Zakharov, O. V.

    2016-02-01

    The paper describes the mechanism of micro-crack development in solid structural materials based on the theory of brittle fracture. A probability function of the energy distribution of material cracks is obtained using a probabilistic approach. The paper states energy conditions for crack growth under high-cycle loading of the material. A formula for calculating the amount of energy absorbed during crack growth is given. The paper proposes a high-cycle fatigue evaluation criterion for determining the maximum permissible number of loading cycles of a solid body, beyond which micro-cracks start growing rapidly up to destruction.

  2. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
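
    A minimal simulation of a periodic AR(1), with the autoregressive coefficient written as a one-harmonic Fourier form like the parameter expansions selected in PARMA fitting, is sketched below; the coefficients are invented.

      import numpy as np

      # Toy periodic AR(1): phi varies with the month via one Fourier harmonic.
      rng = np.random.default_rng(4)
      months = np.arange(360) % 12                        # 30 years of monthly data
      phi = 0.5 + 0.3 * np.cos(2 * np.pi * months / 12.0)
      x = np.zeros(months.size)
      for t in range(1, months.size):
          x[t] = phi[t] * x[t - 1] + rng.standard_normal()
      print(x[:6].round(2))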

  3. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good detection performance. In addition, the method is applied to a financial time series sampled on a daily basis, and the detected sampling rate is shown to differ from the conventional rate.

  4. The One-Meter Criterion for Tsunami Warning: Time for a Reevaluation?

    NASA Astrophysics Data System (ADS)

    Fryer, G. J.; Weinstein, S.

    2013-12-01

    The U.S. tsunami warning centers issue warnings when runup is anticipated to exceed one meter. The origins of the one-meter criterion are unclear, though Whitmore et al. (2008) showed from tsunami history that one meter is roughly the threshold above which damage occurs. Recent experiences in Hawaii, however, suggest that the threshold could be raised. Tsunami Warnings were issued for the 2010 Chile, 2011 Tohoku, and 2012 Haida Gwaii tsunamis; each exceeded one meter of runup somewhere in the State. Evacuation, however, was necessary only in 2011, and even then onshore damage (as opposed to damage from currents) occurred only where runup exceeded 1.5 m. During both the Chile and Haida Gwaii tsunamis the existing criteria led to unnecessary evacuation. Maximum runup during the Chile tsunami was 1.1 m at Hilo's Wailoa Boat Harbor, while the Haida Gwaii tsunami peaked at 1.2 m at Honouliwai Bay on Molokai. Both tsunamis caused only minor damage and minimal flooding; in both cases a Tsunami Advisory (i.e., there is no need to evacuate, but stay off the beach and out of the water) would have been adequate. The Advisory was originally developed as an ad hoc response to the mildly threatening 2006 Kuril tsunami and has since been formalized as the product we issue when maximum runup is expected to be 0.3-1.0 m. At the time it was introduced, however, there was no discussion that this new low-level warning might allow the criterion for the Tsunami Warning itself to be adjusted. We now suggest that the divide between Advisory and Warning be raised from 1.0 m to something greater, possibly 1.2 m. If the warning threshold were raised to 1.2 m, the over-warning for the Chile tsunami still could not have been avoided: models calibrated against DART data consistently forecast runup just over 1.2 m for that event. For Haida Gwaii, adjusting the models to match the DART data increased the forecast runup to almost 2 m, which again meant a warning, though in retrospect we should have been skeptical. The nearest DART to Haida Gwaii was off the Washington coast, in line with the long axis (strike direction) of the rupture, and so provided little constraint on the tsunami directed towards Hawaii (the dip direction). The finite fault model obtained by inverting the DART data extended the rupture too far along strike and pushed the rupture to the wrong (east) side of Haida Gwaii, in conflict with the W-phase CMT. The inferred wave height at the Langara Point tide gauge, just outside the epicentral region, was also too large by a factor of two. Forcing the tsunami inversion to be consistent with the CMT would have rendered the inferred rupture much closer to reality, matched the Langara Point record well, and forecast a maximum runup at Kahului of only 1.0 m (the actual runup there was 0.8 m). If the warning criterion had been 1.2 m, the unnecessary coastal evacuation for the Haida Gwaii tsunami could have been avoided. So increasing the warning threshold by only 20 cm would eliminate one of the two recent unnecessary evacuations. Can the threshold be raised even more? We are considering that possibility, though the uncertainties and time constraints of an actual warning demand that we remain very conservative.

  5. Comparative evaluation of marginal leakage of provisional crowns cemented with different temporary luting cements: In vitro study.

    PubMed

    Arora, Sheen Juneja; Arora, Aman; Upadhyaya, Viram; Jain, Shilpi

    2016-01-01

    As the longevity of provisional restorations is related to a perfect adaptation and a strong, long-term union between the restoration and the tooth structure, evaluation of the marginal leakage of provisional restorative materials luted with cements using standardized procedures is essential. The objectives were: to compare the marginal leakage of provisional crowns fabricated from autopolymerizing acrylic resin and bisphenol A-glycidyl dimethacrylate (BIS-GMA) resin; to compare the marginal leakage of provisional crowns fabricated from autopolymerizing acrylic resin and BIS-GMA resin cemented with different temporary luting cements; to compare the marginal leakage of provisional crowns fabricated from autopolymerizing acrylic resin (SC-10) cemented with different temporary luting cements; and to compare the marginal leakage of provisional crowns fabricated from BIS-GMA resin (Protemp 4) cemented with different temporary luting cements. Sixty freshly extracted maxillary premolars of approximately similar dimensions were mounted in dental plaster. Tooth reduction with a shoulder margin was carried out using a customized handpiece-holding jig. Following tooth preparation, provisional crowns were prepared using wax patterns fabricated on a computer-aided design/computer-aided manufacturing milling machine. Sixty provisional crowns were made, thirty each of SC-10 and Protemp 4, and were then cemented with three different luting cements. Specimens were thermocycled, submerged in a 2% methylene blue solution, then sectioned and observed under a stereomicroscope for the evaluation of marginal microleakage. A five-level scale was used to score dye penetration at the tooth/cement interface, and the results were analyzed using the Chi-square test, Mann-Whitney U-test, and Kruskal-Wallis H-test (P < 0.05; power of the study: 80%). Marginal leakage was significant in both types of provisional crowns cemented with the three different luting cements along the axial walls of the teeth (P < 0.05; 95% confidence interval). The temporary cements with eugenol showed more microleakage than those without eugenol. SC-10 crowns showed more microleakage than Protemp 4 crowns. SC-10 crowns cemented with Kalzinol showed the maximum microleakage, and Protemp 4 crowns cemented with HY bond showed the least microleakage.

  6. Organochlorine pesticide residues in ground water of Thiruvallur district, India.

    PubMed

    Jayashree, R; Vasudevan, N

    2007-05-01

    Modern agricultural practices reveal an increased use of pesticides and fertilizers to meet the food demand of an increasing population, which results in contamination of the environment. In India crop production has increased by 100%, while the cropping area has increased only marginally, by 20%. Pesticides have played a major role in achieving maximum crop production, but heavy usage and the accumulation of pesticide residues are highly detrimental to aquatic and other ecosystems. The present study was undertaken to determine the level of organochlorine contamination in the ground water of Thiruvallur district, Tamil Nadu, India. The samples were highly contaminated with DDT, HCH, endosulfan, and their derivatives. Among the HCH derivatives, gamma-HCH residues reached a maximum of 9.8 microg/l in Arumbakkam open wells. Concentrations of pp-DDT and op-DDT were 14.3 microg/l and 0.8 microg/l, respectively. The maximum residue (15.9 microg/l) of endosulfan sulfate was recorded in a Kandigai village bore well. The study showed that the ground water samples were highly contaminated with organochlorine residues.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truong, Pauline T., E-mail: ptruong@bccancer.bc.ca; Breast Cancer Outcomes Unit, British Columbia Cancer Agency, Vancouver Island Centre, University of British Columbia, Victoria, BC; Sadek, Betro T.

    Purpose: To examine locoregional and distant recurrence (LRR and DR) in women with pT1-2N0 breast cancer according to approximated subtype and clinicopathologic characteristics. Methods and Materials: Two independent datasets were pooled and analyzed. The study participants were 1994 patients with pT1-2N0M0 breast cancer, treated with mastectomy without radiation therapy. The patients were classified into 1 of 5 subtypes: luminal A (ER+ or PR+/HER 2−/grade 1-2, n=1202); luminal B (ER+ or PR+/HER 2−/grade 3, n=294); luminal HER 2 (ER+ or PR+/HER 2+, n=221); HER 2 (ER−/PR−/HER 2+, n=105) and triple-negative breast cancer (TNBC) (ER−/PR−/HER 2−, n=172). Results: The median follow-up time was 4.3 years. The 5-year Kaplan-Meier (KM) LRR were 1.8% in luminal A, 3.1% in luminal B, 1.7% in luminal HER 2, 1.9% in HER 2, and 1.9% in TNBC cohorts (P=.81). The 5-year KM DR was highest among women with TNBC: 1.8% in luminal A, 5.0% in luminal B, 2.4% in luminal HER 2, 1.1% in HER 2, and 9.6% in TNBC cohorts (P<.001). Among 172 women with TNBC, the 5-year KM LRR were 1.3% with clear margins versus 12.5% with close or positive margins (P=.04). On multivariable analysis, factors that conferred higher LRR risk were tumors >2 cm, lobular histology, and close/positive surgical margins. Conclusions: The 5-year risk of LRR in our pT1-2N0 cohort treated with mastectomy was generally low, with no significant differences observed between approximated subtypes. Among the subtypes, TNBC conferred the highest risk of DR and an elevated risk of LRR in the presence of positive or close margins. Our data suggest that although subtype alone cannot be used as the sole criterion to offer postmastectomy radiation therapy, it may reasonably be considered in conjunction with other clinicopathologic factors including tumor size, histology, and margin status. Larger cohorts and longer follow-up times are needed to define which women with node-negative disease have high postmastectomy LRR risks in contemporary practice.

  8. Biofuels, land, and water: a systems approach to sustainability.

    PubMed

    Gopalakrishnan, Gayathri; Negri, M Cristina; Wang, Michael; Wu, May; Snyder, Seth W; Lafreniere, Lorraine

    2009-08-01

    There is a strong societal need to evaluate and understand the sustainability of biofuels, especially because of the significant increases in production mandated by many countries, including the United States. Sustainability will be a strong factor in the regulatory environment and investments in biofuels. Biomass feedstock production is an important contributor to environmental, social, and economic impacts from biofuels. This study presents a systems approach where the agricultural, energy, and environmental sectors are considered as components of a single system, and environmental liabilities are used as recoverable resources for biomass feedstock production. We focus on efficient use of land and water resources. We conducted a spatial analysis evaluating marginal land and degraded water resources to improve feedstock productivity with concomitant environmental restoration for the state of Nebraska. Results indicate that utilizing marginal land resources such as riparian and roadway buffer strips, brownfield sites, and marginal agricultural land could produce enough feedstocks to meet a maximum of 22% of the energy requirements of the state compared to the current supply of 2%. Degraded water resources such as nitrate-contaminated groundwater and wastewater were evaluated as sources of nutrients and water to improve feedstock productivity. Spatial overlap between degraded water and marginal land resources was found to be as high as 96% and could maintain sustainable feedstock production on marginal lands. Other benefits of implementing this strategy include feedstock intensification to decrease biomass transportation costs, restoration of contaminated water resources, and mitigation of greenhouse gas emissions.

  9. Management of primary and metastasized melanoma in Germany in the time period 1976-2005: an analysis of the Central Malignant Melanoma Registry of the German Dermatological Society.

    PubMed

    Schwager, Silke S; Leiter, Ulrike; Buettner, Petra G; Voit, Christiane; Marsch, Wolfgang; Gutzmer, Ralf; Näher, Helmut; Gollnick, Harald; Bröcker, Eva Bettina; Garbe, Claus

    2008-04-01

    This study analysed the changes of excision margins in correlation with tumour thickness as recorded over the last three decades in Germany. The study also evaluated surgical management in different geographical regions and treatment options for metastasized melanoma. A total of 42 625 patients with invasive primary cutaneous melanoma, recorded by the German Central Malignant Melanoma Registry between 1976 and 2005, were included. Multiple linear regression analysis was used to investigate time trends of excision margins adjusted for tumour thickness. Excision margins of 5.0 cm were widely used in the late 1970s but have since been replaced by smaller margins that depend on tumour thickness. For primary melanoma, one-step surgery dominated until 1985 and has been mostly replaced by two-step excisions since the early 1990s. In eastern Germany, one-step management remained common until the late 1990s. During the last three decades loco-regional metastases were predominantly treated by surgery (up to 80%), whereas systemic therapy decreased. The primary treatment of distant metastases has consistently been systemic chemotherapy. This descriptive retrospective study revealed a significant decrease in excision margins to a maximum of 2.00 cm. A significant trend towards two-step excisions in primary cutaneous melanoma was observed throughout Germany. Management of metastasized melanoma showed a tendency towards surgical procedures in limited disease and an ongoing trend towards systemic treatment in advanced disease.

  10. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
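
    The Gauss-Hermite device mentioned above integrates a function against a normal density by a weighted sum over fixed nodes; a minimal sketch follows, with an invented probit-type integrand standing in for the Tobit likelihood contribution (scipy is assumed available).

      import numpy as np
      from scipy.stats import norm

      def gh_expectation(g, sigma, n_nodes=15):
          """Approximate E[g(b)] for b ~ N(0, sigma^2) by Gauss-Hermite quadrature:
          substitute b = sqrt(2)*sigma*z so the normal integral matches the
          exp(-z^2) Gauss-Hermite weight."""
          z, w = np.polynomial.hermite.hermgauss(n_nodes)
          return np.sum(w * g(np.sqrt(2.0) * sigma * z)) / np.sqrt(np.pi)

      # e.g. E[Phi(x*beta + b)] for a random intercept b with sigma = 1
      print(round(gh_expectation(lambda b: norm.cdf(0.3 + b), sigma=1.0), 4))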

  11. ORNL Evaluation of Electrabel Safety Cases for Doel 3 / Tihange 2: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, Bennett Richard; Dickson, Terry L.; Gorti, Sarma B.

    Oak Ridge National Laboratory (ORNL) performed a detailed technical review of the 2015 Electrabel (EBL) Safety Cases prepared for the Belgian reactor pressure vessels (RPVs) at Doel 3 and Tihange 2 (D3/T2). The Federal Agency for Nuclear Control (FANC) in Belgium commissioned ORNL to provide a thorough assessment of the existing safety margins against cracking of the RPVs due to the presence of almost laminar flaws found in each RPV. Initial efforts focused on surveying relevant literature that provided necessary background knowledge on the issues related to the quasi-laminar flaws observed in the D3/T2 reactors. Next, ORNL proceeded to develop an independent quantitative assessment of the entire flaw population in the two Belgian reactors according to the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI, Appendix G, Fracture Toughness Criteria for Protection Against Failure, New York (1992 and 2004). That screening assessment of all EBL-characterized flaws in D3/T2 used ORNL tools, methodologies, and the ASME Code Case N-848, Alternative Characterization Rules for Quasi-Laminar Flaws. Results and conclusions from the ORNL flaw acceptance assessments of D3/T2 were compared with those from the 2015 EBL Safety Cases. Specific findings of the ORNL evaluation of that part of the EBL structural integrity assessment focusing on stability of the flaw population subjected to primary design transients include the following: ORNL's analysis results were similar to those of EBL in that very few characterized flaws were found not compliant with the ASME (1992) acceptance criterion. ORNL's application of the more recent ASME Section XI (2004) produced only four noncompliant flaws, all due to LOCAs. The finding of a greater number of non-compliant flaws in the EBL screening assessment is due principally to a significantly more restrictive (conservative) criterion for flaw size acceptance used by EBL. ORNL's screening assessment results (obtained using an analysis methodology different from that of EBL) are interpreted herein as confirming the EBL screening results for D3/T2. ORNL's independent refined analysis demonstrated that the EBL-characterized flaw 1660, which is non-compliant in the ORNL and EBL screening assessments, is rendered compliant when modeled as a more realistic individual quasi-laminar flaw using a 3-D XFEM analysis approach. ORNL's and EBL's refined analyses are in good agreement for the flaw 1660 close to the clad/base metal interface; ORNL is not persuaded that repeating this exercise for more than one non-compliant flaw is necessary to accept the EBL conclusions derived from the aggregate of EBL refined analysis results. ORNL's general conclusions regarding the Structural Integrity Assessment (SIA) conducted by EBL for D3/T2: based on comparative evaluations of ORNL and EBL SIA analyses and on consideration of other results, ORNL is in agreement with the general conclusions reported by Electrabel in their RPV D3/T2 Technical Summary Note of April 14, 2015. More than 99 percent of flaws in D3/T2 meet the defined screening criterion, rendering them benign with respect to initiation in the event of a design transient. Refined analyses of non-compliant flaws from the screening assessment indicate that only 11 of the 16196 detected flaws have a critical reference-temperature material index (designated RTNDT) that implies the possibility of the initiation of cleavage fracture at some future time.
For those 11 flaws, the calculated margin in RTNDT (a measure of acceptable embrittlement relative to end-of-service-life conditions) is significant, being greater than 80°C. Fatigue crack growth is not a concern in the flaw-acceptability analyses. Primary stress re-evaluation confirms that the collapse pressure is more than 1.5 times the design pressure in the presence of the defects detected in D3/T2. Sufficient conservatisms are built into the input data and into the different steps of the SIA; in some cases, those conservatisms are quantified and imply that additional margins exist in the SIA. Taken as a whole, the foregoing results and conclusions confirm the structural integrity of Doel 3 and Tihange 2 under all design transients, with ample margin, in the presence of the 16196 detected flaws.

  12. Development of CO2 laser Doppler instrumentation for detection of clear air turbulence, volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Harris, C. E.; Jelalian, A. V.

    1979-01-01

    Analyses of the mounting and mount support systems of the clear air turbulence transmitters verify that satisfactory shock and vibration isolation is attained. The mount support structure conforms to flight crash safety requirements with high margins of safety. Restraint cables reinforce the mounts in the critically loaded forward direction, limiting maximum forward system deflection to 1 1/4 inches.

  13. A re-appraisal of the stratigraphy and volcanology of the Cerro Galán volcanic system, NW Argentina

    USGS Publications Warehouse

    Folkes, Christopher B.; Wright, Heather M.; Cas, Ray A.F.; de Silva, Shanaka L.; Lesti, Chiara; Viramonte, Jose G.

    2011-01-01

    From detailed fieldwork and biotite 40Ar/39Ar dating correlated with paleomagnetic analyses of lithic clasts, we present a revision of the stratigraphy, areal extent and volume estimates of ignimbrites in the Cerro Galán volcanic complex. We find evidence for nine distinct outflow ignimbrites, including two newly identified ignimbrites in the Toconquis Group (the Pitas and Vega Ignimbrites). Toconquis Group Ignimbrites (~5.60–4.51 Ma biotite ages) have been discovered to the southwest and north of the caldera, increasing their spatial extents from previous estimates. Although previously thought to be contemporaneous, the Real Grande Ignimbrite (4.68 ± 0.07 Ma biotite age) is here distinguished from the Cueva Negra Ignimbrite (3.77 ± 0.08 Ma biotite age). The form and collapse processes of the Cerro Galán caldera are also reassessed. Based on re-interpretation of the margins of the caldera, we find evidence for a fault-bounded trapdoor collapse hinged along a regional N-S fault on the eastern side of the caldera and accommodated on a N-S fault on the western caldera margin. The collapsed area defines a roughly isosceles trapezoid shape elongated E-W and with maximum dimensions 27 × 16 km. The Cerro Galán Ignimbrite (CGI; 2.08 ± 0.02 Ma sanidine age) outflow sheet extends to 40 km in all directions from the inferred structural margins, with a maximum runout distance of ~80 km to the north of the caldera. New deposit volume estimates confirm an increase in eruptive volume through time, wherein the Toconquis Group Ignimbrites increase in volume from the ~10 km3 Lower Merihuaca Ignimbrite to a maximum of ~390 km3 (Dense Rock Equivalent; DRE) with the Real Grande Ignimbrite. The climactic CGI has a revised volume of ~630 km3 (DRE), approximately two thirds of the commonly quoted value.

  14. Essays in the California electricity reserves markets

    NASA Astrophysics Data System (ADS)

    Metaxoglou, Konstantinos

    This dissertation examines inefficiencies in the California electricity reserves markets. In Chapter 1, I use the information released during the investigation of the state's electricity crisis of 2000 and 2001 by the Federal Energy Regulatory Commission to diagnose allocative inefficiencies. Building upon the work of Wolak (2000), I calculate a lower bound for the sellers' price-cost margins using the inverse elasticities of their residual demand curves. The downward bias in my estimates stems from the fact that I don't account for the hierarchical substitutability of the reserve types. The margins averaged at least 20 percent for the two highest quality types of reserves, regulation and spinning, generating millions of dollars in transfers to a handful of sellers. I provide evidence that the deviations from marginal cost pricing were due to the markets' high concentration and a principal-agent relationship that emerged from their design. In Chapter 2, I document systematic differences between the markets' day- and hour-ahead prices. I use a high-dimensional vector moving average model to estimate the premia and conduct correct inferences. To obtain exact maximum likelihood estimates of the model, I employ the EM algorithm that I develop in Chapter 3. I uncover significant day-ahead premia, which I attribute to market design characteristics too. On the demand side, the market design established a principal-agent relationship between the markets' buyers (principal) and their supervisory authority (agent). The agent had very limited incentives to shift reserve purchases to the lower priced hour-ahead markets. On the supply side, the market design raised substantial entry barriers by precluding purely speculative trading and by introducing a complicated code of conduct that induced uncertainty about which actions were subject to regulatory scrutiny. In Chapter 3, I introduce a state-space representation for vector autoregressive moving average models that enables exact maximum likelihood estimation using the EM algorithm. Moreover, my algorithm uses only analytical expressions; it requires the Kalman filter and a fixed-interval smoother in the E step and least squares-type regression in the M step. In contrast, existing maximum likelihood estimation methods require numerical differentiation, both for univariate and multivariate models.
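    The margin diagnostic described above rests on the standard Lerner-index logic; for orientation, a generic statement of that relation (symbols chosen here for illustration, not taken from the dissertation):

        % Lerner index for a seller facing residual demand D_R(p): at a
        % profit-maximizing price p*, the price-cost margin is bounded by
        % the inverse elasticity of residual demand.
        \[
          \frac{p^{*} - c}{p^{*}} \;=\; \frac{1}{\left|\varepsilon_{R}\right|},
          \qquad
          \varepsilon_{R} \;=\; \frac{\partial D_{R}}{\partial p}\,\frac{p}{D_{R}}
        \]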

  15. Biophysics, environmental stochasticity, and the evolution of thermal safety margins in intertidal limpets.

    PubMed

    Denny, M W; Dowd, W W

    2012-03-15

    As the air temperature of the Earth rises, ecological relationships within a community might shift, in part due to differences in the thermal physiology of species. Prediction of these shifts - an urgent task for ecologists - will be complicated if thermal tolerance itself can rapidly evolve. Here, we employ a mechanistic approach to predict the potential for rapid evolution of thermal tolerance in the intertidal limpet Lottia gigantea. Using biophysical principles to predict body temperature as a function of the state of the environment, and an environmental bootstrap procedure to predict how the environment fluctuates through time, we create hypothetical time-series of limpet body temperatures, which are in turn used as a test platform for a mechanistic evolutionary model of thermal tolerance. Our simulations suggest that environmentally driven stochastic variation of L. gigantea body temperature results in rapid evolution of a substantial 'safety margin': the average lethal limit is 5-7°C above the average annual maximum temperature. This predicted safety margin approximately matches that found in nature, and once established is sufficient, in our simulations, to allow some limpet populations to survive a drastic, century-long increase in air temperature. By contrast, in the absence of environmental stochasticity, the safety margin is dramatically reduced. We suggest that the risk of exceeding the safety margin, rather than the absolute value of the safety margin, plays an underappreciated role in the evolution of thermal tolerance. Our predictions are based on a simple, hypothetical, allelic model that connects genetics to thermal physiology. To move beyond this simple model - and thereby potentially to predict differential evolution among populations and among species - will require significant advances in our ability to translate the details of thermal histories into physiological and population-genetic consequences.
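    A minimal sketch of the safety-margin arithmetic described above, with made-up temperatures standing in for the environmental-bootstrap output (all numbers are hypothetical, not the study's):

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical stand-in for the environmental bootstrap: annual
        # maximum body temperatures (deg C) with year-to-year stochasticity.
        mean_annual_max = 30.0   # assumed mean annual maximum body temperature
        annual_max = mean_annual_max + rng.gumbel(0.0, 1.5, size=100)

        lethal_limit = 36.5      # assumed population-average lethal limit

        # Safety margin as defined in the abstract: lethal limit minus the
        # average annual maximum temperature (predicted range is ~5-7 deg C).
        safety_margin = lethal_limit - annual_max.mean()

        # Risk of exceedance: fraction of years whose maximum tops the limit.
        p_exceed = (annual_max > lethal_limit).mean()
        print(f"safety margin: {safety_margin:.1f} C, exceedance risk: {p_exceed:.2%}")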

  16. The role of tectonic inheritance in the morphostructural evolution of the Galicia continental margin and adjacent abyssal plains from digital bathymetric model (DBM) analysis (NW Spain)

    NASA Astrophysics Data System (ADS)

    Maestro, A.; Jané, G.; Llave, E.; López-Martínez, J.; Bohoyo, F.; Druet, M.

    2018-06-01

    The identification of recent major tectonic structures in the Galicia continental margin and adjacent abyssal plains was carried out by means of a quantitative analysis of the linear structures having bathymetric expression on the seabed. It was possible to identify about 5800 lineaments throughout the entire study area, of approximately 271,500 km2. Most lineaments are located in the Charcot and Coruña highs, in the western sector of the Galicia Bank, in the area of the Marginal Platforms and in the northern sector of the margin. Analysis of the lineament orientations shows a predominant NE-SW direction and three relative maximum directions: NW-SE, E-W and N-S. The total length of the lineaments identified is over 44,000 km, with a mode around 5000 m and an average length of about 7800 m. In light of different tectonic studies undertaken in the northwestern margin of the Iberian Peninsula, we establish that the lineaments obtained from analysis of the digital bathymetric model of the Galicia continental margin and adjacent abyssal plains would correspond to fracture systems. In general, the orientation of lineaments corresponds to main faults, tectonic structures following the directions of ancient faults that resulted from late stages of the Variscan orogeny and Mesozoic extension phases related to Triassic rifting and Upper Jurassic to Early Cretaceous opening of the North Atlantic Ocean. The N-S convergence between Eurasian and African plates since Palaeogene times until the Miocene, and NW-SE convergence from Neogene to present, reactivated the Variscan and Mesozoic fault systems and related physiography.

  17. Inter- and Intrafraction Uncertainty in Prostate Bed Image-Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kitty; Palma, David A.; Department of Oncology, University of Western Ontario, London

    2012-10-01

    Purpose: The goals of this study were to measure inter- and intrafraction setup error and prostate bed motion (PBM) in patients undergoing post-prostatectomy image-guided radiotherapy (IGRT) and to propose appropriate population-based three-dimensional clinical target volume to planning target volume (CTV-PTV) margins in both non-IGRT and IGRT scenarios. Methods and Materials: In this prospective study, 14 patients underwent adjuvant or salvage radiotherapy to the prostate bed under image guidance using linac-based kilovoltage cone-beam CT (kV-CBCT). Inter- and intrafraction uncertainty/motion was assessed by offline analysis of three consecutive daily kV-CBCT images of each patient: (1) after initial setup to skin marks, (2) after correction for positional error/immediately before radiation treatment, and (3) immediately after treatment. Results: The magnitude of interfraction PBM was 2.1 mm, and intrafraction PBM was 0.4 mm. The maximum inter- and intrafraction prostate bed motion was primarily in the anterior-posterior direction. Margins of at least 3-5 mm with IGRT and 4-7 mm without IGRT (aligning to skin marks) will ensure 95% of the prescribed dose to the clinical target volume in 90% of patients. Conclusions: PBM is a predominant source of intrafraction error compared with setup error and has implications for appropriate PTV margins. Based on inter- and estimated intrafraction motion of the prostate bed using pre- and post-treatment kV-CBCT images, CBCT IGRT to correct for day-to-day variances can potentially reduce CTV-PTV margins by 1-2 mm. CTV-PTV margins for prostate bed treatment in the IGRT and non-IGRT scenarios are proposed; however, in cases with more uncertainty of target delineation and image guidance accuracy, larger margins are recommended.
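    The abstract does not state which margin recipe underlies the proposed 3-5 mm and 4-7 mm values; one widely used population-based recipe with the same 95%-dose/90%-of-patients target is the van Herk formula, sketched here with illustrative (not the study's) error values:

        # A common population-based CTV-to-PTV margin recipe (van Herk et al.):
        # M = 2.5 * Sigma + 0.7 * sigma, where Sigma is the SD of systematic
        # errors and sigma the SD of random errors, per axis. Shown only for
        # orientation; the study may have used a different recipe.

        def van_herk_margin(systematic_sd_mm, random_sd_mm):
            """Margin (mm) ensuring 95% minimum CTV dose for 90% of patients."""
            return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

        # Illustrative error estimates in the anterior-posterior direction:
        print(f"{van_herk_margin(1.0, 2.0):.1f} mm")   # -> 3.9 mm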

  18. Assessment of Marginal Adaptation and Sealing Ability of Root Canal Sealers: An in vitro Study.

    PubMed

    Remy, Vimal; Krishnan, Vineesh; Job, Tisson V; Ravisankar, Madhavankutty S; Raj, C V Renjith; John, Seena

    2017-12-01

    This study aims to compare the marginal adaptation and sealing ability of three root canal sealers: mineral trioxide aggregate (MTA)-Fillapex, AH Plus, and Endofill. The study included 45 extracted single-rooted mandibular premolar teeth with a single canal and complete root formation. The samples were sectioned at the cementoenamel junction using a low-speed diamond disc, and the root canals were prepared manually with the step-back technique. The 45 teeth were distributed among the three experimental sealer groups. Under a scanning electron microscope (SEM), the marginal gap at the sealer-root dentin interface was examined in the coronal and apical halves of the root canal. Among the three sealers, the maximum marginal adaptation was seen with AH Plus (4.10 ± 0.10), followed by Endofill (1.44 ± 0.18) and MTA-Fillapex (0.80 ± 0.22). A statistically significant difference between coronal and apical marginal adaptation (p = 0.001) was seen for the AH Plus sealer. Mann-Whitney U-tests comparing MTA-Fillapex vs AH Plus and AH Plus vs Endofill showed statistically significant differences (p < 0.05) between groups at both the coronal and apical thirds. The present study indicates that the AH Plus sealer has better marginal adaptation than the other sealers tested. Sealers play an important role in sealing the space between the crown wall and the main cone in root canal treatment; they are also used to fill voids and irregularities in the root canal, secondary and lateral canals, and the spaces between applied gutta-percha cones, and also act as a tripper during filling.
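    A minimal sketch of the pairwise test used in the study, with synthetic gap values standing in for the real measurements:

        from scipy.stats import mannwhitneyu

        # Illustrative (synthetic) marginal-gap scores, not the study's data.
        ah_plus  = [4.0, 4.1, 4.2, 4.1, 4.0, 4.2]
        endofill = [1.2, 1.5, 1.4, 1.6, 1.3, 1.5]

        # Two-sided Mann-Whitney U-test, as used for the pairwise
        # comparisons between sealer groups.
        stat, p = mannwhitneyu(ah_plus, endofill, alternative="two-sided")
        print(f"U = {stat}, p = {p:.4f}")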

  19. Detecting coached neuropsychological dysfunction: a simulation experiment regarding mild traumatic brain injury.

    PubMed

    Lau, Lily; Basso, Michael R; Estevis, Eduardo; Miller, Ashley; Whiteside, Douglas M; Combs, Dennis; Arentsen, Timothy J

    2017-11-01

    Performance validity tests (PVTs) and symptom validity tests (SVTs) are often administered during neuropsychological evaluations. Examinees may be coached to avoid detection by measures of response validity. Relatively little research has evaluated whether graduated levels of coaching have differential effects upon PVT and SVT performance. Accordingly, the present experiment evaluated the effect of graduated levels of coaching upon the classification accuracy of commonly used PVTs and SVTs and the currently accepted criterion of failing two or more PVTs or SVTs. Participants simulated symptoms associated with mild traumatic brain injury (TBI). One group was provided superficial information concerning cognitive, emotional, and physical symptoms. Another group was provided detailed information about such symptoms. A third group was provided detailed information about symptoms and guidance on how to evade detection by PVTs. These groups were compared to an honest-responding group. Extending prior experiments, stand-alone and embedded PVT measures were administered in addition to SVTs. The three simulator groups were readily identified by PVTs and SVTs, but a meaningful minority of those provided test-taking strategies eluded detection. The Word Memory Test emerged as the most sensitive indicator of simulated mild TBI symptoms. PVTs achieved more sensitive detection of simulated head injury status than SVTs. Individuals coached to modify test-taking performance were marginally more successful in eluding detection by PVTs and SVTs than those coached with respect to TBI symptoms only. When the criterion of failing two or more PVTs or SVTs was applied, only 5% eluded detection.

  20. A new approach using coagulation rate constant for evaluation of turbidity removal

    NASA Astrophysics Data System (ADS)

    Al-Sameraiy, Mukheled

    2017-06-01

    Coagulation-flocculation-sedimentation processes for treating three levels of bentonite synthetic turbid water using date seed (DS) and alum (A) coagulants were investigated in previous research. In the current research, the same experimental results were used to develop a new approach based on the coagulation rate constant as a parameter for identifying the optimum doses of these coagulants. Moreover, the performance of these coagulants in meeting the World Health Organization (WHO) turbidity standard was assessed by introducing a new evaluation criterion in terms of a critical coagulation rate constant (kc). Coagulation rate constants (k2) were calculated for each coagulant from the second-order form of the coagulation process. The maximum (k2) values corresponded to the doses considered optimal. The proposed criterion for assessing coagulation performance, based on a mathematical representation of the WHO turbidity guidelines in second-order form, states that (k2) for each coagulant should be ≥ (kc) for each level of synthetic turbid water. For all tested turbid waters, the DS coagulant could not satisfy this criterion, while the A coagulant could. The results obtained in the present research agree with the previously published results in terms of finding optimum doses for each coagulant and assessing their performance. On the whole, it is recommended to consider the coagulation rate constant as a new indicator for identifying optimum doses and the critical coagulation rate constant as a new criterion for evaluating coagulant performance.
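    The "second-order form of the coagulation process" presumably refers to the integrated second-order rate law; a generic statement, with N the particle (turbidity) concentration at time t and symbols chosen here for illustration rather than taken from the paper:

        % Second-order coagulation kinetics and the rate constant k2
        % recovered from two turbidity readings:
        \[
          \frac{dN}{dt} = -k_{2}N^{2}
          \quad\Longrightarrow\quad
          \frac{1}{N_{t}} = \frac{1}{N_{0}} + k_{2}t,
          \qquad
          k_{2} = \frac{1}{t}\left(\frac{1}{N_{t}} - \frac{1}{N_{0}}\right)
        \]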

  1. A coupled ice-ocean model of ice breakup and banding in the marginal ice zone

    NASA Technical Reports Server (NTRS)

    Smedstad, O. M.; Roed, L. P.

    1985-01-01

    A coupled ice-ocean numerical model for the marginal ice zone is considered. The model consists of a nonlinear sea ice model and a two-layer (reduced gravity) ocean model. The dependence of the upwelling response on wind stress direction is discussed. The results confirm earlier analytical work. It is shown that there exist directions for which there is no upwelling, while other directions give maximum upwelling in terms of the volume of uplifted water. The ice and ocean are coupled directly through the stress at the ice-ocean interface. An interesting consequence of the coupling is found in cases when the ice edge is almost stationary. In these cases the ice tends to break up a few tens of kilometers inside of the ice edge.

  2. Dose calculations accounting for breathing motion in stereotactic lung radiotherapy based on 4D-CT and the internal target volume.

    PubMed

    Admiraal, Marjan A; Schuring, Danny; Hurkmans, Coen W

    2008-01-01

    The purpose of this study was to determine the 4D accumulated dose delivered to the CTV in stereotactic radiotherapy of lung tumours, for treatments planned on an average CT using an ITV derived from the Maximum Intensity Projection (MIP) CT. For 10 stage I lung cancer patients, treatment plans were generated based on 4D-CT images. From the 4D-CT scan, 10 time-sorted breathing phases were derived, along with the average CT and the MIP. The ITV with a margin of 0 mm was used as a PTV to study a worst-case scenario in which the differences between 3D planning and 4D dose accumulation would be largest. Dose calculations were performed on the average CT. Dose prescription was 60 Gy to 95% of the PTV, and at least 54 Gy should be received by 99% of the PTV. Plans were generated using the inverse planning module of the Pinnacle³ treatment planning system. The plans consisted of nine coplanar beams with two segments each. After optimisation, the treatment plan was transferred to all breathing phases and the delivered dose per phase was calculated using an elastic body spline model available in our research version of Pinnacle (8.1r). Then, the cumulative dose to the CTV over all breathing phases was calculated and compared to the dose distribution of the original treatment plan. Although location, tumour size and breathing-induced tumour movement varied widely between patients, the PTV planning criteria could always be achieved without compromising organs-at-risk criteria. After 4D dose calculations, only very small differences between the initially planned PTV coverage and the resulting CTV coverage were observed. For all patients, the dose delivered to 99% of the CTV exceeded 54 Gy. For nine of the 10 patients, the criterion that more than 95% of the CTV volume receives at least the prescribed dose was also met. When the target dose is prescribed to the ITV (PTV = ITV) and dose calculations are performed on the average CT, the cumulative CTV dose compares well to the planned dose to the ITV. Thus, the concept of treatment plan optimisation and evaluation based on the average CT and the ITV is a valid approach in stereotactic lung treatment. Even with a zero ITV-to-PTV margin, no significantly different dose coverage of the CTV arises from the breathing-motion-induced dose variation over time.

  3. Patterns-of-failure guided biological target volume definition for head and neck cancer patients: FDG-PET and dosimetric analysis of dose escalation candidate subregions.

    PubMed

    Mohamed, Abdallah S R; Cardenas, Carlos E; Garden, Adam S; Awan, Musaddiq J; Rock, Crosby D; Westergaard, Sarah A; Brandon Gunn, G; Belal, Abdelaziz M; El-Gowily, Ahmed G; Lai, Stephen Y; Rosenthal, David I; Fuller, Clifton D; Aristophanous, Michalis

    2017-08-01

    To identify the radio-resistant subvolumes in pretreatment FDG-PET by mapping the spatial location of the origin of tumor recurrence after IMRT for head-and-neck squamous cell cancer to the pretreatment FDG-PET/CT. Patients with local/regional recurrence after IMRT with available FDG-PET/CT and post-failure CT were included. For each patient, both the pre-therapy PET/CT and the recurrence CT were co-registered with the planning CT (pCT). A 4-mm radius was added to the centroid of mapped recurrence growth target volumes (rGTVs) to create recurrence nidus-volumes (NVs). The overlap between boost-tumor-volumes (BTVs) representing different SUV threshold/margin combinations and NVs was measured. Forty-seven patients were eligible. Forty-two (89.4%) had type A central high-dose failure. Twenty-six (48%) of type A rGTVs were at the primary site and 28 (52%) were at the nodal site. The mean dose of type A rGTVs was 71 Gy. The BTV consisting of 50% of the maximum SUV plus a 10 mm margin was the best subvolume for dose boosting, owing to high coverage of primary-site NVs (92.3%), a low average volume relative to CTV1 (41%), and the smallest average percentage of voxels outside CTV1 (19%). The majority of loco-regional recurrences originate in regions of central high dose. When correlated with pretreatment FDG-PET, the majority of recurrences originated in an area that would be covered by an additional 10 mm margin on the volume of 50% of the maximum FDG uptake.
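    A rough sketch of the winning boost-volume construction (50% of SUVmax plus a ~10 mm margin), using a synthetic hot spot and an approximate voxel-based dilation; shapes, voxel size and intensities are placeholders, not the study's data:

        import numpy as np
        from scipy.ndimage import binary_dilation, generate_binary_structure

        # Synthetic "PET" volume: a single Gaussian hot spot.
        z, y, x = np.mgrid[0:32, 0:64, 0:64]
        suv = 12.0 * np.exp(-((z - 16)**2 + (y - 32)**2 + (x - 32)**2) / 30.0)

        core = suv >= 0.5 * suv.max()                 # 50% SUVmax threshold

        voxel_mm = 4.0                                # assumed isotropic voxels
        n_iter = int(round(10.0 / voxel_mm))          # ~10 mm in whole voxels
        struct = generate_binary_structure(rank=3, connectivity=1)
        btv = binary_dilation(core, structure=struct, iterations=n_iter)
        print(f"core voxels: {core.sum()}, BTV voxels: {btv.sum()}")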

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fury, Matthew G.; Department of Medicine, Weill Cornell Medical College, New York, New York; Lee, Nancy Y.

    Purpose: Elevated expression of eukaryotic protein synthesis initiation factor 4E (eIF4E) in histologically cancer-free margins of resected head and neck squamous cell carcinomas (HNSCCs) is mediated by mammalian target of rapamycin complex 1 (mTORC1) and has been associated with increased risk of disease recurrence. Preclinically, inhibition of mTORC1 with everolimus sensitizes cancer cells to cisplatin and radiation. Methods and Materials: This was a single-institution phase 1 study to establish the maximum tolerated dose of daily everolimus given with fixed-dose cisplatin (30 mg/m² weekly × 6) and concurrent intensity modulated radiation therapy for patients with locally and/or regionally advanced head-and-neck cancer. The study had a standard 3 + 3 dose-escalation design. Results: Tumor primary sites were oral cavity (4), salivary gland (4), oropharynx (2), nasopharynx (1), scalp (1), and neck node with occult primary (1). In 4 of 4 cases in which resected HNSCC surgical pathology specimens were available for immunohistochemistry, elevated expression of eIF4E was observed in the cancer-free margins. The most common grade ≥3 treatment-related adverse event was lymphopenia (92%), and dose-limiting toxicities (DLTs) were mucositis (n=2) and failure to thrive (n=1). With a median follow-up of 19.4 months, 2 patients have experienced recurrent disease. The maximum tolerated dose was everolimus 5 mg/day. Conclusions: Head-and-neck cancer patients tolerated everolimus at therapeutic doses (5 mg/day) given with weekly cisplatin and intensity modulated radiation therapy. The regimen merits further evaluation, especially among patients who are status post resection of HNSCCs that harbor mTORC1-mediated activation of eIF4E in histologically negative surgical margins.
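    The study used a standard 3 + 3 dose-escalation design; a sketch of one common variant of its decision logic (the trial's exact rules may differ):

        # Decision logic for one common 3 + 3 variant, per dose level.

        def three_plus_three(dlt_first3, dlt_next3=None):
            """Action from dose-limiting toxicity (DLT) counts at a dose level."""
            if dlt_first3 == 0:
                return "escalate to next dose level"
            if dlt_first3 == 1:
                if dlt_next3 is None:
                    return "expand cohort to 6 patients at this dose"
                if dlt_first3 + dlt_next3 == 1:          # 1/6 with DLT
                    return "escalate to next dose level"
                return "MTD exceeded; previous dose is the MTD"
            return "MTD exceeded; previous dose is the MTD"  # >= 2/3 with DLT

        print(three_plus_three(1))      # expand cohort
        print(three_plus_three(1, 0))   # 1/6 -> escalate
        print(three_plus_three(1, 1))   # 2/6 -> MTD exceeded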

  5. Phased occupation and retreat of the last British-Irish Ice Sheet in the southern North Sea; geomorphic and seismostratigraphic evidence of a dynamic ice lobe

    NASA Astrophysics Data System (ADS)

    Dove, Dayton; Evans, David J. A.; Lee, Jonathan R.; Roberts, David H.; Tappin, David R.; Mellett, Claire L.; Long, David; Callard, S. Louise

    2017-05-01

    Along the terrestrial margin of the southern North Sea, previous studies of the MIS 2 glaciation impacting eastern Britain have played a significant role in the development of principles relating to ice sheet dynamics (e.g. deformable beds), and the practice of reconstructing the style, timing, and spatial configuration of palaeo-ice sheets. These detailed terrestrially-based findings have however relied on observations made from only the outer edges of the former ice mass, as the North Sea Lobe (NSL) of the British-Irish Ice Sheet (BIIS) occupied an area that is now almost entirely submarine (c.21-15 ka). Compounded by the fact that marine-acquired data have been primarily of insufficient quality and density, the configuration and behaviour of the last BIIS in the southern North Sea remains surprisingly poorly constrained. This paper presents analysis of a new, integrated set of extensive seabed geomorphological and seismo-stratigraphic observations that both advances the principles developed previously onshore (e.g. multiple advance and retreat cycles), and provides a more detailed and accurate reconstruction of the BIIS at its southern-most extent in the North Sea. A new bathymetry compilation of the region reveals a series of broad sedimentary wedges and associated moraines that represent several terminal positions of the NSL. These former still-stand ice margins (1-4) are also found to relate to newly-identified architectural patterns (shallow stacked sedimentary wedges) in the region's seismic stratigraphy (previously mapped singularly as the Bolders Bank Formation). With ground-truthing constraint provided by sediment cores, these wedges are interpreted as sub-marginal till wedges, formed by complex subglacial accretionary processes that resulted in till thickening towards the former ice-sheet margins. The newly sub-divided shallow seismic stratigraphy (at least five units) also provides an indication of the relative event chronology of the NSL. While there is a general record of south-to-north retreat, seismic data also indicate episodes of ice-sheet re-advance suggestive of an oscillating margin (e.g. MIS 2 maximum not related to first incursion of ice into region). Demonstrating further landform interdependence, geographically-grouped sets of tunnel valleys are shown to be genetically related to these individual ice margins, providing clear insight into how meltwater drainage was organised at the evolving termini of this dynamic ice lobe. The newly reconstructed offshore ice margins are found to be well correlated with previously observed terrestrial limits in Lincolnshire and E. Yorkshire (Holderness) (e.g. MIS 2 maximum and Withernsea Till). This reconstruction will hopefully provide a useful framework for studies targeting the climatic, mass-balance, and external glaciological factors (i.e. Fennoscandian Ice Sheet) that influenced late-stage advance and deglaciation, important for accurately characterising both modern and palaeo-ice sheets.

  6. Influence of Material Selection on the Marginal Accuracy of CAD/CAM-Fabricated Metal- and All-Ceramic Single Crown Copings

    PubMed Central

    Schneider, Lea; Rinke, Sven

    2018-01-01

    This study evaluated the marginal accuracy of CAD/CAM-fabricated crown copings from four different materials within the same processing route. Twenty stone replicas of a metallic master die (prepared upper premolar) were scanned and divided into two groups. Group 1 (n = 10) was used for a pilot test to determine the design parameters for best marginal accuracy. Group 2 (n = 10) was used to fabricate 10 specimens from the following materials with one identical CAD/CAM system (GAMMA 202, Wissner GmbH, Goettingen, Germany): A = commercially pure (cp) titanium, B = cobalt-chromium alloy, C = yttria-stabilized zirconia (YSZ), and D = leucite-reinforced glass-ceramics. Copings from group 2 were evaluated for the mean marginal gap size (MeanMG) and average maximum marginal gap size (AMaxMG) with a light microscope in the “as-machined” state. The effect of the material on the marginal accuracy was analyzed by multiple pairwise comparisons (Mann–Whitney U-test, α = 0.05, adjusted by the Bonferroni-Holm method). MeanMG values were as follows: A: 46.92 ± 23.12 μm, B: 48.37 ± 29.72 μm, C: 68.25 ± 28.54 μm, and D: 58.73 ± 21.15 μm. The differences in the MeanMG values proved to be significant for groups A/C (p = 0.0024), A/D (p = 0.008), and B/C (p = 0.0332). AMaxMG values (A: 91.54 ± 23.39 μm, B: 96.86 ± 24.19 μm, C: 120.66 ± 32.75 μm, and D: 100.22 ± 10.83 μm) revealed no significant differences. The material had a significant impact on the marginal accuracy of CAD/CAM-fabricated copings. PMID:29765979
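    A minimal sketch of pairwise Mann-Whitney tests with a Holm(-Bonferroni) adjustment of the kind described above, using synthetic gap values in place of the measured data:

        from itertools import combinations
        from scipy.stats import mannwhitneyu
        from statsmodels.stats.multitest import multipletests

        # Synthetic placeholder gap values (micrometres) per material group.
        groups = {
            "cp-Ti": [45, 48, 44, 50, 47],
            "Co-Cr": [47, 49, 50, 46, 48],
            "YSZ":   [66, 70, 69, 68, 67],
            "glass": [57, 60, 59, 58, 61],
        }

        pairs = list(combinations(groups, 2))
        pvals = [mannwhitneyu(groups[a], groups[b], alternative="two-sided").pvalue
                 for a, b in pairs]

        # Holm step-down adjustment across all pairwise comparisons.
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
        for (a, b), p, r in zip(pairs, p_adj, reject):
            print(f"{a} vs {b}: adjusted p = {p:.3f}, significant = {r}")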

  7. Langevin approach to a chemical wave front: Selection of the propagation velocity in the presence of internal noise

    NASA Astrophysics Data System (ADS)

    Lemarchand, A.; Lesne, A.; Mareschal, M.

    1995-05-01

    The reaction-diffusion equation associated with the Fisher chemical model A + B → 2A admits wave-front solutions by replacing an unstable stationary state with a stable one. The deterministic analysis concludes that their propagation velocity is not prescribed by the dynamics. For a large class of initial conditions the velocity which is spontaneously selected is equal to the minimum allowed velocity vmin, as predicted by the marginal stability criterion. In order to test the relevance of this deterministic description we investigate the macroscopic consequences, on the velocity and the width of the front, of the intrinsic stochasticity due to the underlying microscopic dynamics. We solve numerically the Langevin equations, deduced analytically from the master equation within a system-size expansion procedure. We show that the mean profile associated with the stochastic solution propagates faster than the deterministic solution, at a velocity up to 25% greater than vmin.
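    For reference, the marginal stability criterion applied to the Fisher reaction-diffusion equation selects the standard minimum front velocity (written here with a generic rate constant k and diffusivity D):

        % Fisher front: \partial_t a = D\,\partial_x^2 a + k\,a\,(1 - a).
        % Marginal stability selects the minimum allowed velocity
        \[
          v_{\min} = 2\sqrt{kD}
        \]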

  8. Scoring and setting pass/fail standards for an essay certification examination in nurse-midwifery.

    PubMed

    Fullerton, J T; Greener, D L; Gross, L J

    1992-03-01

    Examination for certification or licensure of health professionals (credentialing) in the United States is almost exclusively of the multiple choice format. The certification examination for entry into the practice of the profession of nurse-midwifery has, however, used a modified essay format throughout its twenty-year history. The examination has recently undergone a revision in the method for score interpretation and for pass/fail decision-making. The revised method, described in this paper, has important implications for all health professional credentialing agencies which use modified essay, oral or practical methods of competency assessment. This paper describes criterion-referenced scoring, the process of constructing the essay items, the methods for assuring validity and reliability for the examination, and the manner of standard setting. In addition, two alternative methods for increasing the validity of the pass/fail decision are evaluated, and the rationale for decision-making about marginal candidates is described.

  9. Effects of positive impression management on the NEO Personality Inventory--Revised in a clinical population.

    PubMed

    Ballenger, J F; Caldwell-Andrews, A; Baer, R A

    2001-06-01

    Sixty adults in outpatient psychotherapy completed the NEO Personality Inventory--Revised (NEO PI-R, P. T. Costa & R. R. McCrae, 1992a). Half were instructed to fake good and half were given standard instructions. All completed the Interpersonal Adjective Scale--Revised, Big Five (J. S. Wiggins & P. D. Trapnell, 1997) under standard instructions, and their therapists completed the observer rating form of the NEO Five-Factor Inventory. A comparison group of 30 students completed the NEO PI-R under standard instructions. Standard and fake-good participants obtained significantly different NEO PI-R domain scores. Correlations between the NEO PI-R and criterion measures were significantly lower for faking than for standard patients. Validity scales for the NEO PI-R (J. A. Schinka, B. N. Kinder, & T. Kremer, 1997) were moderately accurate in discriminating faking from standard patients, but were only marginally accurate in discriminating faking patients from students.

  10. Interstitial water studies on small core samples, Deep Sea Drilling Project: Leg 10

    USGS Publications Warehouse

    Manheim, Frank T.; Sayles, Fred L.; Waterman, Lee S.

    1973-01-01

    Leg 10 interstitial water analyses provide new indications of the distribution of rock salt beneath the floor of the Gulf of Mexico, both confirming areas previously indicated to be underlain by salt bodies and extending evidence of salt distribution to seismically featureless areas in the Sigsbee Knolls trend and Isthmian Embayment. The criterion for presence of salt at depth is a consistent increase in interstitial salinity and chlorinity with depth. Site 86, on the northern margin of the Yucatan Platform, provided no evidence of salt at depth. Thus, our data tend to rule out the suggestion of Antoine and Bryant (1969) that the Sigsbee Knolls salt was squeezed out from beneath the Yucatan Scarp. Cores from Sites 90 and 91, in the central Sigsbee Deep, were not obtained from a great enough depth to yield definite evidence for the presence of buried salt.

  11. Recognition of maximum flooding events in mixed siliciclastic-carbonate systems: Key to global chronostratigraphic correlation

    USGS Publications Warehouse

    Mancini, E.A.; Tew, B.H.

    1997-01-01

    The maximum flooding event within a depositional sequence is an important datum for correlation because it represents a virtually synchronous horizon. This event is typically recognized by a distinctive physical surface and/or a significant change in microfossil assemblages (relative fossil abundance peaks) in siliciclastic deposits from shoreline to continental slope environments in a passive margin setting. Recognition of maximum flooding events in mixed siliciclastic-carbonate sediments is more complicated because the entire section usually represents deposition in continental shelf environments with varying rates of biologic and carbonate productivity versus siliciclastic influx. Hence, this event cannot be consistently identified simply by relative fossil abundance peaks. Factors such as siliciclastic input, carbonate productivity, sediment accumulation rates, and paleoenvironmental conditions dramatically affect the relative abundances of microfossils. Failure to recognize these complications can lead to a sequence stratigraphic interpretation that substantially overestimates the number of depositional sequences of 1 to 10 m.y. duration.

  12. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  13. Maximum entropy deconvolution of the optical jet of 3C 273

    NASA Technical Reports Server (NTRS)

    Evans, I. N.; Ford, H. C.; Hui, X.

    1989-01-01

    The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
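    A generic statement of the maximum-entropy restoration problem referred to above (one common Skilling-Bryan-style form; the paper's exact objective may differ):

        % Maximize image entropy subject to consistency with the blurred
        % data, with f the restored image, m a default model, d the data,
        % and N the number of data points:
        \[
          \max_{f \ge 0}\; S(f) = -\sum_{i} f_{i}\,\ln\!\frac{f_{i}}{m_{i}}
          \quad \text{subject to} \quad
          \chi^{2}\!\left(f * \mathrm{PSF},\, d\right) \le N
        \]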

  14. Confort 15 model of conduit dynamics: applications to Pantelleria Green Tuff and Etna 122 BC eruptions

    NASA Astrophysics Data System (ADS)

    Campagnola, S.; Romano, C.; Mastin, L. G.; Vona, A.

    2016-06-01

    Numerical simulations are useful tools to illustrate how flow parameters and physical processes may affect the eruption dynamics of volcanoes. In this paper, we present an updated version of the Conflow model, an open-source numerical model for flow in eruptive conduits during steady-state pyroclastic eruptions (Mastin and Ghiorso in A numerical program for steady-state flow of magma-gas mixtures through vertical eruptive conduits. U.S. Geological Survey Open File Report 00-209, 2000). In the modified version, called Confort 15, the rheological constraints are improved, incorporating the most recent constitutive equations for both the liquid viscosity and crystal-bearing rheology. This allows all natural magma compositions, including the peralkaline melts excluded in the original version, to be investigated. The crystal-bearing rheology is improved by computing the effect of strain rate and crystal shape on the rheology of natural magmatic suspensions and by expanding the crystal content range in which rheology can be modeled compared to the original version (Conflow is applicable to magmatic mixtures with up to 30 vol% crystal content). Moreover, volcanological studies of the juvenile products (crystal and vesicle size distribution) of the investigated eruption are directly incorporated into the modeling procedure. Vesicle number densities derived from textural analyses are used to calculate, through Toramaru equations, maximum decompression rates experienced during ascent. Finally, both degassing under equilibrium and disequilibrium conditions are considered. This allows consideration of the effect of different fragmentation criteria on the conduit flow analyses: the maximum volume fraction criterion ("porosity criterion"), the brittle fragmentation criterion and the overpressure fragmentation criterion. Simulations of the pantelleritic and trachytic phases of the Green Tuff (Pantelleria) and of the Plinian Etna 122 BC eruptions are performed to test the upgrades in the Confort 15 modeling. Conflow and Confort 15 numerical results are compared by analyzing the effects of viscosity, decompression rate, temperature, fragmentation criteria (critical strain rate, porosity and overpressure criteria) and equilibrium versus disequilibrium degassing on the magma flow along volcanic conduits. The equilibrium simulation results indicate that an increase in viscosity, a faster decompression rate, a decrease in temperature or the application of the porosity criterion in place of the strain-rate criterion deepens the fragmentation depth. Initial velocity and mass flux of the mixture are directly correlated with each other and inversely proportional to an increase in viscosity, except for the case in which a faster decompression rate is assumed. Taking into account up-to-date viscosity parameterizations or inputting a faster decompression rate, a much larger decrease in the average pressure along the conduit compared to previous studies is recorded, enhancing water exsolution and degassing. Disequilibrium degassing initiates only at very shallow conditions near the surface. Brittle fragmentation (i.e., depending on the strain-rate criterion) in the pantelleritic Green Tuff eruption simulations is mainly a function of the initial temperature. In the case of the Etna 122 BC Plinian eruption, the viscosity strongly affects the magma ascent dynamics along the conduit.
Using Confort 15, and therefore incorporating the most recent constitutive rheological parameterizations, we could calculate the mixture viscosity increase due to the presence of microlites. Results show that these seemingly low-viscosity magmas can explosively fragment in a brittle manner. Mass fluxes resulting from simulations which better represent the natural case (i.e., microlite-bearing) are consistent with values found in the literature for Plinian eruptions (~10^6 kg/s). The disequilibrium simulations, both for the Green Tuff and Etna 122 BC eruptions, indicate that overpressure sufficient for fragmentation (if present) occurs only at very shallow conditions near the surface.

  15. Surface expression of Eastern Mediterranean slab dynamics: Uplift at the SW margin of the Central Anatolian Plateau

    NASA Astrophysics Data System (ADS)

    Schildgen, T. F.; Cosentino, D.; Caruso, A.; Yildirim, C.; Echtler, H.; Strecker, M. R.

    2011-12-01

    The Central Anatolian plateau in Turkey borders one of the most complex tectonic regions on Earth, where collision of the Arabian plate with Eurasia in Eastern Anatolia transitions to a cryptic pattern of subduction of the African beneath the Eurasian plate, with concurrent westward extrusion of the Anatolian microplate. Topographic growth of the southern margin of the Central Anatolian plateau has proceeded in discrete stages that can be distinguished based on the outcrop pattern and ages of uplifted marine sediments. These marine units, together with older basement rocks and younger continental sedimentary fills, also record an evolving nature of crustal deformation and uplift patterns that can be used to test the viability of different uplift mechanisms that have contributed to generate the world's third-largest orogenic plateau. Late Miocene marine sediments outcrop along the SW plateau margin at 1.5 km elevation, while they blanket the S and SE margins at up to more than 2 km elevation. Our new biostratigraphic data limit the age of 1.5-km-high marine sediments along the SW plateau margin to < 7.17 Ma, while regional lithostratigraphic correlations imply that the age is < 6.7 Ma. After reconstructing the post-Late Miocene surface uplift pattern from elevations of uplifted marine sediments and geomorphic reference surfaces, it is clear that regional surface uplift reaches maximum values along the modern plateau margin, with the SW margin experiencing less cumulative uplift compared to the S and SE margins. Our structural measurements and inversion modeling of faults within the uplifted region agree with previous findings in surrounding regions, with early contraction followed by strike-slip and extensional deformation. Shallow earthquake focal mechanisms show that the extensional phase has continued to the present. Broad similarities in the onset of surface uplift (after 7 Ma) and a change in the kinematic evolution of the plateau margin (after 8 Ma) suggest that these phenomena may have been linked with a change in the tectonic stress field associated with the process(es) causing post-7 Ma surface uplift. The complex geometry of lithospheric slabs beneath the southern plateau margin, early Pliocene to recent alkaline volcanism, and the localized uplift pattern with accompanying tensional/transtensional stresses point toward slab tearing and localized heating at the base of the lithosphere as a probable mechanism for post-7 Ma uplift of the SW margin. Considering previous work in the region, slab break-off is more likely responsible for non-contractional uplift along the S and SE margins. Overall there appears to be an important link between slab dynamics and surface uplift across the whole southern margin of the Central Anatolian plateau.

  16. The provision of clearances accuracy in piston - cylinder mating

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.

    2017-08-01

    The paper is aimed at increasing the quality of pumping equipment in the oil and gas industry. The main purpose of the study is to stabilize the maximum values of productivity and durability of the pumping equipment based on selective assembly of the cylinder-piston kinematic mating by an optimization criterion. It is shown that the minimum clearance in the piston-cylinder mating is formed by the maximum material dimensions. It is proved that the maximum material dimensions are characterized by their own laws of distribution within the tolerance limits for the diameters of the cylinder internal mirror and the outer cylindrical surface of the piston. Accordingly, their dispersion zones should be divided into size groups with a group tolerance equal to half the tolerance for the minimum clearance. The techniques for measuring the material dimensions - the smallest cylinder diameter and the largest piston diameter according to the envelope condition - are developed for sorting them into size groups. Reliable control of dimensional precision ensures optimal minimum clearances of the piston-cylinder mating in all size groups of the pumping equipment, which is necessary for increasing the equipment's productivity and durability during production, operation and repair.
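    A minimal sketch of the size-group sorting described above, with hypothetical diameters and tolerance values (not the paper's data):

        import numpy as np

        # Group width is half the tolerance on the minimum clearance,
        # per the abstract. All numbers below are illustrative.
        tol_min_clearance = 0.040            # mm, assumed clearance tolerance
        group_width = tol_min_clearance / 2  # group tolerance

        nominal = 50.000                                       # mm, assumed nominal size
        cylinders = np.array([50.012, 50.031, 50.055, 50.068]) # smallest bore diameters
        pistons = np.array([49.972, 49.991, 50.015, 50.028])   # largest piston diameters

        # Group index counted from the nominal size; parts are mated from
        # matching size groups to keep the clearance in the optimal band.
        cyl_groups = np.floor((cylinders - nominal) / group_width).astype(int)
        pis_groups = np.floor((pistons - nominal) / group_width).astype(int)
        print(cyl_groups, pis_groups)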

  17. Comparative evaluation of human heat stress indices on selected hospital admissions in Sydney, Australia.

    PubMed

    Goldie, James; Alexander, Lisa; Lewis, Sophie C; Sherwood, Steven

    2017-08-01

    To find appropriate regression model specifications for counts of the daily hospital admissions of a Sydney cohort and determine which human heat stress indices best improve the models' fit. We built parent models of eight daily counts of admission records using weather station observations, census population estimates and public holiday data. We then added heat stress indices; models with lower Akaike Information Criterion (AIC) scores were judged a better fit. Five of the eight parent models demonstrated adequate fit. Daily maximum simplified Wet Bulb Globe Temperature (sWBGT) consistently improved fit more than most other indices; temperature and heatwave indices also modelled some health outcomes well. Humidity and heat-humidity indices better fit counts of patients who died following admission. Maximum sWBGT is an ideal measure of heat stress for these types of Sydney hospital admissions. Simple temperature indices are a good fallback where a narrower range of conditions is investigated. Implications for public health: This study confirms the importance of selecting appropriate heat stress indices for modelling. Epidemiologists projecting Sydney hospital admissions should use maximum sWBGT as a common measure of heat stress. Health organisations interested in short-range forecasting may prefer simple temperature indices.
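    A minimal sketch of the AIC comparison step, fitting synthetic admission counts with and without a heat-stress covariate (placeholder data and variable names, not the study's cohort records):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 365
        df = pd.DataFrame({
            "admissions": rng.poisson(5, n),       # daily admission counts
            "swbgt_max": rng.normal(22, 4, n),     # daily maximum sWBGT
            "holiday": rng.integers(0, 2, n),      # public holiday flag
        })

        base = smf.glm("admissions ~ holiday", data=df,
                       family=sm.families.Poisson()).fit()
        heat = smf.glm("admissions ~ holiday + swbgt_max", data=df,
                       family=sm.families.Poisson()).fit()

        # Lower AIC indicates the better-fitting specification.
        print(f"AIC without index: {base.aic:.1f}, with index: {heat.aic:.1f}")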

  18. Integrated optic single-ring filter for narrowband phase demodulation

    NASA Astrophysics Data System (ADS)

    Madsen, C. K.

    2017-05-01

    Integrated optic notch filters are key building blocks for higher-order spectral filter responses and have been demonstrated in many technology platforms from dielectrics (such as Si3N4) to semiconductors (Si photonics). Photonic-assisted RF processing applications for notch filters include identifying and filtering out high-amplitude, narrowband signals that may be interfering with the desired signal, including undesired frequencies detected in radar and free-space optical links. The fundamental tradeoffs for bandwidth and rejection depth as a function of the roundtrip loss and coupling coefficient are investigated along with the resulting spectral phase response for minimum-phase and maximum-phase responses compared to the critical coupling condition and integration within a Mach Zehnder interferometer. Based on a full width at half maximum criterion, it is shown that maximum-phase responses offer the smallest bandwidths for a given roundtrip loss. Then, a new role for passive notch filters in combination with high-speed electro-optic phase modulation is explored around narrowband phase-to-amplitude demodulation using a single ring operating on one sideband. Applications may include microwave processing and instantaneous frequency measurement (IFM) for radar, space and defense applications.
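    For orientation, one common form of the single-ring through-port response underlying these tradeoffs (an assumed, generic parameterization; conventions vary between references):

        % Through-port field transmission of a single ring, with t the
        % self-coupling coefficient, a the roundtrip amplitude transmission,
        % and phi the roundtrip phase:
        \[
          H(\phi) = \frac{t - a\,e^{i\phi}}{1 - t\,a\,e^{i\phi}}
        \]
        % Critical coupling (t = a) gives, in principle, infinite notch
        % rejection; a < t yields a minimum-phase response and a > t a
        % maximum-phase response.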

  19. Effects of diffusion on total biomass in heterogeneous continuous and discrete-patch systems

    USGS Publications Warehouse

    DeAngelis, Donald L.; Ming Ni, Wei; Zhang, Bo

    2016-01-01

    Theoretical models of populations on a system of two connected patches previously have shown that when the two patches differ in maximum growth rate and carrying capacity, and in the limit of high diffusion, conditions exist for which the total population size at equilibrium exceeds that of the ideal free distribution, which predicts that the total population would equal the total carrying capacity of the two patches. However, this result has only been shown for the Pearl-Verhulst growth function on two patches and for a single-parameter growth function in continuous space. Here, we provide a general criterion for total population size to exceed total carrying capacity for three commonly used population growth rates for both heterogeneous continuous and multi-patch heterogeneous landscapes with high population diffusion. We show that a sufficient condition for this situation is that there is a convex positive relationship between the maximum growth rate and the parameter that, by itself or together with the maximum growth rate, determines the carrying capacity, as both vary across a spatial region. This relationship occurs in some biological populations, though not in others, so the result has ecological implications.
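    A generic statement of the two-patch setting described above (the form is assumed from the standard literature, not copied from the paper):

        % Two-patch logistic growth with diffusive coupling, patches
        % i = 1, 2 and j the other patch:
        \[
          \frac{dN_i}{dt} = r_i N_i\!\left(1 - \frac{N_i}{K_i}\right)
          + D\,(N_j - N_i), \qquad i \ne j
        \]
        % For large D, the equilibrium total N_1 + N_2 can exceed
        % K_1 + K_2 when growth rate and carrying capacity co-vary
        % suitably across patches.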

  20. Implementation of test for quality assurance in nuclear medicine gamma camera

    NASA Astrophysics Data System (ADS)

    Moreno, A. Montoya; Laguna, A. Rodríguez; Zamudio, Flavio E. Trujillo

    2012-10-01

    In nuclear medicine (NM), over 90% of procedures are performed for diagnostic purposes. To ensure adequate diagnostic image quality and optimization of the doses that patients receive from the radioactive material, regular monitoring of equipment performance through a quality assurance program (QAP) is essential. The QAP consists of 15 proposed tomographic and non-tomographic gamma camera (GC) performance tests and is based on recommendations of international organizations. We describe some results of the performance parameters of the QAP applied to a Siemens e.cam GC in the Department of NM of the National Cancer Institute of Mexico (INCan). The results were: (1) The average intrinsic spatial resolution (Rin) was 4.67 ± 0.25 mm, at the limit of the acceptance criterion of 4.4 mm. (2) The extrinsic sensitivity (Sext) showed maximum variations of 1.8% (less than the 2% acceptance criterion). (3) Rotational uniformity (Urot) gave integral uniformity (IU) values in the useful field of view (UFOV) of the detector with a maximum percentage change of 0.97%, and monthly variations at equal angles ranging from 0.13 to 0.99%, below the 1% limit. (4) The displacement of the center of rotation (DCOR) indicated a maximum deviation of 0.155 ± 0.039 mm, less than the 4.795 mm limit, and an absolute deviation of 0.085 pixel, less than the suggested 0.5 pixel; these criteria are assigned to the low-energy high-resolution collimator. (5) In tomographic uniformity (Utomo), IU values (%) and percentage noise level (rms%) were 7.54 ± 1.53 and 4.18 ± 1.69, consistent with the acceptance limits of 7.0-12.0% and 3.0-6.0%, respectively. The smallest detectable cold sphere had a diameter of 11.4 mm. The implementation of a QAP allows for high-quality diagnostic images, optimization of the doses given to patients, a reduction of exposure to occupationally exposed workers (POE, by its Spanish acronym), and generally improves the productivity of the service. This proposal can be used to develop a similar QAP in other facilities and may serve as a precedent for proposed regulations for quality assurance (QA) in NM.
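    For reference, the integral uniformity figure quoted in items (3) and (5) is conventionally defined (NEMA-style) as:

        % Integral uniformity over the UFOV, with C_max and C_min the
        % maximum and minimum pixel counts in the uniformity image:
        \[
          \mathrm{IU} = 100\% \times
          \frac{C_{\max} - C_{\min}}{C_{\max} + C_{\min}}
        \]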

  1. Marginal Structural Models with Counterfactual Effect Modifiers.

    PubMed

    Zheng, Wenjing; Luo, Zhehui; van der Laan, Mark J

    2018-06-08

    In health and social sciences, research questions often involve systematic assessment of the modification of treatment causal effect by patient characteristics. In longitudinal settings, time-varying or post-intervention effect modifiers are also of interest. In this work, we investigate the robust and efficient estimation of the Counterfactual-History-Adjusted Marginal Structural Model (van der Laan MJ, Petersen M. Statistical learning of origin-specific statically optimal individualized treatment rules. Int J Biostat. 2007;3), which models the conditional intervention-specific mean outcome given a counterfactual modifier history in an ideal experiment. We establish the semiparametric efficiency theory for these models, and present a substitution-based, semiparametric efficient and doubly robust estimator using the targeted maximum likelihood estimation methodology (TMLE, e.g. van der Laan MJ, Rubin DB. Targeted maximum likelihood learning. Int J Biostat. 2006;2, van der Laan MJ, Rose S. Targeted learning: causal inference for observational and experimental data, 1st ed. Springer Series in Statistics. Springer, 2011). To facilitate implementation in applications where the effect modifier is high dimensional, our third contribution is a projected influence function (and the corresponding projected TMLE estimator), which retains most of the robustness of its efficient peer and can be easily implemented in applications where the use of the efficient influence function becomes taxing. We compare the projected TMLE estimator with an Inverse Probability of Treatment Weighted estimator (e.g. Robins JM. Marginal structural models. In: Proceedings of the American Statistical Association. Section on Bayesian Statistical Science, 1-10. 1997a, Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. 2000;11:561-570), and a non-targeted G-computation estimator (Robins JM. A new approach to causal inference in mortality studies with sustained exposure periods - application to control of the healthy worker survivor effect. Math Modell. 1986;7:1393-1512.). The comparative performance of these estimators is assessed in a simulation study. The use of the projected TMLE estimator is illustrated in a secondary data analysis for the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial where effect modifiers are subject to missing at random.
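    A minimal point-treatment sketch of the Inverse Probability of Treatment Weighted estimator that the abstract compares against (synthetic data; the actual longitudinal MSM estimators are substantially more involved):

        import numpy as np
        import statsmodels.api as sm

        # Weight each subject by the inverse probability of the treatment
        # actually received, then take weighted outcome means by arm.
        rng = np.random.default_rng(2)
        n = 2000
        w = rng.normal(size=n)                       # baseline covariate
        a = rng.binomial(1, 1 / (1 + np.exp(-w)))    # confounded treatment
        y = 1.0 * a + 0.5 * w + rng.normal(size=n)   # outcome

        # Propensity score via logistic regression.
        X = sm.add_constant(w)
        ps = sm.Logit(a, X).fit(disp=0).predict(X)
        wt = a / ps + (1 - a) / (1 - ps)

        # IPTW estimate of the average treatment effect (true value: 1.0).
        ate = (np.average(y[a == 1], weights=wt[a == 1])
               - np.average(y[a == 0], weights=wt[a == 0]))
        print(f"IPTW ATE estimate: {ate:.2f}")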

  2. Comparison of two methods for detection of strain localization in sheet forming

    NASA Astrophysics Data System (ADS)

    Lumelskyj, Dmytro; Lazarescu, Lucian; Banabic, Dorel; Rojek, Jerzy

    2018-05-01

    This paper presents a comparison of two criteria of strain localization in experimental research and numerical simulation of sheet metal forming. The first criterion is based on the analysis of the through-thickness thinning (through-thickness strain) and its first time derivative in the most strained zone. The limit strain in the second method is determined by the maximum of the strain acceleration. Experimental and numerical investigation have been carried out for the Nakajima test performed for different specimens of the DC04 grade steel sheet. The strain localization has been identified by analysis of experimental and numerical curves showing the evolution of strains and their derivatives in failure zones. The numerical and experimental limit strains calculated from both criteria have been compared with the experimental FLC evaluated according to the ISO 12004-2 norm. It has been shown that the first method predicts formability limits closer to the experimental FLC. The second criterion predicts values of strains higher than FLC determined according to ISO norm. These values are closer to the strains corresponding to the fracture limit. The results show that analysis of strain evolution allows us to determine strain localization in numerical simulation and experimental studies.
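    A minimal sketch of the second criterion, locating the limit strain at the maximum of the strain acceleration computed from a synthetic strain history:

        import numpy as np

        # Synthetic thinning-strain history: slow uniform growth followed
        # by a localization episode (all values illustrative).
        t = np.linspace(0.0, 1.0, 500)
        strain = 0.3 * t + 0.2 / (1 + np.exp(-(t - 0.8) / 0.02))

        rate = np.gradient(strain, t)    # first time derivative
        accel = np.gradient(rate, t)     # strain acceleration

        i_loc = np.argmax(accel)         # instant of strain localization
        print(f"localization at t = {t[i_loc]:.3f}, "
              f"limit strain = {strain[i_loc]:.3f}")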

  3. Self-masking: Listening during vocalization. Normal hearing.

    PubMed

    Borg, Erik; Bergkvist, Christina; Gustafsson, Dan

    2009-06-01

    What underlying mechanisms are involved in the ability to talk and listen simultaneously and what role does self-masking play under conditions of hearing impairment? The purpose of the present series of studies is to describe a technique for assessment of masked thresholds during vocalization, to describe normative data for males and females, and to focus on hearing impairment. The masking effect of vocalized [a:] on narrow-band noise pulses (250-8000 Hz) was studied using the maximum vocalization method. An amplitude-modulated series of sound pulses, which sounded like a steam engine, was masked until the criterion of halving the perceived pulse rate was reached. For masking of continuous reading, a just-follow-conversation criterion was applied. Intra-session test-retest reproducibility and inter-session variability were calculated. The results showed that female voices were more efficient in masking high frequency noise bursts than male voices and more efficient in masking both a male and a female test reading. The male had to vocalize 4 dBA louder than the female to produce the same masking effect on the test reading. It is concluded that the method is relatively simple to apply and has small intra-session and fair inter-session variability. Interesting gender differences were observed.

  4. Prediction of Central Burst Defects in Copper Wire Drawing Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, G.; NEXANS France, NMC Nexans Metallurgy Centre, Boulevard du Marais, BP39, F-62301 Lens; Haddi, A.

    2011-01-17

    In this study, the prediction of chevron cracks (central bursts) in the copper wire drawing process is investigated using experimental and numerical approaches. The conditions for chevron crack creation along the wire axis depend on (i) the die angle, the friction coefficient between the die and the wire, (ii) the reduction in cross-sectional area of the wire, (iii) the material properties and (iv) the drawing velocity or strain rate. Under various drawing conditions, a numerical simulation for the prediction of central burst defects is presented using an axisymmetric finite element model. This model is based on the application of the Cockcroft and Latham fracture criterion. This criterion was used as the damage value to estimate if and where defects will occur during the copper wire drawing. The critical damage value of the material is obtained from a uniaxial tensile test. The results show that the die angle and the reduction ratio have a significant effect on the stress distribution and the maximum damage value. The central bursts are expected to occur when the die angle and reduction ratio reach a critical value. Numerical predictions are compared with experimental observations.
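
    As a rough illustration, the Cockcroft-Latham damage value is commonly written as the integral of the maximum principal tensile stress over the equivalent plastic strain path, W = ∫ max(σ1, 0) dε̄, compared against a critical value from a tension test. The stress-strain history and critical value below are invented.

      import numpy as np

      eps_bar = np.linspace(0.0, 0.8, 200)        # equivalent plastic strain path
      sigma1 = 300.0 + 150.0 * np.sqrt(eps_bar)   # max principal stress (MPa), hardening
      W = np.trapz(np.clip(sigma1, 0.0, None), eps_bar)   # accumulated damage (MPa)
      Wc = 380.0          # hypothetical critical damage value from a uniaxial tensile test
      print(f"W = {W:.1f} MPa -> {'fracture expected' if W >= Wc else 'safe'}")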

  5. How large can the electron to proton mass ratio be in particle-in-cell simulations of unstable systems?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bret, A.; Dieckmann, M. E.

    2010-03-15

    Particle-in-cell simulations are widely used as a tool to investigate instabilities that develop between a collisionless plasma and beams of charged particles. However, even on contemporary supercomputers, it is not always possible to resolve the ion dynamics in more than one spatial dimension with such simulations. The ion mass is thus reduced below 1836 electron masses, which can affect the plasma dynamics during the initial exponential growth phase of the instability and during the subsequent nonlinear saturation. The goal of this article is to assess how far the electron to ion mass ratio can be increased without qualitatively changing the physics. It is first demonstrated that there can be no exact similarity law, which balances a change in the mass ratio with that of another plasma parameter, leaving the physics unchanged. Restricting then the analysis to the linear phase, a criterion allowing a maximum mass ratio to be defined is explicated in terms of the hierarchy of the linear unstable modes. The criterion is applied to the case of a relativistic electron beam crossing an unmagnetized electron-ion plasma.

  6. Time-frequency analysis-based time-windowing algorithm for the inverse synthetic aperture radar imaging of ships

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong

    2018-01-01

    An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatterer. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatterer is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in every moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. Similar image contrast and lower entropy are acquired using the proposed algorithm as compared with those values when using the ICBA.
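
    A sketch of the selection step as described: among range bins that pass a dominant-scatterer test, keep the one with the largest mean amplitude, then track the peak frequency of its time-frequency spectrum. The dominance test here is only a placeholder (the paper's actual judgment criterion is richer), and the function names are invented.

      import numpy as np
      from scipy.signal import stft

      def dominant(bin_echo):
          # placeholder test: one strong, stable scatterer => low amplitude variance
          a = np.abs(bin_echo)
          return a.std() < 0.3 * a.mean()

      def select_bin(echoes):          # echoes: (num_range_bins, slow_time), complex
          amps = np.abs(echoes).mean(axis=1)
          candidates = [i for i in range(len(echoes)) if dominant(echoes[i])]
          return max(candidates, key=lambda i: amps[i]) if candidates else None

      def peak_track(sig, fs):         # fs: pulse repetition frequency
          f, t, Z = stft(sig, fs=fs, nperseg=64)
          return t, f[np.abs(Z).argmax(axis=0)]   # peak frequency at each moment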

  7. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
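
    To make the model-selection step concrete, here is a minimal sketch of how AIC and BIC are formed from a fitted Weibull log-likelihood. Fitting a true Weibull mixture requires EM; as a stand-in, a single component is fitted to simulated data, with the mixture parameter count noted at the end. All data are invented.

      import numpy as np
      from scipy import stats

      x = stats.weibull_min.rvs(2.0, scale=8.0, size=2000, random_state=1)
      c, loc, scale = stats.weibull_min.fit(x, floc=0)     # shape, loc (fixed), scale
      loglik = stats.weibull_min.logpdf(x, c, loc, scale).sum()
      k = 2                                                # free parameters: shape, scale
      aic = 2 * k - 2 * loglik
      bic = k * np.log(len(x)) - 2 * loglik
      print(f"AIC = {aic:.1f}, BIC = {bic:.1f}")
      # For an m-component mixture (loc fixed at 0), k = 3m - 1:
      # m shapes + m scales + (m - 1) free weights.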

  8. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  9. Late glacial and Holocene history of the Greenland Ice Sheet margin, Nunatarssuaq, Northwestern Greenland

    NASA Astrophysics Data System (ADS)

    Farnsworth, L. B.; Kelly, M. A.; Axford, Y.; Bromley, G. R.; Osterberg, E. C.; Howley, J. A.; Zimmerman, S. R. H.; Jackson, M. S.; Lasher, G. E.; McFarlin, J. M.

    2015-12-01

    Defining the late glacial and Holocene fluctuations of the Greenland Ice Sheet (GrIS) margin, particularly during periods that were as warm or warmer than present, provides a longer-term perspective on present ice margin fluctuations and informs how the GrIS may respond to future climate conditions. We focus on mapping and dating past GrIS extents in the Nunatarssuaq region of northwestern Greenland. During the summer of 2014, we conducted geomorphic mapping and collected rock samples for 10Be surface exposure dating as well as subfossil plant samples for 14C dating. We also obtained sediment cores from an ice-proximal lake. Preliminary 10Be ages of boulders deposited during deglaciation of the GrIS subsequent to the Last Glacial Maximum range from ~30-15 ka. The apparently older ages of some samples indicate the presence of 10Be inherited from prior periods of exposure. These ages suggest deglaciation occurred by ~15 ka; however, further data are needed to test this hypothesis. Subfossil plants exposed at the GrIS margin on shear planes date to ~4.6-4.8 cal. ka BP and indicate less extensive ice during middle Holocene time. Additional radiocarbon ages from in situ subfossil plants on a nunatak date to ~3.1 cal. ka BP. Geomorphic mapping of glacial landforms near Nordsø, a large proglacial lake, including grounding lines, moraines, paleo-shorelines, and deltas, indicates the existence of a higher lake level that resulted from a more extensive GrIS margin, likely during Holocene time. A fresh drift limit, characterized by unweathered, lichen-free clasts approximately 30-50 m distal to the modern GrIS margin, is estimated to be late Holocene in age. 10Be dating of samples from these geomorphic features is in progress. Radiocarbon ages of subfossil plants exposed by recent retreat of the GrIS margin suggest that the GrIS was at or behind its present location at AD ~1650-1800 and ~1816-1889. Results thus far indicate that the GrIS margin in northwestern Greenland responded sensitively to Holocene climate changes. Ongoing research will improve the chronological constraints on these fluctuations.

  10. Evolution of the Marginal Ice Zone: Adaptive Sampling with Autonomous Gliders

    DTIC Science & Technology

    2015-09-30

    kinetic energy (ε). Gliders also sampled dissolved oxygen, optical backscatter (chlorophyll and CDOM fluorescence) and multi-spectral downwelling... (Fig. 2). In the pack, Pacific Summer Water and a deep chlorophyll maximum form distinct layers at roughly 60 m and 80 m, respectively, which become... Sections across the ice edge just prior to recovery, during freeze-up, reveal elevated chlorophyll fluorescence throughout the mixed layer (Fig. 4).

  11. Fractional watt Vuillemier cryogenic refrigerator program engineering notebook. Volume 2: Stress analysis

    NASA Technical Reports Server (NTRS)

    Miller, W. S.

    1974-01-01

    A structural analysis was performed on the 1/4-watt cryogenic refrigerator. The analysis covered the complete assembly except for the cooling jacket and mounting brackets. Maximum stresses, margins of safety, and natural frequencies were calculated for structurally loaded refrigerator components shown in assembly drawings. The stress analysis indicates that the design is satisfactory for the specified vibration environment, and the proof, burst, and normal operating loads.

  12. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
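
    A minimal sketch of the construction described: the conditional distribution takes the canonical form p(θ) ∝ exp(-βE(θ)), with β set so that the expected error matches the specified expectation value, and marginals obtained by summing over the other parameters. The grid, error surface and target expectation below are all invented.

      import numpy as np
      from scipy.optimize import brentq

      g1 = np.linspace(0.98, 1.10, 61)       # e.g. seabed sound-speed ratio
      g2 = np.linspace(140.0, 160.0, 81)     # e.g. towed-source level (dB)
      G1, G2 = np.meshgrid(g1, g2, indexing="ij")
      E = (G1 - 1.03) ** 2 / 1e-3 + (G2 - 151.0) ** 2 / 4.0   # synthetic error surface

      def mean_error(beta):                  # expectation of E under exp(-beta*E)
          w = np.exp(-beta * (E - E.min()))
          return (w * E).sum() / w.sum()

      target = 2.0                           # expectation-value constraint (invented)
      beta = brentq(lambda b: mean_error(b) - target, 1e-6, 1e3)   # sensitivity factor
      p = np.exp(-beta * (E - E.min())); p /= p.sum()
      marg_g1 = p.sum(axis=1)                # marginal: integrate out the other parameter
      print(beta, g1[marg_g1.argmax()])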

  13. A comparison of hydraulic architecture in three similarly sized woody species differing in their maximum potential height.

    PubMed

    McCulloh, Katherine A; Johnson, Daniel M; Petitmermet, Joshua; McNellis, Brandon; Meinzer, Frederick C; Lachenbruch, Barbara

    2015-07-01

    The physiological mechanisms underlying the short maximum height of shrubs are not understood. One possible explanation is that differences in the hydraulic architecture of shrubs compared with co-occurring taller trees prevent the shrubs from growing taller. To explore this hypothesis, we examined various hydraulic parameters, including vessel lumen diameter, hydraulic conductivity and vulnerability to drought-induced embolism, of three co-occurring species that differed in their maximum potential height. We examined one species of shrub, one short-statured tree and one taller tree. We worked with individuals that were approximately the same age and height, which was near the maximum for the shrub species. A number of variables correlated with the maximum potential height of the species. For example, vessel diameter and vulnerability to embolism both increased while wood density declined with maximum potential height. The difference between the pressure causing 50% reduction in hydraulic conductance in the leaves and the midday leaf water potential (the leaf's hydraulic safety margin) was much larger in the shrub than the other two species. In general, trends were consistent with understory shrubs having a more conservative life history strategy than co-occurring taller species. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Advanced Power Conditioning System

    NASA Technical Reports Server (NTRS)

    Johnson, N. L.

    1971-01-01

    The second portion of the advanced power conditioning system development program is reported. Five 100-watt parallel power stages with a majority-vote-logic feedback regulator were breadboarded and tested to the design goals. The input voltage range was 22.1 to 57.4 volts at loads from zero to 500 watts. The maximum input ripple current was 200 mA pk-pk (not including spikes) at 511 watts load; the output voltage was 56V dc with a maximum change of 0.89 volts for all variations of line, load, and temperature; the maximum output ripple was 320 mV pk-pk at 512 watts load (dependent on filter capacitance value); the maximum efficiency was 93.9% at 212 watts and 50V dc input; the minimum efficiency was 87.2% at 80-watt load and 50V dc input; the efficiency was above 90% from 102 watts to 372 watts; the maximum excursion for an 80-watt load change was 2.1 volts with a recovery time of 7 milliseconds; and the unit performed within regulation limits from -20 C to +85 C. During the test sequence, margin tests and failure mode tests were run with no resulting degradation in performance.

  15. A study of longitudinal tumor motion in helical tomotherapy using a cylindrical phantom

    PubMed Central

    Klein, Michael; Gaede, Stewart

    2013-01-01

    Tumor motion during radiation treatment on a helical tomotherapy unit may create problems due to interplay with motion of the multileaf collimator, gantry rotation, and patient couch translation through the gantry. This study evaluated this interplay effect for typical clinical parameters using a cylindrical phantom consisting of 1386 diode detectors placed on a respiratory motion platform. All combinations of radiation field widths (1, 2.5, and 5 cm) and gantry rotation periods (16, 30, and 60 s) were considered for sinusoidal motions with a period of 4 s and amplitudes of 5, 6, 7, 8, 9, and 10 mm, as well as a real patient breathing pattern. Gamma comparisons with 2% dose difference and 2 mm distance to agreement and dose profiles were used for evaluation. The required motion margins were determined for each set of parameters. The required margin size increased with decreasing field width and increasing tumor motion amplitude, but was not affected by rotation period. The plans with the smallest field width of 1 cm had required motion margins approximately equal to the amplitude of motion (±25%), while those with the largest field width of 5 cm had required motion margins approximately equal to 20% of the motion amplitude (±20%). For tumor motion amplitudes below 6 mm and field widths above 1 cm, the required additional motion margins were very small, at a maximum of 2.5 mm for sinusoidal breathing patterns and 1.2 mm for the real patient breathing pattern. PACS numbers: 87.55.km, 87.55.Qr, 87.56.Fc

  16. Changes in northeast Atlantic hydrology during Termination 1: Insights from Celtic margin's benthic foraminifera

    NASA Astrophysics Data System (ADS)

    Mojtahid, M.; Toucanne, S.; Fentimen, R.; Barras, C.; Le Houedec, S.; Soulet, G.; Bourillet, J.-F.; Michel, E.

    2017-11-01

    Using benthic foraminiferal-based proxies in sediments from the Celtic margin, we provide a well-dated record across the last deglaciation of the Channel River dynamics and its potential impact on the hydrology of intermediate water masses along the European margin. Our results describe three main periods: 1) During the Last Glacial Maximum, and before ∼21 ka BP, the predominance of meso-oligotrophic species suggests well oxygenated water masses. After ∼21 ka BP, increasing proportions of eutrophic species related to enhanced riverine supply occur concomitantly with early warming in Greenland air-temperatures; 2) A thick laminated deposit, occurring during a 1500-year-long period of seasonal melting of the European Ice Sheet (EIS), is associated with the early Heinrich Stadial 1 period (∼18.2-16.7 ka BP). The benthic proxies describe low salinity episodes, cold temperatures, severe dysoxia and eutrophic conditions on the sea floor, perhaps evidence for cascading of turbid meltwaters; 3) During late HS1 (∼16.7-14.7 ka BP), conditions on the Celtic margin's seafloor changed drastically and faunas indicate oligotrophic conditions as a result of the ceasing of EIS meltwater discharges. While surface waters were cold due to Laurentide Ice Sheet (LIS) iceberg releases, increasing benthic Mg/Ca ratios reveal a progressive warming of intermediate water masses whereas oxygen proxies indicate overall well oxygenated conditions. In addition to the well-known effect of EIS meltwaters on surface waters in the Celtic margin, our benthic record documents a pronounced impact on intermediate water depths during HS1, which coincided with major AMOC disruptions.

  17. Sedimentary record of a fluctuating ice margin from the Pennsylvanian of western Gondwana: Paraná Basin, southern Brazil

    NASA Astrophysics Data System (ADS)

    Vesely, Fernando F.; Trzaskos, Barbara; Kipper, Felipe; Assine, Mario Luis; Souza, Paulo A.

    2015-08-01

    The Paraná Basin is a key locality in the context of the Late Paleozoic Ice Age (LPIA) because of its location east of the Andean proto-margin of Gondwana and west of contiguous interior basins today found in western Africa. In this paper we document the sedimentary record associated with an ice margin that reached the eastern border of the Paraná Basin during the Pennsylvanian, with the aim of interpreting the depositional environments and discussing paleogeographic implications. The examined stratigraphic succession is divided into four stacked facies associations that record an upward transition from subglacial to glaciomarine environments. Deposition took place during deglaciation but was punctuated by minor readvances of the ice margin that deformed the sediment pile. Tillites, well-preserved landforms of subglacial erosion and glaciotectonic deformational structures indicate that the ice flowed to the north and northwest and that the ice margin did not advance far throughout the basin during the glacial maximum. Consequently, time-equivalent glacial deposits that crop out in other localities of the eastern Paraná Basin are better explained by assuming multiple smaller ice lobes instead of one single large glacier. These ice lobes flowed from an ice cap covering uplifted lands now located in western Namibia, where glacial deposits are younger and occur confined within paleovalleys cut onto the Precambrian basement. This conclusion corroborates the idea of a topographically-controlled ice-spreading center in southwestern Africa and does not support the view of a large polar ice sheet controlling deposition in the Paraná Basin during the LPIA.

  18. Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that it contains partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk for quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet, the conjugate prior for the multinomial distribution, was assumed for the prior distribution.
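
    A compact sketch of an EM-style estimator for such data: each partially classified count is split across its admissible cells in proportion to the current probability estimates, and the update uses the Dirichlet posterior mode (with a flat prior this reduces to maximum likelihood; the posterior mean would be handled analogously). All counts and the prior are invented.

      import numpy as np

      full = np.array([30.0, 20.0, 10.0])          # fully classified counts (3 cells)
      partials = [({0, 1}, 15.0), ({1, 2}, 5.0)]   # (cell set, count) partial observations
      alpha = np.ones(3)                           # Dirichlet prior parameters
      K = len(full)

      p = np.ones(K) / K
      for _ in range(200):                         # EM iterations
          counts = full.copy()
          for cells, n in partials:                # E-step: split counts by current p
              idx = sorted(cells)
              counts[idx] += n * p[idx] / p[idx].sum()
          # M-step: Dirichlet posterior mode; flat alpha gives the MLE
          p_new = (counts + alpha - 1) / (counts.sum() + alpha.sum() - K)
          if np.abs(p_new - p).max() < 1e-12:
              break
          p = p_new
      print(p)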

  19. Study of Interaction of Reinforcement with Concrete by Numerical Methods

    NASA Astrophysics Data System (ADS)

    Tikhomirov, V. M.; Samoshkin, A. S.

    2018-01-01

    This paper describes the study of deformation of reinforced concrete. A mathematical model for the interaction of reinforcement with concrete, based on the introduction of a contact layer, whose mechanical characteristics are determined from the experimental data, is developed. The limiting state of concrete is described using the Drucker-Prager theory and the fracture criterion with respect to maximum plastic deformations. A series of problems of the theory of reinforced concrete is solved: stretching of concrete from a centrally reinforced prism and pre-stressing of concrete. It is shown that the results of the calculations are in good agreement with the experimental data.

  20. On the feasibility of detecting extrasolar planets by reflected starlight using the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Brown, Robert A.; Burrows, Christopher J.

    1990-01-01

    The best metrology data extant are presently used to estimate the center and wing point-spread function of the HST, in order to ascertain the implications of an observational criterion according to which a faint source's discovery can occur only when the signal recorded near its image's location is sufficiently larger than would be expected in its absence. After defining the maximum star-planet flux ratio, a figure of merit Q, defined as the contrast ratio between a 'best case' planet and the scattered starlight background, is introduced and shown in the HST's case to be unfavorable for extrasolar planet detection.

  1. Layer-by-layer design method for soft-X-ray multilayers

    NASA Technical Reports Server (NTRS)

    Yamamoto, Masaki; Namioka, Takeshi

    1992-01-01

    A new design method effective for a nontransparent system has been developed for soft-X-ray multilayers with the aid of graphic representation of the complex amplitude reflectance in a Gaussian plane. The method provides an effective means of attaining the absolute maximum reflectance on a layer-by-layer basis and also gives clear insight into the evolution of the amplitude reflectance on a multilayer as it builds up. An optical criterion is derived for the selection of a proper pair of materials needed for designing a high-reflectance multilayer. Some examples are given to illustrate the usefulness of this design method.

  2. On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.

    PubMed

    Yang, Harry; Novick, Steven; Burdick, Richard K

    Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk and approaches of varying statistical rigor are subsequently used for the three-tier quality attributes. Key to the analyses of Tiers 1 and 2 quality attributes is the establishment of the equivalence acceptance criterion and quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σR, where σR is the reference product variability estimated by the sample standard deviation SR from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄R ± K × σR, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation SR underestimates the true reference product variability σR. As a result, substituting SR for σR in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of the original biological drug product. A key component of a biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on application of statistical methods to establish a similarity margin and appropriate test for equivalence between the two products. This paper discusses statistical issues with demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
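
    A minimal numeric sketch of the two constructions named above: the Tier 1 margin 1.5σR (with SR substituted in practice) and the Tier 2 quality range X̄R ± K × σR. Lot values, sample sizes and K are invented, and the Tier 1 line below compares only point estimates; a full Tier 1 analysis would use a TOST confidence interval.

      import numpy as np

      rng = np.random.default_rng(7)
      ref = rng.normal(100.0, 2.0, size=10)    # reference product lots (simulated)
      test = rng.normal(100.5, 2.0, size=8)    # proposed biosimilar lots (simulated)

      S_R = ref.std(ddof=1)                    # stands in for sigma_R, as discussed above
      margin = 1.5 * S_R                       # Tier 1 equivalence acceptance criterion
      diff = test.mean() - ref.mean()
      print(f"Tier 1: |mean diff| = {abs(diff):.2f} vs margin {margin:.2f}")

      K = 3.0                                  # Tier 2 multiplier, to be justified
      lo, hi = ref.mean() - K * S_R, ref.mean() + K * S_R
      inside = np.mean((test >= lo) & (test <= hi))
      print(f"Tier 2: {inside:.0%} of test lots within [{lo:.1f}, {hi:.1f}] (e.g. need 90%)")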

  3. ITER Side Correction Coil Quench model and analysis

    NASA Astrophysics Data System (ADS)

    Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.

    2016-12-01

    Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have provided important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even if the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied, since a quench is likely to be triggered by potential anomalies in joints, ground faults on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable In Conduit Conductor characteristics). All the other coils of the PF cooling loop, which are hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils), are modeled by Flower modules with equivalent hydraulic properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (case 1 and case 2) and two other quenches initiated at the innermost turns of four pancakes (case 3 and case 4). In the 5th case, the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed in the case of an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in the case of a fast current discharge (primary detection by voltage) is compared to the design hot spot criterion of 150 K, which includes the contribution of helium and jacket.

  4. A UNIFIED FRAMEWORK FOR VARIANCE COMPONENT ESTIMATION WITH SUMMARY STATISTICS IN GENOME-WIDE ASSOCIATION STUDIES.

    PubMed

    Zhou, Xiang

    2017-12-01

    Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood (REML) estimation method, suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while producing estimates that can be almost as accurate as if both quantities were computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
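
    For readers unfamiliar with one of the two methods being unified, here is a compact sketch of classic Haseman-Elston regression: regress pairwise phenotype cross-products on the corresponding genetic relatedness entries, whose slope estimates SNP heritability. Sample sizes, genotypes and the true heritability are all simulated, and this is the individual-level form rather than the paper's summary-statistics form.

      import numpy as np

      rng = np.random.default_rng(2)
      n, p = 500, 1000
      G = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # genotypes (0/1/2)
      G = (G - G.mean(0)) / G.std(0)                        # standardize per SNP
      K = G @ G.T / p                                       # genetic relatedness matrix

      h2 = 0.5                                              # true SNP heritability
      beta = rng.normal(0, np.sqrt(h2 / p), size=p)
      y = G @ beta + rng.normal(0, np.sqrt(1 - h2), size=n)
      y = (y - y.mean()) / y.std()

      iu = np.triu_indices(n, k=1)                          # distinct pairs i < j
      # no-intercept regression of y_i*y_j on K_ij: slope estimates h2
      h2_hat = (y[iu[0]] * y[iu[1]] * K[iu]).sum() / (K[iu] ** 2).sum()
      print(h2_hat)                                         # roughly 0.5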

  5. Human rather than ape-like orbital morphology allows much greater lateral visual field expansion with eye abduction

    PubMed Central

    Denion, Eric; Hitier, Martin; Levieil, Eric; Mouriaux, Frédéric

    2015-01-01

    While convergent, the human orbit differs from that of non-human apes in that its lateral orbital margin is significantly more rearward. This rearward position does not obstruct the additional visual field gained through eye motion. This additional visual field is therefore considered to be wider in humans than in non-human apes. A mathematical model was designed to quantify this difference. The mathematical model is based on published computed tomography data in the human neuro-ocular plane (NOP) and on additional anatomical data from 100 human skulls and 120 non-human ape skulls (30 gibbons; 30 chimpanzees / bonobos; 30 orangutans; 30 gorillas). It is used to calculate temporal visual field eccentricity values in the NOP first in the primary position of gaze then for any eyeball rotation value in abduction up to 45° and any lateral orbital margin position between 85° and 115° relative to the sagittal plane. By varying the lateral orbital margin position, the human orbit can be made “non-human ape-like”. In the Pan-like orbit, the orbital margin position (98.7°) was closest to the human orbit (107.1°). This modest 8.4° difference resulted in a large 21.1° difference in maximum lateral visual field eccentricity with eyeball abduction (Pan-like: 115°; human: 136.1°). PMID:26190625

  6. Analysis of seaweed marketing in warbal village, Southeast Maluku Regency, Indonesia

    NASA Astrophysics Data System (ADS)

    Tumiwa, Bruri B.; Renjaan, Meiskyana R.; B. A Somnaikubun, Glen; Betauubun, Kamilius D.; Hungan, Marselus

    2017-10-01

    Seaweed farming in Warbal Village, West Kei Kecil Subdistrict, Southeast Maluku Regency has adequate prospects and business opportunities to give farmers hope of improving their welfare. In practice, however, seaweed farming has not yet provided the better and maximum results desired by the farmers. This study aims to evaluate the marketing channels, marketing margins and profit shares of the marketing agencies. The research is located in Warbal Village, West Kei Kecil Subdistrict, Southeast Maluku Regency, which was determined purposively. The sample consists of 30 farmers taken by simple random sampling, plus 2 wholesaler traders and 2 collector traders taken using the snowball method. The data collection methods are interviews and questionnaires administered directly to farmers and marketing agencies, together with a literature method, i.e. data collected from institutions related to the research aims. The results show that there are two marketing channels: Channel I: farmers, wholesaler traders, collector traders, PAP; Channel II: farmers, collector traders, PAP. The magnitude of the marketing margin differs between the marketing channels, as does the profit share of each marketing agency. In channel I, the margin is IDR 3,250 and the profit shares are 71.11% for farmers, 17.76% for wholesaler traders and 11.09% for collector traders. In channel II, the marketing margin is IDR 1,250 and the profit shares are 88.88% for farmers and 11.09% for collector traders.
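
    A worked arithmetic sketch of the two reported quantities: the marketing margin (the farm-gate to final-buyer price gap) and each agency's profit share (its profit as a fraction of total channel profit). All prices and costs below are hypothetical; only the IDR 1,250 margin is taken from the text, so the resulting shares only approximate the reported 88.88%/11.09% split.

      # buy price, sell price, marketing cost per kg (IDR) -- all hypothetical
      agents = {
          "farmer":           (0,      6_750, 500),
          "collector trader": (6_750,  8_000, 300),
      }
      margin = agents["collector trader"][1] - agents["farmer"][1]   # farm-gate to PAP gap
      profits = {a: sell - buy - cost for a, (buy, sell, cost) in agents.items()}
      total = sum(profits.values())
      shares = {a: round(100 * p / total, 2) for a, p in profits.items()}
      print(margin, shares)   # margin = 1,250 as in channel II; shares ~ 86.8% / 13.2%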

  7. New Binary Systems With Asymmetric Light Curves

    NASA Astrophysics Data System (ADS)

    Virnina, Natalia A.

    2010-12-01

    We present the results of an investigation of the light curves of 27 newly discovered binary systems. Among the examined curves, 10 showed statistically significant asymmetry of the maxima, according to the 3σ criterion for the difference between the maximal brightnesses. Half of these 10 curves have a higher first maximum, the other half a higher second maximum. Two of these 10 curves, USNO-B1.0 1629-0064825 = VSX J052807.9+725606 and USNO-B1.0 1586-0116785, show the largest difference between magnitudes in the maxima. The star VSX J052807.9+725606 also shows a secondary minimum that is shifted from the phase φ = 0.5. The shape of the curve suggests that the physical processes in this star could be close to those of the well-known short-period binary system V361 Lyr, which has a spot on the surface of one star of the system. Another star, USNO-B1.0 1586-0116785, probably has a cold spot, or several spots, in the photosphere of one of the components.

  8. Coefficient of performance and its bounds with the figure of merit for a general refrigerator

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Wei

    2015-02-01

    A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. This model accounts for different heat capacities during the heat transfer processes, so different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. With the maximum χ criterion, in refrigerator cycles such as the reversed Brayton, reversed Otto and reversed Atkinson cycles, where the heat capacity in the heat absorbing process is not less than that in the heat releasing process, the COP is bounded by the CA coefficient of performance; otherwise, as for the reversed Diesel refrigerator cycle, the COP can exceed the CA coefficient of performance. Furthermore, general refined upper and lower bounds have been proposed.
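
    For concreteness, the CA coefficient of performance referred to above is often quoted in this literature in the closed form ε_CA = sqrt(1 + ε_C) - 1, where ε_C = Tc/(Th - Tc) is the Carnot COP; the reservoir temperatures below are illustrative and the formula is stated here as the commonly cited form rather than this paper's derivation.

      import math

      Tc, Th = 270.0, 300.0                  # cold / hot reservoir temperatures (K)
      eps_C = Tc / (Th - Tc)                 # Carnot COP = 9.0
      eps_CA = math.sqrt(1.0 + eps_C) - 1.0  # CA coefficient of performance ~ 2.16
      print(eps_C, eps_CA)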

  9. Local multiplicity adjustment for the spatial scan statistic using the Gumbel distribution.

    PubMed

    Gangnon, Ronald E

    2012-03-01

    The spatial scan statistic is an important and widely used tool for cluster detection. It is based on the simultaneous evaluation of the statistical significance of the maximum likelihood ratio test statistic over a large collection of potential clusters. In most cluster detection problems, there is variation in the extent of local multiplicity across the study region. For example, using a fixed maximum geographic radius for clusters, urban areas typically have many overlapping potential clusters, whereas rural areas have relatively few. The spatial scan statistic does not account for local multiplicity variation. We describe a previously proposed local multiplicity adjustment based on a nested Bonferroni correction and propose a novel adjustment based on a Gumbel distribution approximation to the distribution of a local scan statistic. We compare the performance of all three statistics in terms of power and a novel unbiased cluster detection criterion. These methods are then applied to the well-known New York leukemia dataset and a Wisconsin breast cancer incidence dataset. © 2011, The International Biometric Society.
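
    A minimal sketch of the Gumbel-based local adjustment described above: approximate each location's null distribution of the local scan statistic by a Gumbel fit to Monte Carlo replicates, then compare locally adjusted p-values. The null draws below are synthetic stand-ins for replicates generated under the null.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # null local maxima for two locations with different local multiplicity
      null_urban = rng.gumbel(loc=8.0, scale=1.2, size=999)
      null_rural = rng.gumbel(loc=5.0, scale=1.0, size=999)

      def gumbel_p(observed, null_draws):
          loc, scale = stats.gumbel_r.fit(null_draws)      # fit Gumbel to null replicates
          return stats.gumbel_r.sf(observed, loc, scale)   # upper-tail p-value

      # same observed statistic, very different local significance
      print(gumbel_p(10.5, null_urban), gumbel_p(10.5, null_rural))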

  10. Controlling a rabbet load and air/oil seal temperatures in a turbine

    DOEpatents

    Schmidt, Mark Christopher

    2002-01-01

    During a standard fired shutdown of a turbine, a loaded rabbet joint between the fourth stage wheel and the aft shaft of the machine can become unloaded causing a gap to occur due to a thermal mismatch at the rabbet joint with the bearing blower turned on. An open or unloaded rabbet could cause the parts to move relative to each other and therefore cause the rotor to lose balance. If the bearing blower is turned off during a shutdown, the forward air/oil seal temperature may exceed maximum design practice criterion due to "soak-back." An air/oil seal temperature above the established maximum design limits could cause a bearing fire to occur, with catastrophic consequences to the machine. By controlling the bearing blower according to an optimized blower profile, the rabbet load can be maintained, and the air/oil seal temperature can be maintained below the established limits. A blower profile is determined according to a thermodynamic model of the system.

  11. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms the traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
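
    A toy sketch of the minimum-entropy autofocus idea used above: grid-search a quadratic phase-error coefficient and keep the value whose compressed image has the lowest entropy. The raw data, the single-scatterer scene and the quadratic error model are all invented; the paper's joint Bayesian estimation is far more elaborate.

      import numpy as np

      def image_entropy(img):
          p = np.abs(img) ** 2
          p = p / p.sum()
          p = p[p > 0]
          return -(p * np.log(p)).sum()

      rng = np.random.default_rng(5)
      M, N = 64, 32                                  # slow-time pulses x range bins
      m = np.arange(M) - M / 2
      target = np.outer(np.exp(2j * np.pi * 0.2 * np.arange(M)), np.ones(N))
      raw = ((target.T * np.exp(1j * 0.002 * m ** 2)).T          # quadratic phase error
             + 0.05 * rng.standard_normal((M, N)))               # additive noise

      def entropy_for(a):                            # entropy after removing exp(1j*a*m^2)
          corrected = (raw.T * np.exp(-1j * a * m ** 2)).T
          return image_entropy(np.fft.fft(corrected, axis=0))

      best = min(np.linspace(-0.004, 0.004, 81), key=entropy_for)
      print(f"estimated quadratic phase coefficient ~ {best:.4f} (true 0.0020)")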

  12. Prediction based proactive thermal virtual machine scheduling in green clouds.

    PubMed

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings, such as relatively high operating cost and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
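
    A minimal sketch of the proactive rule described above: predict each server's temperature after placing the VM and schedule only where the prediction stays below that machine's maximum threshold. The linear predictor and all numbers are stand-ins for the paper's temperature predictor.

      def predict_temp(current_temp, vm_load, degrees_per_load=0.5):
          # hypothetical predictor: temperature rises linearly with added load
          return current_temp + degrees_per_load * vm_load

      def schedule(vm_load, servers):
          """servers: list of (name, current_temp, max_temp); returns chosen name."""
          ok = [(name, predict_temp(t, vm_load)) for name, t, tmax in servers
                if predict_temp(t, vm_load) < tmax]
          if not ok:
              return None                           # defer: every placement would overheat
          return min(ok, key=lambda nt: nt[1])[0]   # coolest predicted machine

      servers = [("sm1", 62.0, 70.0), ("sm2", 55.0, 70.0), ("sm3", 68.0, 70.0)]
      print(schedule(vm_load=12.0, servers=servers))   # -> "sm2"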

  13. Comparative evaluation of marginal leakage of provisional crowns cemented with different temporary luting cements: In vitro study

    PubMed Central

    Arora, Sheen Juneja; Arora, Aman; Upadhyaya, Viram; Jain, Shilpi

    2016-01-01

    Background or Statement of Problem: As the longevity of provisional restorations is related to a perfect adaptation and a strong, long-term union between restoration and tooth structures, evaluation of the marginal leakage of provisional restorative materials luted with cements using standardized procedures is essential. Aims and Objectives: To compare the marginal leakage of provisional crowns fabricated from autopolymerizing acrylic resin and bisphenol A-glycidyl dimethacrylate (BIS-GMA) resin. To compare the marginal leakage of provisional crowns fabricated from autopolymerizing acrylic resin and BIS-GMA resin cemented with different temporary luting cements. To compare the marginal leakage of provisional crowns fabricated from autopolymerizing acrylic resin (SC-10) cemented with different temporary luting cements. To compare the marginal leakage of provisional crowns fabricated from BIS-GMA resin (Protemp 4) cemented with different temporary luting cements. Methodology: Sixty freshly extracted maxillary premolars of approximately similar dimensions were mounted in dental plaster. Tooth reduction with a shoulder margin was performed using a customized handpiece-holding jig. Following tooth preparation, provisional crowns were prepared using a wax pattern fabricated on a computer-aided design/computer-aided manufacturing milling machine. Sixty provisional crowns were made, thirty each of SC-10 and Protemp 4, and were then cemented with three different luting cements. Specimens were thermocycled, submerged in a 2% methylene blue solution, then sectioned and observed under a stereomicroscope for the evaluation of marginal microleakage. A five-level scale was used to score dye penetration at the tooth/cement interface, and the results were analyzed using the Chi-square test, Mann–Whitney U-test and Kruskal–Wallis H-test; results were considered statistically significant at P < 0.05, with a study power of 80%. Results: Marginal leakage was significant in both provisional crown types cemented with the three different luting cements along the axial walls of the teeth (P < 0.05; 95% confidence interval). Conclusion: The temporary cements with eugenol showed more microleakage than those without eugenol. SC-10 crowns showed more microleakage compared to Protemp 4 crowns. SC-10 crowns cemented with Kalzinol showed the maximum microleakage and Protemp 4 crowns cemented with HY bond showed the least microleakage. PMID:27134427

  14. CARES - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.

    1994-01-01

    The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES calculates the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings. The program uses results from a commercial structural analysis program (MSC/NASTRAN or ANSYS) to evaluate component reliability due to inherent surface and/or volume type flaws. A multiple material capability allows the finite element model reliability to be a function of many different ceramic material statistical characterizations. The reliability analysis uses element stress, temperature, area, and volume output, which are obtained from two dimensional shell and three dimensional solid isoparametric or axisymmetric finite elements. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multi-axial stress states on material strength. The shear-sensitive Batdorf model requires a user-selected flaw geometry and a mixed-mode fracture criterion. Flaws intersecting the surface and imperfections embedded in the volume can be modeled. The total strain energy release rate theory is used as a mixed mode fracture criterion for co-planar crack extension. Out-of-plane crack extension criteria are approximated by a simple equation with a semi-empirical constant that can model the maximum tangential stress theory, the minimum strain energy density criterion, the maximum strain energy release rate theory, or experimental results. For comparison, Griffith's maximum tensile stress theory, the principle of independent action, and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. A more limited program, CARES/PC (COSMIC number LEW-15248) runs on a personal computer and estimates ceramic material properties from three-point bend bar data. CARES/PC does not perform fast fracture reliability estimation. CARES is written in FORTRAN 77 and has been implemented on DEC VAX series computers under VMS and on IBM 370 series computers under VM/CMS. On a VAX, CARES requires 10Mb of main memory. Five MSC/NASTRAN example problems and two ANSYS example problems are provided. There are two versions of CARES supplied on the distribution tape, CARES1 and CARES2. CARES2 contains sub-elements and CARES1 does not. CARES is available on a 9-track 1600 BPI VAX FILES-11 format magnetic tape (standard media) or in VAX BACKUP format on a TK50 tape cartridge. The program requires a FORTRAN 77 compiler and about 12Mb memory. CARES was developed in 1990. DEC, VAX and VMS are trademarks of Digital Equipment Corporation. IBM 370 is a trademark of International Business Machines. 
MSC/NASTRAN is a trademark of MacNeal-Schwendler Corporation. ANSYS is a trademark of Swanson Analysis Systems, Inc.
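
    As a rough illustration of the statistical model underlying such codes, here is the basic two-parameter Weibull weakest-link combination for volume flaws; CARES' Batdorf-based multiaxial treatment is considerably richer, and all stresses, volumes and parameters below are invented.

      import numpy as np

      m, sigma0 = 10.0, 400.0                    # Weibull modulus; unit-volume characteristic strength (MPa)
      stress = np.array([180.0, 220.0, 150.0])   # element principal stresses (MPa)
      volume = np.array([2.0, 1.0, 3.0])         # element volumes (mm^3)

      pf_elem = 1.0 - np.exp(-volume * (stress / sigma0) ** m)   # per-element failure probability
      pf = 1.0 - np.prod(1.0 - pf_elem)                          # weakest-link (series) combination
      print(pf_elem, pf)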

  15. Antipodal hotspot pairs on the earth

    NASA Technical Reports Server (NTRS)

    Rampino, Michael R.; Caldeira, Ken

    1992-01-01

    The results of statistical analyses performed on three published hotspot distributions suggest that significantly more hotspots occur as nearly antipodal pairs than is anticipated from a random distribution, or from their association with geoid highs and divergent plate margins. The observed number of antipodal hotspot pairs depends on the maximum allowable deviation from exact antipodality. At a maximum deviation of no greater than 700 km, 26 to 37 percent of hotspots form antipodal pairs in the published lists examined here, significantly more than would be expected from the general hotspot distribution. Two possible mechanisms that might create such a distribution include: (1) symmetry in the generation of mantle plumes, and (2) melting related to antipodal focusing of seismic energy from large-body impacts.
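
    A toy sketch of the antipodality test: for each hotspot pair, measure the great-circle distance between one hotspot and the antipode of the other, and count pairs within the 700 km tolerance. The coordinates below are invented; a real analysis would use a published hotspot catalog.

      import numpy as np

      R = 6371.0                                   # Earth radius, km

      def gc_dist(lat1, lon1, lat2, lon2):
          p1, p2 = np.radians(lat1), np.radians(lat2)
          dlon = np.radians(lon2 - lon1)
          return R * np.arccos(np.clip(np.sin(p1) * np.sin(p2) +
                                       np.cos(p1) * np.cos(p2) * np.cos(dlon), -1, 1))

      hotspots = [(19.4, -155.3), (-19.0, 29.0), (64.0, -17.0), (-65.0, 167.0)]
      # antipode of (lat, lon) is (-lat, lon + 180); keep pairs within 700 km of it
      pairs = [(i, j) for i in range(len(hotspots)) for j in range(i + 1, len(hotspots))
               if gc_dist(hotspots[i][0], hotspots[i][1],
                          -hotspots[j][0], hotspots[j][1] + 180.0) <= 700.0]
      print(pairs)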

  16. Validity of Various Methods for Determining Velocity, Force, and Power in the Back Squat.

    PubMed

    Banyard, Harry G; Nosaka, Ken; Sato, Kimitake; Haff, G Gregory

    2017-10-01

    This study examined the validity of 2 kinematic systems for assessing mean velocity (MV), peak velocity (PV), mean force (MF), peak force (PF), mean power (MP), and peak power (PP) during the full-depth free-weight back squat performed with maximal concentric effort. Ten strength-trained men (26.1 ± 3.0 y, 1.81 ± 0.07 m, 82.0 ± 10.6 kg) performed three 1-repetition-maximum (1RM) trials on 3 separate days, encompassing lifts performed at 6 relative intensities including 20%, 40%, 60%, 80%, 90%, and 100% of 1RM. Each repetition was simultaneously recorded by a PUSH band and commercial linear position transducer (LPT) (GymAware [GYM]) and compared with measurements collected by a laboratory-based testing device consisting of 4 LPTs and a force plate. Trials 2 and 3 were used for validity analyses. Combining all 120 repetitions indicated that the GYM was highly valid for assessing all criterion variables while the PUSH was only highly valid for estimations of PF (r = .94, CV = 5.4%, ES = 0.28, SEE = 135.5 N). At each relative intensity, the GYM was highly valid for assessing all criterion variables except for PP at 20% (ES = 0.81) and 40% (ES = 0.67) of 1RM. Moreover, the PUSH was only able to accurately estimate PF across all relative intensities (r = .92-.98, CV = 4.0-8.3%, ES = 0.04-0.26, SEE = 79.8-213.1 N). PUSH accuracy for determining MV, PV, MF, MP, and PP across all 6 relative intensities was questionable for the back squat, yet the GYM was highly valid at assessing all criterion variables, with some caution given to estimations of MP and PP performed at lighter loads.

  17. An evaluation of freshwater mussel toxicity data in the derivation of water quality guidance and standards for copper

    USGS Publications Warehouse

    March, F.A.; Dwyer, F.J.; Augspurger, T.; Ingersoll, C.G.; Wang, N.; Mebane, C.A.

    2007-01-01

    The state of Oklahoma has designated several areas as freshwater mussel sanctuaries in an attempt to provide freshwater mussel species a degree of protection and to facilitate their reproduction. We evaluated the protection afforded freshwater mussels by the U.S. Environmental Protection Agency (U.S. EPA) hardness-based 1996 ambient copper water quality criteria, the 2007 U.S. EPA water quality criteria based on the biotic ligand model and the 2005 state of Oklahoma copper water quality standards. Both the criterion maximum concentration and criterion continuous concentration were evaluated. Published acute and chronic copper toxicity data that met American Society for Testing and Materials guidance for test acceptability were obtained for exposures conducted with glochidia or juvenile freshwater mussels. We tabulated toxicity data for glochidia and juveniles to calculate 20 species mean acute values for freshwater mussels. Generally, freshwater mussel species mean acute values were similar to those of the more sensitive species included in the U.S. EPA water quality derivation database. When added to the database of genus mean acute values used in deriving 1996 copper water quality criteria, 14 freshwater mussel genus mean acute values included 10 of the lowest 15 genus mean acute values, with three mussel species having the lowest values. Chronic exposure and sublethal effects freshwater mussel data available for four species and acute to chronic ratios were used to evaluate the criterion continuous concentration. On the basis of the freshwater mussel toxicity data used in this assessment, the hardness-based 1996 U.S. EPA water quality criteria, the 2005 Oklahoma water quality standards, and the 2007 U.S. EPA water quality criteria based on the biotic ligand model might need to be revised to afford protection to freshwater mussels. © 2007 SETAC.

  18. Great earthquakes along the Western United States continental margin: implications for hazards, stratigraphy and turbidite lithology

    NASA Astrophysics Data System (ADS)

    Nelson, C. H.; Gutiérrez Pastor, J.; Goldfinger, C.; Escutia, C.

    2012-11-01

    We summarize the importance of great earthquakes (Mw ≳ 8) for hazards, stratigraphy of basin floors, and turbidite lithology along the active tectonic continental margins of the Cascadia subduction zone and the northern San Andreas Transform Fault by utilizing studies of swath bathymetry, visual core descriptions, grain-size analysis, X-ray radiographs and physical properties. Recurrence times of Holocene turbidites as proxies for earthquakes on the Cascadia and northern California margins are analyzed using two methods: (1) radiometric dating (14C method), and (2) relative dating, using hemipelagic sediment thickness and sedimentation rates (H method). The H method provides (1) the best estimate of minimum recurrence times, which are the most important for seismic hazards risk analysis, and (2) the most complete dataset of recurrence times, which shows a normal distribution pattern for paleoseismic turbidite frequencies. We observe that, on these tectonically active continental margins, during the sea-level highstand of Holocene time, triggering of turbidity currents is controlled dominantly by earthquakes, and paleoseismic turbidites have an average recurrence time of ~550 yr in northern Cascadia Basin and ~200 yr along the northern California margin. The minimum recurrence times for great earthquakes are approximately 300 yr for the Cascadia subduction zone and 130 yr for the northern San Andreas Fault, which indicates that both fault systems are within (Cascadia) or very close to (San Andreas) the early window for another great earthquake. On active tectonic margins with great earthquakes, the volumes of mass transport deposits (MTDs) are limited on basin floors along the margins. The maximum run-out distances of MTD sheets across abyssal-basin floors along active margins are an order of magnitude less (~100 km) than on passive margins (~1000 km). The great earthquakes along the Cascadia and northern California margins cause seismic strengthening of the sediment, which results in a margin stratigraphy of minor MTDs compared to the turbidite-system deposits. In contrast, the MTDs and turbidites are equally intermixed on basin floors along passive margins with a mud-rich continental slope, such as the northern Gulf of Mexico. Great earthquakes also result in characteristic seismo-turbidite lithology. Along the Cascadia margin, the number and character of multiple coarse pulses for correlative individual turbidites generally remain constant both upstream and downstream in different channel systems for 600 km along the margin. This suggests that the earthquake shaking or aftershock signature is normally preserved for the stronger (Mw ≥ 9) Cascadia earthquakes. In contrast, the generally weaker (Mw ≤ 8) California earthquakes result in upstream simple fining-up turbidites in single tributary canyons and channels; however, downstream mainly stacked turbidites result from synchronously triggered multiple turbidity currents that deposit in channels below confluences of the tributaries. Consequently, both downstream channel confluences and the strongest (Mw ≥ 9) great earthquakes contribute to multi-pulsed and stacked turbidites that are typical for seismo-turbidites generated by a single great earthquake. Earthquake triggering and multi-pulsed or stacked turbidites also become an alternative explanation for amalgamated turbidite beds in active tectonic margins, in addition to other classic explanations.
The sedimentologic characteristics of turbidites triggered by great earthquakes along the Cascadia and northern California margins provide criteria to help distinguish seismo-turbidites in other active tectonic margins.

  19. The dynamics of continental breakup-related magmatism on the Norwegian volcanic margin

    NASA Astrophysics Data System (ADS)

    Breivik, A. J.; Faleide, J. I.; Mjelde, R.

    2007-12-01

    The Vøring margin off mid-Norway was initiated during the earliest Eocene (~54 Ma), and large volumes of magmatic rocks were emplaced during and after continental breakup. In 2003, an ocean bottom seismometer survey was acquired on the Norwegian margin to constrain continental breakup and early seafloor spreading processes. The profile P-wave model described here crosses the northern part of the Vøring Plateau. Maximum igneous crustal thickness was found to be 18 km, decreasing to ~6.5 km over ~6 M.y. after continental breakup. Both the volume and the duration of excess magmatism after breakup are about twice what is observed off the Møre Margin south of the Jan Mayen Fracture Zone, which offsets the margin segments by ~170 km. A similar reduction in magmatism occurs to the north over an along-margin distance of ~100 km to the Lofoten margin, but without a margin offset. There is a strong correlation between magma productivity and early plate spreading rate, which are highest just after breakup, falling with time. This is seen both at the Møre and the Vøring margin segments, suggesting a common cause. A model for the breakup-related magmatism should be able to explain (1) this correlation, (2) the magma production peak at breakup, and (3) the magmatic segmentation. Proposed end-member hypotheses are elevated upper-mantle temperatures caused by a hot mantle plume, or edge-driven small-scale convection fluxing mantle rocks through the melt zone. Both the average P-wave velocity and the major-element data at the Vøring margin indicate a low degree of melting consistent with convection. However, small scale convection does not easily explain the issues listed above. An elaboration of the mantle plume model by N. Sleep, in which buoyant plume material fills the rift-topography at the base of the lithosphere, can explain these: When the continents break apart, the buoyant plume-material flows up into the rift zone, causing excess magmatism by both elevated temperature and excess flux, and magmatism dies off as this rift-restricted material is spent. The buoyancy of the plume-material also elevates the plate boundaries and enhances plate spreading forces initially. The rapid drop in magma productivity to the north correlates with the northern boundary of the wide and deep Cretaceous Vøring Basin, thus less plume material was accommodated off Lofoten. This model predicts that the magma segmentation will show little variation in the geochemical signature.
