User's Manual for AeroFcn: a FORTRAN Program to Compute Aerodynamic Parameters
NASA Technical Reports Server (NTRS)
Conley, Joseph L.
1992-01-01
The computer program AeroFcn is discussed. AeroFcn is a utility program that computes the following aerodynamic parameters: geopotential altitude, Mach number, true velocity, dynamic pressure, calibrated airspeed, equivalent airspeed, impact pressure, total pressure, total temperature, Reynolds number, speed of sound, static density, static pressure, static temperature, coefficient of dynamic viscosity, kinematic viscosity, geometric altitude, and specific energy for a standard- or a modified standard-day atmosphere using compressible flow and normal shock relations. Any two parameters that define a unique flight condition are selected, and their values are entered interactively. The remaining parameters are computed, and the solutions are stored in an output file. Multiple cases can be run, and the multiple case solutions can be stored in another output file for plotting. Parameter units, the output format, and primary constants in the atmospheric and aerodynamic equations can also be changed.
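As a hedged illustration of the kind of computation the abstract describes (a Python sketch, not the FORTRAN program itself), the example below derives a few of the listed parameters from one pair of inputs, geopotential altitude and Mach number, using standard-day ISA and isentropic-flow relations; the constants, function name and sample inputs are my own choices.

```python
import math

# Minimal ISA (troposphere, 0-11 km) and compressible-flow sketch: given
# geopotential altitude h [m] and Mach number M, derive a few of the
# parameters AeroFcn reports. Constants are standard-atmosphere values.
T0, P0, L, R, GAMMA, G0 = 288.15, 101325.0, 0.0065, 287.05287, 1.4, 9.80665

def standard_day(h_m, mach):
    T = T0 - L * h_m                          # static temperature [K]
    p = P0 * (T / T0) ** (G0 / (R * L))       # static pressure [Pa]
    rho = p / (R * T)                         # static density [kg/m^3]
    a = math.sqrt(GAMMA * R * T)              # speed of sound [m/s]
    v_true = mach * a                         # true velocity [m/s]
    q = 0.5 * rho * v_true ** 2               # dynamic pressure [Pa]
    # isentropic total (stagnation) pressure and temperature for subsonic flow
    pt = p * (1 + 0.5 * (GAMMA - 1) * mach ** 2) ** (GAMMA / (GAMMA - 1))
    Tt = T * (1 + 0.5 * (GAMMA - 1) * mach ** 2)
    return {"T": T, "p": p, "rho": rho, "a": a, "V": v_true,
            "q": q, "p_total": pt, "T_total": Tt}

print(standard_day(5000.0, 0.6))
```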
Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin
2016-12-05
Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?
La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim
2014-10-01
Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved the F1717 standard for the assessment of mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies with polyethylene blocks. This set-up is intended to represent clinical use, but available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or measurements on biplanar stereoradiography in 13 patients. Other mechanical variables describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents quite well the anatomy of an instrumented average thoracolumbar segment. The stress on the pedicular screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst-case condition we found at the L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.
The case for regime-based water quality standards
Poole, Geoffrey C.; Dunham, J.B.; Keenan, D.M.; Sauter, S.T.; McCullough, D.A.; Mebane, Christopher; Lockwood, Jeffrey C.; Essig, Don A.; Hicks, Mark P.; Sturdevant, Debra J.; Materna, E.J.; Spalding, M.; Risley, John; Deppman, Marianne
2004-01-01
Conventional water quality standards have been successful in reducing the concentration of toxic substances in US waters. However, conventional standards are based on simple thresholds and are therefore poorly structured to address human-caused imbalances in dynamic, natural water quality parameters, such as nutrients, sediment, and temperature. A more applicable type of water quality standard, a "regime standard," would describe desirable distributions of conditions over space and time within a stream network. By mandating the protection and restoration of the aquatic ecosystem dynamics that are required to support beneficial uses in streams, well-designed regime standards would facilitate more effective strategies for management of natural water quality parameters.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
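The Python sketch below illustrates the general idea of examining parameter mean square error as a function of the ridge parameter; it uses a synthetic linear model and ridge-type estimates that shrink toward prior values, and is not Cooley's formulation or data.

```python
import numpy as np

# Hedged sketch: mean square error of ridge-type estimates versus the ridge
# parameter k, for a synthetic linear model standing in for a linearized
# groundwater flow model.
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.5, -0.3, 0.8])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
beta_prior = beta_true + rng.normal(scale=0.2, size=p)  # prior info of limited reliability

for k in [0.0, 0.1, 1.0, 10.0]:
    # ridge estimate shrinking toward the prior values rather than zero;
    # k = 0 recovers ordinary least squares
    A = X.T @ X + k * np.eye(p)
    b = X.T @ y + k * beta_prior
    beta_hat = np.linalg.solve(A, b)
    mse = np.mean((beta_hat - beta_true) ** 2)
    print(f"k = {k:5.1f}   MSE of parameter estimates = {mse:.4f}")
```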
Impact of operator on determining functional parameters of nuclear medicine procedures.
Mohammed, A M; Naddaf, S Y; Mahdi, F S; Al-Mutawa, Q I; Al-Dossary, H A; Elgazzar, A H
2006-01-01
The study was designed to assess the significance of the interoperator variability in the estimation of functional parameters for four nuclear medicine procedures. Three nuclear medicine technologists with varying years of experience processed the following randomly selected 20 cases with diverse functions of each study type: renography, renal cortical scans, myocardial perfusion gated single-photon emission computed tomography (MP-GSPECT) and gated blood pool ventriculography (GBPV). The technologists used the same standard processing routines and were blinded to the results of each other. The means of the values and the means of differences calculated case by case were statistically analyzed by one-way ANOVA. The values were further analyzed using Pearson correlation. The range of the mean values and standard deviation of relative renal function obtained by the three technologists were 50.65 +/- 3.9 to 50.92 +/- 4.4% for renography, 51.43 +/- 8.4 to 51.55 +/- 8.8% for renal cortical scans, 57.40 +/- 14.3 to 58.30 +/- 14.9% for left ventricular ejection fraction from MP-GSPECT and 54.80 +/- 12.8 to 55.10 +/- 13.1% for GBPV. The difference was not statistically significant, p > 0.9. The values showed a high correlation of more than 0.95. Calculated case by case, the mean of differences +/- SD was found to range from 0.42 +/- 0.36% in renal cortical scans to 1.35 +/- 0.87% in MP-GSPECT with a maximum difference of 4.00%. The difference was not statistically significant, p > 0.19. The estimated functional parameters were reproducible and operator independent as long as the standard processing instructions were followed. Copyright 2006 S. Karger AG, Basel.
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
Reliability of engineering methods of assessment the critical buckling load of steel beams
NASA Astrophysics Data System (ADS)
Rzeszut, Katarzyna; Folta, Wiktor; Garstecki, Andrzej
2018-01-01
In this paper the reliability assessment of the buckling resistance of steel beams is presented. A number of parameters are considered, such as the boundary conditions, the section height-to-width ratio, the thickness and the span. The examples are solved using FEM procedures and formulas proposed in the literature and standards. In the case of the numerical models the following parameters are investigated: support conditions, mesh size, load conditions and steel grade. The numerical results are compared with approximate solutions calculated according to the standard formulas. It was observed that for high-slenderness sections the deformation of the cross-section had to be described by the following modes: longitudinal and transverse displacement, warping, rotation and distortion of the cross-section shape. In this case we face an interactive buckling problem. Unfortunately, neither the EN Standards nor the subject literature give closed-form formulas to solve these problems. For this reason the reliability of the critical bending moment calculations is discussed.
Stepaniak, Pieter S; Heij, Christiaan; Mannaerts, Guido H H; de Quelerij, Marcel; de Vries, Guus
2009-10-01
Gains in operating room (OR) scheduling may be obtained by using accurate statistical models to predict surgical and procedure times. The 3 main contributions of this article are the following: (i) the validation of Strum's results on the statistical distribution of case durations, including surgeon effects, using OR databases of 2 European hospitals, (ii) the use of expert prior expectations to predict durations of rarely observed cases, and (iii) the application of the proposed methods to predict case durations, with an analysis of the resulting increase in OR efficiency. We retrospectively reviewed all recorded surgical cases of 2 large European teaching hospitals from 2005 to 2008, involving 85,312 cases and 92,099 h in total. Surgical times tended to be skewed and bounded by some minimally required time. We compared the fit of the normal distribution with that of 2- and 3-parameter lognormal distributions for case durations of a range of Current Procedural Terminology (CPT)-anesthesia combinations, including possible surgeon effects. For cases with very few observations, we investigated whether supplementing the data information with surgeons' prior guesses helps to obtain better duration estimates. Finally, we used the best-fitting duration distributions to simulate the potential efficiency gains in OR scheduling. The 3-parameter lognormal distribution provides the best results for the case durations of CPT-anesthesia (surgeon) combinations, with an acceptable fit for almost 90% of the CPTs when segmented by the factor surgeon. The fit is best for surgical times and somewhat less good for total procedure times. Surgeons' prior guesses are helpful for OR management to improve duration estimates of CPTs with very few (<10) observations. Compared with the standard way of scheduling using the mean, case scheduling with the 3-parameter lognormal distribution reduces the mean overreserved OR time per case by up to 11.9 (11.8-12.0) min (55.6%) and the mean underreserved OR time per case by up to 16.7 (16.5-16.8) min (53.1%). When scheduling cases using the 3-parameter lognormal model, the mean overutilized OR time is up to 20.0 (19.7-20.3) min per OR per day lower than for the standard method and 11.6 (11.3-12.0) min per OR per day lower than with the bias-corrected mean. OR case scheduling can be improved by using the 3-parameter lognormal model with surgeon effects and by using surgeons' prior guesses for rarely observed CPTs. Using the 3-parameter lognormal model for case-duration prediction and scheduling significantly reduces both the prediction error and OR inefficiency.
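A minimal Python sketch of the central fitting step, comparing normal, 2-parameter and 3-parameter lognormal fits on invented case durations; scipy's lognorm fit with a free location parameter plays the role of the shifted, 3-parameter model, and the 80th-percentile reservation rule is illustrative only.

```python
import numpy as np
from scipy import stats

# Hedged sketch: fit normal, 2-parameter and 3-parameter lognormal models to
# case durations (minutes) for one hypothetical CPT-surgeon combination and
# compare the fits; the data below are made up for illustration.
durations = np.array([48, 55, 62, 65, 70, 72, 78, 85, 90, 98, 110, 135], float)

normal_fit = stats.norm.fit(durations)
ln2_fit = stats.lognorm.fit(durations, floc=0)   # 2-parameter (no shift)
ln3_fit = stats.lognorm.fit(durations)           # 3-parameter (shift = minimally required time)

for name, dist, params in [("normal", stats.norm, normal_fit),
                           ("lognormal-2p", stats.lognorm, ln2_fit),
                           ("lognormal-3p", stats.lognorm, ln3_fit)]:
    loglik = np.sum(dist.logpdf(durations, *params))
    print(f"{name:13s} log-likelihood = {loglik:8.2f}  params = {params}")

# A schedule reservation could then use a quantile of the best-fitting model,
# e.g. the 80th percentile, instead of the sample mean:
print("suggested reservation:", stats.lognorm.ppf(0.8, *ln3_fit), "min")
```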
The conception of fashion products for children: reflections on safety parameters.
Prete, Lígia Gomes Pereira; Emidio, Lucimar de Fátima Bilmaia; Martins, Suzana Barreto
2012-01-01
The purpose of this study is to reflect on safety requirements for children's clothing, based on the standardization proposed by the ABNT (Brazilian Technical Standards Association). Bibliographic research and case studies were used in preparing this work. We also discuss the importance of adding other safety requirements to the current standardization, as well as extending the age range currently specified by the ABNT, following the children's clothing safety standards of Portugal and the United States, which are also presented here.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Haag, Nicole
2017-01-01
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
NASA Astrophysics Data System (ADS)
Rees, Sian; Dobre, George
2014-01-01
When using scanning laser ophthalmoscopy to produce images of the eye fundus, maximum permissible exposure (MPE) limits must be considered. These limits are set out in international standards such as the American National Standards Institute's ANSI Z136.1 Safe Use of Lasers (USA) and BS EN 60825-1: 1994 (UK) and corresponding Euro norms, but these documents do not explicitly consider the case of scanned beams. Our study aims to show how MPE values can be calculated for the specific case of retinal scanning by taking into account an array of parameters, such as wavelength, exposure duration, type of scanning, line rate and field size, and how each set of initial parameters results in MPE values that correspond to thermal or photochemical damage to the retina.
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
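The following Python sketch shows the generic version of the uncertainty estimate mentioned above, standard deviations obtained from the curvature of the log-likelihood at its maximum; a simple Gaussian model stands in for the far more involved two-state photon likelihood, and all numbers are invented.

```python
import numpy as np
from scipy import optimize

# Hedged illustration of the general idea: parameter standard deviations
# estimated from the curvature (Hessian) of the log-likelihood at its maximum.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.7, scale=0.1, size=500)   # stand-in observations

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (data - mu)**2 / (2 * sigma**2))

res = optimize.minimize(neg_loglik, x0=[0.5, np.log(0.2)])

def numerical_hessian(f, x, eps=1e-4):
    # central-difference Hessian of a scalar function f at point x
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.zeros(n), np.zeros(n)
            e_i[i], e_j[j] = eps, eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

H = numerical_hessian(neg_loglik, res.x)
cov = np.linalg.inv(H)                 # inverse curvature ~ covariance of the estimates
print("estimates:", res.x, "standard deviations:", np.sqrt(np.diag(cov)))
```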
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
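A hedged sketch of the standardization step, a simple linear regression between reconstructed and true SUR values fitted over the whole dataset and then inverted for individual studies; all numbers are invented for illustration.

```python
import numpy as np

# Hedged sketch of SUR standardization by a simple linear regression line,
# as described above; the arrays below are not the study's data.
sur_true = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5])
sur_reconstructed = np.array([0.7, 1.9, 3.1, 4.2, 5.4, 6.6])   # biased by partial volume, scatter, etc.

slope, intercept = np.polyfit(sur_true, sur_reconstructed, 1)

def standardize(sur_measured):
    # invert the regression line to recover an estimate of the true SUR
    return (sur_measured - intercept) / slope

errors = standardize(sur_reconstructed) - sur_true
print("standardized-minus-true SUR:", np.round(errors, 3))
```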
Standardization of pitch-range settings in voice acoustic analysis.
Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C
2009-05-01
Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
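The sketch below illustrates the idea of preset, sex-specific pitch-range parameters in an automated analysis, using a plain autocorrelation F0 estimate on a synthetic signal; the window values and the implementation are assumptions, not the scripts used in the study.

```python
import numpy as np

# Hedged sketch: estimate F0 by autocorrelation, restricting the lag search to
# a preset, sex-specific pitch window (values below are illustrative only).
PITCH_WINDOWS_HZ = {"male": (60.0, 250.0), "female": (100.0, 400.0)}

def estimate_f0(signal, fs, sex):
    f_min, f_max = PITCH_WINDOWS_HZ[sex]
    lag_min, lag_max = int(fs / f_max), int(fs / f_min)
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    lag = lag_min + np.argmax(ac[lag_min:lag_max])      # strongest peak inside the preset window
    return fs / lag

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
voice = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)  # synthetic 120 Hz voice
print("estimated F0:", estimate_f0(voice, fs, "male"), "Hz")
```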
Behaviour of Lyapunov exponents near crisis points in the dissipative standard map
NASA Astrophysics Data System (ADS)
Pompe, B.; Leven, R. W.
1988-11-01
We numerically study the behaviour of the largest Lyapunov characteristic exponent λ1 in dependence on a control parameter in the 2D standard map with dissipation. In order to investigate the system's motion in parameter intervals slightly above crisis points we introduce "partial" Lyapunov exponents which characterize the average exponential divergence of nearby orbits on a semi-attractor at a boundary crisis and on distinct parts of a "large" chaotic attractor near an interior crisis. In the former case we find no significant difference between λ1 in the pre-crisis regime and the partial Lyapunov exponent describing transient chaotic motions slightly above the crisis. For the latter case we give a quantitative description of the drastic increase of λ1. Moreover, a formula which connects the critical exponent of a chaotic transient above a boundary crisis with a pointwise dimension is derived.
An analysis of Indonesia’s information security index: a case study in a public university
NASA Astrophysics Data System (ADS)
Yustanti, W.; Qoiriah, A.; Bisma, R.; Prihanto, A.
2018-01-01
Ministry of Communication and Informatics of the Republic of Indonesia has issued regulation number 4-2016 on the Information Security Management System (ISMS) for all kinds of organizations. A public university, as a government institution, must apply this standard to ensure that its level of information security complies with ISO 27001:2013. This research is a preliminary study to evaluate whether the readiness of university IT services (a case study in a public university) meets the requirements of ISO 27001:2013, using the Indonesia Information Security Index (IISI). There are six parameters used to measure the level of information security: the ICT role, governance, risk management, framework, asset management and technology. Each parameter consists of a series of questions which must be answered and converted to a numeric value. The result shows the level of readiness and maturity to apply the ISO 27001 standard.
NASA Astrophysics Data System (ADS)
Goeritno, Arief; Rasiman, Syofyan
2017-06-01
Performance examination of the bulk oil circuit breaker (BOCB), as influenced by its parameters, at the Substation of Bogor Baru (the State Electricity Company = PLN) has been done. It is found that (1) the dielectric strength of the oil still qualifies it as an insulating and cooling medium, because the average value of the measurement results is still above the minimum value allowed, where the minimum limit is 80 kV/2.5 cm or 32 kV/cm; (2) the simultaneity of the CB's contacts is still acceptable, so that the BOCB can still be operated, because the difference in time between the highest and lowest values when the BOCB's contacts are opened/closed is less than (Δt <) 10 milliseconds (if meeting the PLN standards as recommended by Alsthom); and (3) the resistance parameters meet the standards, where (i) the insulation resistance has a value far above the allowed threshold, the minimum standards being above 2,000 MΩ (if meeting the ANSI standards) or at the value of 2,000 MΩ (if meeting PLN standards), and (ii) the contact resistance is well within the allowed threshold, the maximum standards being 350 µΩ (if meeting ANSI standards) or 200 µΩ (if meeting PLN standards). The grounding resistance is equal to the maximum limit specified, the maximum standard being 0.5 Ω (if meeting the PLN standard).
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
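As a hedged toy version of the idea (not the authors' estimator), the sketch below applies an adaptive-lasso-style penalty, weighted by the inverse of the unpenalized estimate, to a mean parameter constrained to be non-negative, so that estimates near the boundary are shrunk exactly onto it.

```python
import numpy as np

# Hedged toy sketch: a mean parameter theta constrained to theta >= 0, with an
# adaptive-lasso penalty whose weight is the inverse of the unpenalized
# estimate, so estimates near the boundary collapse exactly onto it.
def adaptive_lasso_mean(y, lam=1.0):
    n = len(y)
    theta_unpen = max(np.mean(y), 0.0)          # constrained MLE
    if theta_unpen == 0.0:
        return 0.0
    weight = 1.0 / theta_unpen                  # adaptive weight: large when the MLE is near 0
    # minimizer of 0.5*n*(theta - ybar)^2 + lam*weight*theta over theta >= 0
    return max(np.mean(y) - lam * weight / n, 0.0)

rng = np.random.default_rng(2)
on_boundary = rng.normal(0.0, 1.0, size=100)    # true theta = 0 (boundary case)
in_interior = rng.normal(2.0, 1.0, size=100)    # true theta = 2 (interior case)
print("boundary case :", adaptive_lasso_mean(on_boundary))
print("interior case :", adaptive_lasso_mean(in_interior))
```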
NASA Technical Reports Server (NTRS)
Avila, Arturo
2011-01-01
Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature. Thus, these simulations would represent the upper and lower bounds. This, effectively, represents the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of the M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2005-01-01
The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for…
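A hedged sketch of a robust standardized effect size of the kind described, with 20% trimmed means in the numerator and a 20% Winsorized standard deviation in the denominator; the 0.642 rescaling constant and the pooling choice are my reading of this literature and should be checked against the article itself.

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

# Hedged sketch of a robust effect size: 20% trimmed mean difference divided by
# a pooled 20% Winsorized SD, rescaled so the measure roughly matches Cohen's d
# under normality (the 0.642 constant is an assumption to verify).
def robust_effect_size(x, y, trim=0.2, scale_const=0.642):
    tm_diff = stats.trim_mean(x, trim) - stats.trim_mean(y, trim)
    pooled = np.concatenate([winsorize(x, limits=(trim, trim)),
                             winsorize(y, limits=(trim, trim))])
    s_w = np.std(pooled, ddof=1)      # pooled Winsorized SD (one simple choice)
    return scale_const * tm_diff / s_w

rng = np.random.default_rng(3)
group1 = rng.normal(0.5, 1.0, size=60)
group2 = rng.normal(0.0, 1.0, size=60)
print("robust effect size:", robust_effect_size(group1, group2))
```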
Top ten models constrained by b → sγ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewett, J.L.
1994-12-01
The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models by the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.
Automatic energy calibration algorithm for an RBS setup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala
2013-05-06
This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample consists of a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, data from two years have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
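A hedged Python sketch of the procedure described (not the authors' code): locate trailing edges from the first derivative of a smoothed spectrum and fit a first-order channel-to-energy polynomial; the synthetic spectrum, heuristic edge selection and edge energies are invented for illustration.

```python
import numpy as np

# Hedged sketch: locate the trailing (high-energy) edges of three elemental
# signals from the first derivative of a smoothed spectrum, then fit channel
# vs. known edge energy with a first-order polynomial (gain and offset).
def calibrate(spectrum, edge_energies_keV, smooth=5):
    counts = np.convolve(spectrum, np.ones(smooth) / smooth, mode="same")
    deriv = np.gradient(counts)
    # trailing edges show up as the strongest negative derivatives; take the
    # most negative, well-separated channels (illustrative heuristic)
    candidates = np.argsort(deriv)          # most negative first
    edges = []
    for ch in candidates:
        if all(abs(ch - e) > 20 for e in edges):
            edges.append(ch)
        if len(edges) == len(edge_energies_keV):
            break
    edges = np.sort(edges)
    gain, offset = np.polyfit(edges, np.sort(edge_energies_keV), 1)
    return gain, offset, edges

# synthetic spectrum with three step-like edges at channels 150, 300 and 450
channels = np.arange(512)
spectrum = (channels < 150) * 200.0 + (channels < 300) * 150.0 + (channels < 450) * 300.0
spectrum += np.random.default_rng(4).poisson(20, size=512)
print(calibrate(spectrum, edge_energies_keV=[600.0, 1200.0, 1800.0]))
```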
Robust blood-glucose control using Mathematica.
Kovács, Levente; Paláncz, Béla; Benyó, Balázs; Török, László; Benyó, Zoltán
2006-01-01
A robust control design in the frequency domain using Mathematica is presented for the regulation of glucose level in type I diabetes patients under intensive care. The method originally proposed under Mathematica by Helton and Merino, now with an improved disturbance rejection constraint inequality, is employed, using a three-state minimal patient model. The robustness of the resulting high-order linear controller is demonstrated by nonlinear closed-loop simulation in state space, in the case of standard meal disturbances, and is compared with an H-infinity design implemented with the mu-toolbox of Matlab. The controller, designed with the model parameters representing the most favorable plant dynamics from the point of view of control purposes, can operate properly even with parameter values of the worst-case scenario.
Büttner, Kathrin; Krieter, Joachim
2018-08-01
The analysis of trade networks, as well as of the spread of diseases within these systems, focuses mainly on pure animal movements between farms. Additional data included as edge weights can complement the informational content of the network analysis; however, the inclusion of edge weights can also alter the outcome of the analysis. Thus, the aim of the study was to compare unweighted and weighted network analyses of a pork supply chain in Northern Germany and to evaluate the impact on the centrality parameters. Five different weighted network versions were constructed by adding the following edge weights: number of trade contacts, number of delivered livestock, average number of delivered livestock per trade contact, geographical distance and reciprocal geographical distance. Additionally, two different edge weight standardizations were used. The network observed from 2013 to 2014 contained 678 farms which were connected by 1,018 edges. General network characteristics including the shortest path structure (e.g. identical shortest paths, shortest path lengths) as well as centrality parameters for each network version were calculated. Furthermore, targeted and random removal of farms was performed in order to evaluate the structural changes in the networks. All network versions and edge weight standardizations revealed the same number of shortest paths (1,935). Between 94.4% and 98.9% of the shortest paths were identical between the unweighted network and the weighted network versions. Furthermore, depending on the calculated centrality parameters and the edge weight standardization used, it could be shown that the weighted network versions differed from the unweighted network (e.g. for the centrality parameters based on ingoing trade contacts) or did not differ (e.g. for the centrality parameters based on the outgoing trade contacts) with regard to the Spearman rank correlation and the targeted removal of farms. The choice of standardization method as well as the inclusion or exclusion of specific farm types (e.g. abattoirs) can alter the results significantly. These facts have to be considered when centrality parameters are to be used for the implementation of prevention and control strategies in the case of an epidemic. Copyright © 2018 Elsevier B.V. All rights reserved.
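The toy Python example below (networkx, invented trade data) shows how the same centrality parameter can differ between an unweighted network and a weighted version in which the number of delivered livestock informs the edge weights; the reciprocal-volume distance is one possible convention, not necessarily the study's.

```python
import networkx as nx

# Hedged sketch on a toy trade network: compare a centrality parameter on the
# unweighted network with the same parameter on a weighted version.
edges = [("farm_A", "farm_B", 120), ("farm_A", "farm_C", 10),
         ("farm_B", "farm_D", 300), ("farm_C", "farm_D", 5),
         ("farm_D", "abattoir", 400)]

G = nx.DiGraph()
for src, dst, animals in edges:
    # for betweenness a "distance" weight is needed; strong trade links should
    # act as short distances, hence the reciprocal of the traded volume
    G.add_edge(src, dst, animals=animals, distance=1.0 / animals)

unweighted = nx.betweenness_centrality(G)                      # trade contacts only
weighted = nx.betweenness_centrality(G, weight="distance")     # volume-informed

for farm in G.nodes:
    print(f"{farm:10s} unweighted: {unweighted[farm]:.3f}  weighted: {weighted[farm]:.3f}")
```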
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
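As a hedged illustration of the three-point idea, not necessarily either the conventional method or the new NASA Glenn method described above, the sketch below uses the common PERT convention to turn minimum, most likely and maximum values into beta shape parameters and read off the implied mean and standard deviation.

```python
import numpy as np
from scipy import stats

# Hedged sketch (PERT convention): build a beta distribution for a design
# parameter from its minimum, most likely and maximum values.
def beta_from_three_points(a, m, b):
    # classic PERT shape parameters, which assume mean = (a + 4m + b) / 6
    alpha = 1.0 + 4.0 * (m - a) / (b - a)
    beta = 1.0 + 4.0 * (b - m) / (b - a)
    return stats.beta(alpha, beta, loc=a, scale=b - a)

dist = beta_from_three_points(a=100.0, m=120.0, b=160.0)   # e.g. a load in N, values invented
print("mean:", dist.mean(), "standard deviation:", dist.std())
print("99th percentile:", dist.ppf(0.99))
```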
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capabilities of breaking this degeneracy by weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
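A minimal Python sketch of the core Gaussianization step, assuming a one-dimensional skewed posterior sample: Box-Cox transform, Gaussian summary in the transformed variable, and mapping an interval back to the original parameter; this illustrates the principle only and is not the authors' pipeline.

```python
import numpy as np
from scipy import stats

# Hedged sketch: Gaussianize a skewed posterior sample with a Box-Cox
# transformation, summarize it as a Gaussian in the transformed variable, and
# map a predicted interval back to the original parameter.
rng = np.random.default_rng(5)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=20000)   # skewed stand-in posterior

transformed, lam = stats.boxcox(samples)      # lambda chosen by maximum likelihood
mu, sigma = transformed.mean(), transformed.std()

def inverse_boxcox(y, lam):
    return np.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

# 68% interval predicted in the Gaussianized space, mapped back
lo, hi = inverse_boxcox(mu - sigma, lam), inverse_boxcox(mu + sigma, lam)
print(f"lambda = {lam:.3f}, 68% interval on the original parameter: [{lo:.3f}, {hi:.3f}]")
```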
Metrological reliability of optical coherence tomography in biomedical applications
NASA Astrophysics Data System (ADS)
Goloni, C. M.; Temporão, G. P.; Monteiro, E. C.
2013-09-01
Optical coherence tomography (OCT) has proved to be an efficient diagnostic technique for imaging in vivo tissues, an optical biopsy with important perspectives as a diagnostic tool for quantitative characterization of tissue structures. Despite its established clinical use, there is no international standard addressing the specific requirements for basic safety and essential performance of OCT devices for biomedical imaging. The present work studies the parameters necessary for conformity assessment of optoelectronic equipment used in biomedical applications such as lasers, Intense Pulsed Light (IPL) and OCT, with the aim of identifying the potential requirements to be considered in the case of a future development of a particular standard for OCT equipment. In addition to some of the particular requirements of the standards for laser and IPL equipment, which are also applicable to metrological reliability analysis of OCT equipment, specific parameters for OCT evaluation have been identified, considering its biomedical application. For each parameter identified, its inclusion in the accompanying documents and/or its measurement has been recommended. Among the parameters for which the measurement requirement was recommended, including uncertainty evaluation, the following are highlighted: optical radiation output, axial and transverse resolution, pulse duration and interval, and beam divergence.
Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.G.
2011-07-01
A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It will be proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past is valid in the case of a singular covariance matrix of prior parameters. Furthermore, it provides a unique solution. Statements in the literature to the effect that the problem is ill-posed are wrong. No regularization of the problem is required. This has been proved in the present paper by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. No modification is needed to the adjustment formulae that have been used in the past. (author)
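The short numerical illustration below, with invented matrices, makes the point concrete: the usual adjustment update can be written so that only (G C G^T + V) is inverted, never the prior parameter covariance C itself, so a singular C causes no difficulty.

```python
import numpy as np

# Hedged numerical illustration (Kalman/GLS-type update, matrices invented):
# C is the prior parameter covariance (here rank-deficient), V the data
# covariance, G the sensitivity matrix of responses to parameters.
C = np.array([[1.0, 1.0],        # singular prior covariance (rank 1)
              [1.0, 1.0]])
V = np.diag([0.1, 0.2, 0.1])     # data covariance
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x0 = np.array([2.0, 3.0])        # prior parameter values
y = np.array([2.3, 3.4, 5.5])    # measured responses

K = C @ G.T @ np.linalg.inv(G @ C @ G.T + V)   # only (G C G^T + V) is inverted
x_adj = x0 + K @ (y - G @ x0)
C_adj = C - K @ G @ C                          # adjusted parameter covariance
print("adjusted parameters:", x_adj)
print("adjusted covariance:\n", C_adj)
```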
NASA Astrophysics Data System (ADS)
Stare, E.; Beges, G.; Drnovsek, J.
2006-07-01
This paper presents the results of research into the measurement of the resistance of solid isolating materials to tracking. Two types of tracking were investigated: the proof tracking index (PTI) and the comparative tracking index (CTI). Evaluation of the measurement uncertainty in a case study was performed using a test method in accordance with the IEC 60112 standard. In the scope of the tests performed here, this particular test method was used to ensure the safety of electrical appliances. According to the EN ISO/IEC 17025 standard (EN ISO/IEC 17025), in the process of conformity assessment, the evaluation of the measurement uncertainty of the test method should be carried out. In the present article, possible influential parameters that are in accordance with the third and fourth editions of the standard IEC 60112 are discussed. The differences, ambiguities or lack of guidance referring to both editions of the standard are described in the article 'Ambiguities in technical standards—case study IEC 60112—measuring the resistance of solid isolating materials to tracking' (submitted for publication). Several hundred measurements were taken in the present experiments in order to form the basis for the results and conclusions presented. A specific problem of the test (according to the IEC 60112 standard) is the great variety of influential physical parameters (mechanical, electrical, chemical, etc) that can affect the results. At the end of the present article therefore, there is a histogram containing information on the contributions to the measurement uncertainty.
Astrophysical neutrinos flavored with beyond the Standard Model physics
NASA Astrophysics Data System (ADS)
Rasmussen, Rasmus W.; Lechner, Lukas; Ackermann, Markus; Kowalski, Marek; Winter, Walter
2017-10-01
We systematically study the allowed parameter space for the flavor composition of astrophysical neutrinos measured at Earth, including beyond the Standard Model theories at production, during propagation, and at detection. One motivation is to illustrate the discrimination power of the next-generation neutrino telescopes such as IceCube-Gen2. We identify several examples that lead to potential deviations from the standard neutrino mixing expectation such as significant sterile neutrino production at the source, effective operators modifying the neutrino propagation at high energies, dark matter interactions in neutrino propagation, or nonstandard interactions in Earth matter. IceCube-Gen2 can exclude about 90% of the allowed parameter space in these cases, and hence will allow us to efficiently test and discriminate between models. More detailed information can be obtained from additional observables such as the energy dependence of the effect, fraction of electron antineutrinos at the Glashow resonance, or number of tau neutrino events.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
Testing the statistical compatibility of independent data sets
NASA Astrophysics Data System (ADS)
Maltoni, M.; Schwetz, T.
2003-08-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
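A hedged restatement of the construction in standard notation (to be checked against the paper): the test statistic is the difference between the global chi-square minimum and the sum of the individual minima.

```latex
% With D statistically independent data sets r = 1, ..., D:
\bar{\chi}^2 \;\equiv\; \chi^2_{\mathrm{min,glob}} \;-\; \sum_{r=1}^{D} \chi^2_{\mathrm{min},r},
% where \chi^2_{min,glob} is the minimum of the combined chi-square and
% \chi^2_{min,r} the minimum of data set r alone. \bar{\chi}^2 is then compared
% to a chi-square distribution with
\bar{P} \;=\; \sum_{r=1}^{D} P_r \;-\; P
% degrees of freedom, P_r being the number of parameters on which data set r
% depends and P the total number of parameters fitted jointly.
```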
Cooke, Brian K; Worsham, Elizabeth; Reisfield, Gary M
2017-09-01
In medical negligence cases, the forensic expert must explain to a trier of fact what a defendant physician should have done, or not done, in a specific set of circumstances and whether the physician's conduct constitutes a breach of duty. The parameters of the duty are delineated by the standard of care. Many facets of the standard of care have been well explored in the literature, but gaps remain in a complete understanding of this concept. We examine the standard of care, its origins, and who determines the prevailing standard, beginning with an overview of the historical roots of the standard of care and, using case law, tracing its evolution from the 19th century through the early 21st century. We then analyze the locality rule and consider local, state, and national standards of care. The locality rule requires a defendant physician to provide the same degree of skill and care that is required of a physician practicing in the same or similar community. This rule remains alive in some jurisdictions in the United States. Last, we address the relationship between the standard of care and clinical practice guidelines. © 2017 American Academy of Psychiatry and the Law.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
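A hedged pseudocode-style Python sketch of the basic strategy (not the Snake implementation): expand grid cells in order of decreasing likelihood from the peak, and stop expanding once cells fall below a threshold relative to the maximum.

```python
import heapq
import numpy as np

# Hedged sketch: map the likelihood grid cell by grid cell in order of
# decreasing likelihood, starting near the peak, and do not expand cells whose
# likelihood has dropped below a chosen threshold relative to the maximum.
def explore(loglike, start, threshold_dex=6.0):
    best = loglike(start)
    visited, frontier = {}, [(-best, start)]
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)      # highest-likelihood cell first
        if cell in visited:
            continue
        visited[cell] = -neg_ll
        if -neg_ll < best - threshold_dex * np.log(10):   # negligible likelihood: do not expand
            continue
        for dim in range(len(cell)):
            for step in (-1, 1):
                nbr = list(cell)
                nbr[dim] += step
                nbr = tuple(nbr)
                if nbr not in visited:
                    heapq.heappush(frontier, (-loglike(nbr), nbr))
    return visited   # cell -> log-likelihood, usable for marginals and conditionals

# toy 2D Gaussian log-likelihood on a grid with 0.1-wide cells
ll = lambda c: -0.5 * ((0.1 * c[0]) ** 2 + (0.1 * c[1] / 2.0) ** 2)
cells = explore(ll, start=(0, 0))
print("cells mapped:", len(cells))
```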
Spinors fields in co-dimension one braneworlds
NASA Astrophysics Data System (ADS)
Mendes, W. M.; Alencar, G.; Landim, R. R.
2018-02-01
In this work we analyze the zero mode localization and resonances of 1/2-spin fermions in co-dimension one Randall-Sundrum braneworld scenarios. We consider delta-like, domain wall and deformed domain wall membranes. Beyond the influence of the spacetime dimension D we also consider three types of couplings: (i) the standard Yukawa coupling with the scalar field and parameter η1, (ii) a Yukawa-dilaton coupling with two parameters η2 and λ and (iii) a dilaton derivative coupling with parameter h. Together with the deformation parameter s, we end up with five free parameters to be considered. For the zero mode we find that the localization depends on D, because the spinorial representation changes when the bulk dimensionality is odd or even, and the two cases must be treated separately. For case (i) we find that in odd dimensions only one chirality can be localized and for even dimensions a massless Dirac spinor is trapped over the brane. In cases (ii) and (iii) we find that, for some values of the parameters, both chiralities can be localized in odd dimensions, and for even dimensions we obtain that the massless Dirac spinor is trapped over the brane. We also calculated resonances numerically for cases (ii) and (iii) by using the transfer matrix method. We find that, for deformed defects, increasing D induces a shift in the peaks of the resonances. For a given λ with domain walls, we find that resonances can appear when the spacetime dimensionality changes. For example, the same case in D = 5 does not induce resonances, but when we consider D = 10 one peak of resonance is found. Therefore the introduction of more dimensions, unlike the bosonic case, can change drastically the zero mode and resonances of fermion fields.
Szeleszczuk, Łukasz; Pisklak, Dariusz Maciej; Zielińska-Pisklak, Monika
2018-05-30
Glycine is a common amino acid with relatively complex chemistry in the solid state. Although several polymorphs (α, β, δ, γ, ε) of crystalline glycine are known, for NMR spectroscopy the most important is the α polymorph, which is used as a standard for calibration of spectrometer performance and is therefore intensively studied by both experimental methods and theoretical computation. The great scientific interest in α glycine results in a large number of crystallographic information files (CIFs) deposited in the Cambridge Structural Database (CSD). The aim of this study was to evaluate the influence of the chosen crystal structure of α glycine, obtained under different crystallographic experimental conditions (temperature, pressure and source of radiation), on the results of periodic DFT calculations. For this purpose a total of 136 GIPAW calculations of α glycine NMR parameters were performed, preceded by four approaches ("SP", "only H", "full", "full+cell") to structure preparation. The analysis of the results of these computations, performed on a representative group of 34 structures obtained at various experimental conditions, revealed that though the structures were generally characterized by good accuracy (R < 0.05 for most of them), the results of the periodic DFT calculations performed using the unoptimized structures differed significantly. The values of the standard deviations of the studied NMR parameters were in most cases decreasing with the number of optimized parameters. The most accurate results of the calculations were in most cases obtained using the structures with solely the hydrogen atom positions optimized. © 2018 Wiley Periodicals, Inc.
Persson, A; Brismar, T B; Lundström, C; Dahlström, N; Othberg, F; Smedby, O
2006-03-01
To compare three methods for standardizing volume rendering technique (VRT) protocols by studying aortic diameter measurements in magnetic resonance angiography (MRA) datasets. Datasets from 20 patients previously examined with gadolinium-enhanced MRA and with digital subtraction angiography (DSA) for abdominal aortic aneurysm were retrospectively evaluated by three independent readers. The MRA datasets were viewed using VRT with three different standardized transfer functions: the percentile method (Pc-VRT), the maximum-likelihood method (ML-VRT), and the partial range histogram method (PRH-VRT). The aortic diameters obtained with these three methods were compared with freely chosen VRT parameters (F-VRT) and with maximum intensity projection (MIP) concerning inter-reader variability and agreement with the reference method DSA. F-VRT parameters and PRH-VRT gave significantly higher diameter values than DSA, whereas Pc-VRT gave significantly lower values than DSA. The highest interobserver variability was found for F-VRT parameters and MIP, and the lowest for Pc-VRT and PRH-VRT. All standardized VRT methods were significantly superior to both MIP and F-VRT in this respect. The agreement with DSA was best for PRH-VRT, which was the only method with a mean error below 1 mm and which also had the narrowest limits of agreement (95% of cases between 2.1 mm below and 3.1 mm above DSA). All the standardized VRT methods compare favorably with MIP and VRT with freely selected parameters as regards interobserver variability. The partial range histogram method, although systematically overestimating vessel diameters, gives results closest to those of DSA.
NASA Astrophysics Data System (ADS)
Landry, Brian R.; Subotnik, Joseph E.
2011-11-01
We evaluate the accuracy of Tully's surface hopping algorithm for the spin-boson model for the case of a small diabatic coupling parameter (V). We calculate the transition rates between diabatic surfaces, and we compare our results to the expected Marcus rates. We show that standard surface hopping yields an incorrect scaling with diabatic coupling (linear in V), which we demonstrate is due to an incorrect treatment of decoherence. By modifying standard surface hopping to include decoherence events, we recover the correct scaling (∼V²).
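For orientation, the nonadiabatic (golden-rule) Marcus rate against which the surface-hopping results are compared scales quadratically in the diabatic coupling V; a standard form, not quoted in the abstract, is

\[
k_{\mathrm{Marcus}} = \frac{2\pi}{\hbar}\,|V|^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right],
\]

where λ is the reorganization energy and ΔG° the driving force, so that k ∝ V², the scaling recovered once decoherence events are included.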
Apparent cosmic acceleration from Type Ia supernovae
NASA Astrophysics Data System (ADS)
Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.
2017-11-01
Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter, q_{0}=-0.043^{+0.004}_{-0.000}. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.
Pineda, F D; Medved, M; Fan, X; Ivancevic, M K; Abe, H; Shimauchi, A; Newstead, G M
2015-01-01
Objective: To compare dynamic contrast-enhanced (DCE) MRI parameters from scans of breast lesions at 1.5 and 3.0 T. Methods: 11 patients underwent paired MRI examinations on both Philips 1.5 and 3.0 T systems (Best, Netherlands) using a standard clinical fat-suppressed, T1 weighted DCE-MRI protocol with 70–76 s temporal resolution. Signal intensity vs time curves were fit with an empirical mathematical model to obtain semi-quantitative measures of uptake and washout rates as well as time-to-peak enhancement (TTP). Maximum percent enhancement and signal enhancement ratio (SER) were also measured for each lesion. Percent differences between parameters measured at the two field strengths were compared. Results: TTP and SER parameters measured at 1.5 and 3.0 T were similar, with mean absolute differences of 19% and 22%, respectively. Maximum percent signal enhancement was significantly higher at 3 T than at 1.5 T (p = 0.006). Qualitative assessment showed that image quality was significantly higher at 3 T (p = 0.005). Conclusion: Our results suggest that TTP and SER are more robust to field strength change than the other measured kinetic parameters, and therefore measurements of these parameters can be more easily standardized than measurements of other parameters derived from DCE-MRI. Semi-quantitative measures of overall kinetic curve shape showed higher reproducibility than discrete classification of kinetic curve early and delayed phases in a majority of the cases studied. Advances in knowledge: Qualitative measures of curve shape are not consistent across field strength even when acquisition parameters are standardized. Quantitative measures of overall kinetic curve shape, by contrast, have higher reproducibility. PMID:25785918
Besley, Aiken; Vijver, Martina G; Behrens, Paul; Bosker, Thijs
2017-01-15
Microplastics are ubiquitous in the environment, are frequently ingested by organisms, and may potentially cause harm. A range of studies have found significant levels of microplastics in beach sand. However, there is a considerable amount of methodological variability among these studies. Methodological variation currently limits comparisons as there is no standard procedure for sampling or extraction of microplastics. We identify key sampling and extraction procedures across the literature through a detailed review. We find that sampling depth, sampling location, number of repeat extractions, and settling times are the critical parameters of variation. Next, using a case-study we determine whether and to what extent these differences impact study outcomes. By investigating the common practices identified in the literature with the case-study, we provide a standard operating procedure for sampling and extracting microplastics from beach sand. Copyright © 2016 Elsevier Ltd. All rights reserved.
Global transport in a nonautonomous periodic standard map
Calleja, Renato C.; del-Castillo-Negrete, D.; Martinez-del-Rio, D.; ...
2017-04-14
A non-autonomous version of the standard map with a periodic variation of the perturbation parameter is introduced and studied via an autonomous map obtained from the iteration of the nonautonomous map over a period. Symmetry properties in the variables and parameters of the map are found and used to find relations between rotation numbers of invariant sets. The role of the nonautonomous dynamics on period-one orbits, stability and bifurcation is studied. The critical boundaries for the global transport and for the destruction of invariant circles with fixed rotation number are studied in detail using direct computation and a continuation method. In the case of global transport, the critical boundary has a particular symmetrical horn shape. Here, the results are contrasted with similar calculations found in the literature.
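A minimal sketch of the construction described above: a Chirikov-type standard map whose perturbation parameter varies periodically with the iteration number, composed over one period to obtain the associated autonomous map. The period-2 modulation and parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def nonautonomous_standard_map(x, y, k_sequence):
    """Iterate the standard map once per entry of k_sequence.

    One pass over k_sequence is one period of the nonautonomous map,
    i.e. one step of the associated autonomous (composed) map.
    """
    for k in k_sequence:
        y = y + k * np.sin(x)          # kicked momentum
        x = (x + y) % (2.0 * np.pi)    # advance angle, keep on the torus
    return x, y

# Illustrative periodic modulation of the perturbation parameter (assumed values).
k_sequence = [0.6, 1.2]                # period-2 variation

# Follow one orbit of the autonomous (period-composed) map.
x, y = 0.5, 0.3
orbit = []
for _ in range(1000):
    x, y = nonautonomous_standard_map(x, y, k_sequence)
    orbit.append((x, y))
```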
Kumar, Dinesh; Rai, K N
2017-07-01
In this paper, we investigated the thermal behavior in living biological tissues using a time-fractional dual-phase-lag bioheat transfer (DPLBHT) model subjected to a Dirichlet boundary condition in the presence of metabolic and electromagnetic heat sources during thermal therapy. We solved this bioheat transfer model using the finite element Legendre wavelet Galerkin method (FELWGM) with the help of block pulse functions in the sense of the Caputo fractional-order derivative. We compared the results obtained from FELWGM with the exact method in a specific case and found high accuracy. Results are interpreted for the standard and anomalous cases by taking different orders of the time-fractional DPLBHT model. The time to achieve the hyperthermia position is discussed in both the standard and fractional-order cases. The success of thermal therapy in the treatment of metastatic cancerous cells depends on the time-fractional order derivative for precise prediction and control of temperature. The effect of variability of parameters such as the time-fractional derivative, lagging times, blood perfusion coefficient, metabolic heat source and transmitted power on the dimensionless temperature distribution in skin tissue is discussed in detail. The physiological parameters have been estimated, corresponding to the value of the fractional-order derivative, for hyperthermia treatment therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Non-standard neutrino interactions at DUNE
de Gouvea, Andre; Kelly, Kevin J.
2016-03-15
Here, we explore the effects of non-standard neutrino interactions (NSI) and how they modify neutrino propagation in the Deep Underground Neutrino Experiment (DUNE). We find that NSI can significantly modify the data to be collected by the DUNE experiment as long as the new physics parameters are large enough. For example, if the DUNE data are consistent with the standard three-massive-neutrinos paradigm, order 0.1 (in units of the Fermi constant) NSI effects will be ruled out. On the other hand, if large NSI effects are present, DUNE will be able to not only rule out the standard paradigm but also measure the new physics parameters, sometimes with good precision. We find that, in some cases, DUNE is sensitive to new sources of CP-invariance violation. We also explored whether DUNE data can be used to distinguish different types of new physics beyond nonzero neutrino masses. In more detail, we asked whether NSI can be mimicked, as far as the DUNE setup is concerned, by the hypothesis that there is a new light neutrino state.
Parameter estimation in Cox models with missing failure indicators and the OPPERA study.
Brownstein, Naomi C; Cai, Jianwen; Slade, Gary D; Bair, Eric
2015-12-30
In a prospective cohort study, examining all participants for incidence of the condition of interest may be prohibitively expensive. For example, the "gold standard" for diagnosing temporomandibular disorder (TMD) is a physical examination by a trained clinician. In large studies, examining all participants in this manner is infeasible. Instead, it is common to use questionnaires to screen for incidence of TMD and perform the "gold standard" examination only on participants who screen positively. Unfortunately, some participants may leave the study before receiving the "gold standard" examination. Within the framework of survival analysis, this results in missing failure indicators. Motivated by the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, a large cohort study of TMD, we propose a method for parameter estimation in survival models with missing failure indicators. We estimate the probability of being an incident case for those lacking a "gold standard" examination using logistic regression. These estimated probabilities are used to generate multiple imputations of case status for each missing examination that are combined with observed data in appropriate regression models. The variance introduced by the procedure is estimated using multiple imputation. The method can be used to estimate both regression coefficients in Cox proportional hazard models as well as incidence rates using Poisson regression. We simulate data with missing failure indicators and show that our method performs as well as or better than competing methods. Finally, we apply the proposed method to data from the OPPERA study. Copyright © 2015 John Wiley & Sons, Ltd.
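A schematic of the imputation strategy described above: logistic regression for the probability of being an incident case among participants lacking the gold-standard examination, Bernoulli imputation of the missing indicators, and pooling of a crude Poisson incidence rate across imputations. The toy data, variable names and the simple rate estimator are illustrative stand-ins, not the OPPERA analysis itself, and only the between-imputation variance component of Rubin's rules is shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: covariates X, follow-up time t, failure indicator d (NaN = missing).
n = 500
X = rng.normal(size=(n, 2))
t = rng.exponential(2.0, size=n)
d = rng.binomial(1, 0.3, size=n).astype(float)
d[rng.random(n) < 0.2] = np.nan            # 20% missing failure indicators

obs = ~np.isnan(d)
clf = LogisticRegression().fit(X[obs], d[obs].astype(int))
p_case = clf.predict_proba(X[~obs])[:, 1]  # P(case | covariates) for the missing

M = 20
rates = []
for _ in range(M):
    d_imp = d.copy()
    d_imp[~obs] = rng.binomial(1, p_case)  # impute missing indicators
    rates.append(d_imp.sum() / t.sum())    # crude Poisson incidence rate

rates = np.array(rates)
pooled_rate = rates.mean()
between_var = rates.var(ddof=1)            # between-imputation variance only;
                                           # a full Rubin combination adds the
                                           # within-imputation variance as well
print(pooled_rate, between_var)
```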
Fermionic extensions of the Standard Model in light of the Higgs couplings
NASA Astrophysics Data System (ADS)
Bizot, Nicolas; Frigerio, Michele
2016-01-01
As the Higgs boson properties settle, the constraints on the Standard Model extensions tighten. We consider all possible new fermions that can couple to the Higgs, inspecting sets of up to four chiral multiplets. We confront them with direct collider searches, electroweak precision tests, and current knowledge of the Higgs couplings. The focus is on scenarios that may depart from the decoupling limit of very large masses and vanishing mixing, as they offer the best prospects for detection. We identify exotic chiral families that may receive a mass from the Higgs only, still in agreement with the hγγ signal strength. A mixing θ between the Standard Model and non-chiral fermions induces order θ² deviations in the Higgs couplings. The mixing can be as large as θ ∼ 0.5 in case of custodial protection of the Z couplings or accidental cancellation in the oblique parameters. We also notice some intriguing effects for much smaller values of θ, especially in the lepton sector. Our survey includes a number of unconventional pairs of vector-like and Majorana fermions coupled through the Higgs, that may induce order one corrections to the Higgs radiative couplings. We single out the regions of parameters where hγγ and hgg are unaffected, while the hγZ signal strength is significantly modified, turning a few times larger than in the Standard Model in two cases. The second run of the LHC will effectively test most of these scenarios.
Nonstandard neutrino interactions in supernovae
NASA Astrophysics Data System (ADS)
Stapleford, Charles J.; Väänänen, Daavid J.; Kneller, James P.; McLaughlin, Gail C.; Shapiro, Brandon T.
2016-11-01
Nonstandard interactions (NSI) of neutrinos with matter can significantly alter neutrino flavor evolution in supernovae, with the potential to impact explosion dynamics, nucleosynthesis, and the neutrino signal. In this paper, we explore, both numerically and analytically, the landscape of neutrino flavor transformation effects in supernovae due to NSI and find that new, heretofore unseen transformation processes can occur. These new transformations can take place with NSI strengths well below current experimental limits. Within a broad swath of NSI parameter space, we observe symmetric and standard matter-neutrino resonances for supernova neutrinos, a transformation effect previously only seen in compact object merger scenarios; in another region of the parameter space we find the NSI can induce neutrino collective effects in scenarios where none would appear with only the standard neutrino oscillation physics; and in a third region the NSI can lead to the disappearance of the high-density Mikheyev-Smirnov-Wolfenstein resonance. Using a variety of analytical tools, we are able to describe the numerical results quantitatively, allowing us to partition the NSI parameter space according to the transformation processes observed. Our results indicate nonstandard interactions of supernova neutrinos provide a sensitive probe of beyond-the-Standard-Model physics complementary to present and future terrestrial experiments.
NASA Astrophysics Data System (ADS)
Gyasi-Agyei, Yeboah
2018-01-01
This paper establishes a link between the spatial structure of radar rainfall, which describes the spatial structure more robustly, and that of gauge rainfall, for improved daily rainfield simulation conditioned on limited gauge data in regions with or without radar records. A two-dimensional anisotropic exponential function with parameters for the major and minor axis lengths and the direction is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and Gumbel copula. While the gauge-derived, radar-derived and copula-derived correlogram parameters reproduced the mean estimates similarly under leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded a higher standard deviation (SD) of the Gaussian quantile, which reflects uncertainty, in over 90% of cases. However, the distributions of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases with higher SD from the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and the calibration periods, respectively. It has been observed that a 1% reduction in the Gaussian quantile SD can cause over a 39% reduction in the SD of the median rainfall estimate, the actual reduction being dependent on the distribution of rainfall on the day. Hence the main advantage of using the most correct radar correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on the SD through kriging.
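A minimal sketch of a two-dimensional anisotropic exponential correlogram with major-axis length, minor-axis length and orientation as parameters, of the general kind described above; the functional form and the example values are illustrative assumptions, not the fitted parameters of the study.

```python
import numpy as np

def anisotropic_exp_correlogram(dx, dy, a_major, a_minor, theta):
    """Correlation between two points separated by (dx, dy).

    a_major, a_minor : correlation lengths along the major/minor axes
    theta            : orientation of the major axis (radians)
    """
    # Rotate the separation vector into the principal-axis frame.
    u = np.cos(theta) * dx + np.sin(theta) * dy
    v = -np.sin(theta) * dx + np.cos(theta) * dy
    # Scaled separation and exponential decay.
    h = np.sqrt((u / a_major) ** 2 + (v / a_minor) ** 2)
    return np.exp(-h)

# Example: gauges 20 km apart east-west, a 50 km major axis oriented 30 degrees
# from east, 15 km minor axis (all assumed values).
print(anisotropic_exp_correlogram(20.0, 0.0, 50.0, 15.0, np.radians(30.0)))
```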
SU-F-BRD-10: Lung IMRT Planning Using Standardized Beam Bouquet Templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, L; Wu, Q J.; Yin, F
2014-06-15
Purpose: We investigate the feasibility of choosing from a small set of standardized templates of beam bouquets (i.e., entire beam configuration settings) for lung IMRT planning to improve planning efficiency and quality consistency, and also to facilitate automated planning. Methods: A set of beam bouquet templates is determined by learning from the beam angle settings in 60 clinical lung IMRT plans. A k-medoids cluster analysis method is used to classify the beam angle configurations into clusters. The value of the average silhouette width is used to determine the ideal number of clusters. The beam arrangements in each medoid of the resulting clusters are taken as the standardized beam bouquet for the cluster, with the corresponding case taken as the reference case. The resulting set of beam bouquet templates was used to re-plan 20 cases randomly selected from the database, and the dosimetric quality of the plans was evaluated against the corresponding clinical plans by a paired t-test. The template for each test case was manually selected by a planner based on the match between the test and reference cases. Results: The dosimetric parameters (mean±S.D. in percentage of prescription dose) of the plans using 6 beam bouquet templates and those of the clinical plans, respectively, and the p-values (in parentheses) are: lung Dmean: 18.8±7.0, 19.2±7.0 (0.28); esophagus Dmean: 32.0±16.3, 34.4±17.9 (0.01); heart Dmean: 19.2±16.5, 19.4±16.6 (0.74); spinal cord D2%: 47.7±18.8, 52.0±20.3 (0.01); PTV dose homogeneity (D2%-D99%): 17.1±15.4, 20.7±12.2 (0.03). The esophagus Dmean, spinal cord D2% and PTV dose homogeneity are statistically better in the plans using the standardized templates, but the improvements (<5%) may not be clinically significant. The other dosimetric parameters are not statistically different. Conclusion: It is feasible to use a small number of standardized beam bouquet templates (e.g. 6) to generate plans with quality comparable to that of clinical plans. Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant by Varian Medical System.
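A compact sketch of the model-selection step described above: cluster beam-angle configurations with a simple k-medoids routine and pick the number of clusters by the average silhouette width. The random "plans", the naive PAM-style update and the Euclidean distance on sorted angle lists are illustrative stand-ins, not the clinical data or the exact algorithm used in the study.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def kmedoids(D, k, n_iter=100, seed=0):
    """Very small PAM-style k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                # choose the member minimising total distance within the cluster
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

# Illustrative "plans": each row is a sorted list of five beam angles (degrees).
rng = np.random.default_rng(1)
plans = np.sort(rng.uniform(0.0, 360.0, size=(60, 5)), axis=1)
D = np.linalg.norm(plans[:, None, :] - plans[None, :, :], axis=-1)  # naive metric

# Pick the number of clusters by the average silhouette width.
scores = {}
for k in range(2, 9):
    _, labels = kmedoids(D, k)
    scores[k] = silhouette_score(D, labels, metric="precomputed")
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```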
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brezov, D. S.; Mladenova, C. D.; Mladenov, I. M., E-mail: mladenov@bio21.bas.bg
In this paper we obtain the Lie derivatives of the scalar parameters in the generalized Euler decomposition with respect to arbitrary axes under left and right deck transformations. This problem can be directly related to the representation of the angular momentum in quantum mechanics. As a particular example, we calculate the angular momentum and the corresponding quantum Hamiltonian in the standard Euler and Bryan representations. Similarly, in the hyperbolic case, the Laplace-Beltrami operator is retrieved for the Iwasawa decomposition. The case of two axes is considered as well.
NASA Astrophysics Data System (ADS)
Enell, Carl-Fredrik; Kozlovsky, Alexander; Turunen, Tauno; Ulich, Thomas; Välitalo, Sirkku; Scotto, Carlo; Pezzopane, Michael
2016-03-01
This paper presents a comparison between standard ionospheric parameters manually and automatically scaled from ionograms recorded at the high-latitude Sodankylä Geophysical Observatory (SGO, ionosonde SO166, 64.1° geomagnetic latitude), located in the vicinity of the auroral oval. The study is based on 2610 ionograms recorded during the period June-December 2013. The automatic scaling was made by means of the Autoscala software. A few typical examples are shown to outline the method, and statistics are presented regarding the differences between manually and automatically scaled values of F2, F1, E and sporadic E (Es) layer parameters. We draw the conclusions that: 1. The F2 parameters scaled by Autoscala, foF2 and M(3000)F2, are reliable. 2. F1 is identified by Autoscala in significantly fewer cases (about 50 %) than in the manual routine, but if identified the values of foF1 are reliable. 3. Autoscala frequently (30 % of the cases) detects an E layer when the manual scaling process does not. When identified by both methods, the Autoscala E-layer parameters are close to those manually scaled, foE agreeing to within 0.4 MHz. 4. Es and parameters of Es identified by Autoscala are in many cases different from those of the manual scaling. Scaling of Es at auroral latitudes is often a difficult task.
Species differences in hematological values of captive cranes, geese, raptors, and quail
Gee, G.F.; Carpenter, J.W.; Hensler, G.L.
1981-01-01
Hematological and serum chemical constituents of blood were determined for 12 species, including 7 endangered species, of cranes, geese, raptors, and quail in captivity at the Patuxent Wildlife Research Center. Means, standard deviations, analysis of variance by species and sex, and a series of multiple comparisons of means were derived for each parameter investigated. Differences among some species means were observed in all blood parameters except gamma-glutamyl transpeptidase. Although sampled during the reproductively quiescent period, an influence of sex was noted in red blood cell count, hemoglobin, albumin, glucose, cholesterol, serum glutamic oxaloacetic transaminase, Ca, and P. Our data and values reported in literature indicate that most hematological parameters vary among species and, in some cases, according to methods used to determine them. Therefore, baseline data for captive and wild birds should be established by using standard methods, and should be made available to aid others for use in assessing physiological and pathological conditions of these species.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Karvounis, Sotirios
2012-12-01
Technology transfer may take place in parallel with cooperative action between companies participating in the same organizational scheme or using one another as subcontractor (outsourcing). In this case, cooperation should be realized by means of Standard Methods and Recommended Practices (SRPs) to achieve (i) quality of intermediate/final products according to specifications and (ii) industrial process control as required to guarantee such quality with minimum deviation (corresponding to maximum reliability) from preset mean values of representative quality parameters. This work deals with the design of the network of SRPs needed in each case for successful cooperation, implying also the corresponding technology transfer, effectuated through a methodological framework developed in the form of an algorithmic procedure with 20 activity stages and 8 decision nodes. The functionality of this methodology is demonstrated by presenting the path leading from (and relating) a standard test method for toluene, as petrochemical feedstock in toluene diisocyanate production, to the performance evaluation of industrial process control systems six generations upstream (i.e., from ASTM D5606 to BS EN 61003-1:2004 in the SRPs network).
NASA Astrophysics Data System (ADS)
Bačić, Iva; Malarić, Krešimir; Dumić, Emil
2014-05-01
Mobile users today expect wide range of multimedia services to be available in different mobility scenarios, and among the others is mobile TV service. The Digital Video Broadcasting - Satellite services to Handheld (DVB-SH) is designed to provide mobile TV services, supporting a wide range of mobile multimedia services, like audio and data broadcasting as well as file downloading services. In this paper we present our simulation model for the performance evaluation of the DVB-SH system following the ETSI standard EN 302 583. Simulation model includes complete DVB-SH system, supporting all standardized system modes and parameters. From transmitter to receiver, the information may be sent over different channel models, thus simulating real case scenarios. To the best of authors' knowledge, this is the first complete model of DVB-SH system that includes all standardized system parameters and may be used for examining real DVB-SH communication as well as for educational purposes.
Image-Based 3d Reconstruction and Analysis for Orthodontia
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2012-08-01
Among the main tasks of orthodontia are the analysis of dental arches and treatment planning that provides the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal dental arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets placed on the teeth and a wire of given shape clamped by these brackets to produce the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and difficulties in applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach enables accurate measurement of the tooth parameters needed for adequate planning, design of correct tooth positions, and monitoring of the treatment process. The developed technique applies photogrammetric means for 3D model generation of the dental arch, bracket position determination and tooth movement analysis.
NASA Astrophysics Data System (ADS)
Allanach, B. C.; Athron, P.; Tunstall, Lewis C.; Voigt, A.; Williams, A. G.
2014-09-01
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used. Catalogue identifier: ADPM_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPM_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154886 No. of bytes in distributed program, including test data, etc.: 1870890 Distribution format: tar.gz Programming language: C++, Fortran. Computer: Personal computer. Operating system: Tested on Linux 3.x. Word size: 64 bits Classification: 11.1, 11.6. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADPM_v3_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 785 Nature of problem: Calculating the supersymmetric particle spectrum and mixing parameters in the next-to-minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with boundary conditions on supersymmetry breaking parameters, as well as with the weak-scale boundary condition on gauge couplings, Yukawa couplings and the Higgs potential parameters. Solution method: Nested iterative algorithm and numerical minimisation of the Higgs potential. Reasons for new version: Major extension to include the next-to-minimal supersymmetric standard model. Summary of revisions: Added additional supersymmetric and supersymmetry breaking parameters associated with the additional gauge singlet. Electroweak symmetry breaking conditions are significantly changed in the next-to-minimal mode, and some sparticle mixing changes. An interface to NMSSMTools has also been included. Some of the object structure has also changed, and the command line interface has been made more user friendly. Restrictions: SOFTSUSY will provide a solution only in the perturbative regime and it assumes that all couplings of the model are real (i.e. CP-conserving). If the parameter point under investigation is non-physical for some reason (for example because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message. Running time: A few seconds per parameter point.
Ninfali, Paolino; Gennari, Lorenzo; Biagiotti, Enrica; Cangi, Francesca; Mattoli, Luisa; Maidecchi, Anna
2009-01-01
Botanical extracts are standardized to one or more marker compounds (MCs). This standardization provides a certain level of quality control, but not complete quality assurance. Thus, industries are looking for other satisfactory systems to improve standardization. This study focuses on the standardization of herbal medicines by combining two parameters: the concentration of the MC and the antioxidant capacity. Antioxidant capacity was determined with the oxygen radical absorbance capacity (ORAC) method and the concentrations of the MCs by high-performance liquid chromatography. Total phenols were also determined, by the Folin-Ciocalteu method. The ORAC values, expressed as micromol Trolox equivalents/100 g (ORAC %), of 12 commercial herbal extracts were related to the ORAC values of the respective pure MCs at the concentrations at which the MCs occur in the products (ORAC-MC %). The ORAC % values of 11 extracts were higher than those of the respective MCs, and the ratios ORAC-MC %/ORAC % ranged from 0.007 to 0.7, whereas in the case of Olea europaea leaves the same ratio was 1.36. The ORAC parameters and their ratios, as well as the linear relationship between ORAC-MC % and ORAC %, are described and discussed as tools for improving the standardization of herbal products and detecting modifications due to herb processing and storage.
Lim, Geok-Hoon; Allen, John Carson; Ng, Ruey Pyng
2017-08-01
Although oncoplastic breast surgery is used to resect larger tumors with lower re-excision rates compared to standard wide local excision (sWLE), criticisms of oncoplastic surgery include a longer, albeit well concealed, scar, longer operating time and hospital stay, and increased risk of complications. The round block technique has been reported to be very suitable for patients with relatively smaller breasts and minimal ptosis. We aim to determine whether the round block technique results in operative parameters comparable with sWLE. Breast cancer patients who underwent a round block procedure from 1st May 2014 to 31st January 2016 were included in the study. These patients were then matched for the type of axillary procedure, on a one-to-one basis, with breast cancer patients who had undergone sWLE from 1st August 2011 to 31st January 2016. The operative parameters of the 2 groups were compared. 22 patients were included in the study. Patient demographics and histologic parameters were similar in the 2 groups. No complications were reported in either group. The mean operating time was 122 and 114 minutes in the round block and sWLE groups, respectively (P=0.64). Length of stay was similar in the 2 groups (P=0.11). Round block patients had better cosmesis and lower re-excision rates. A higher rate of recurrence was observed in the sWLE group. The round block technique has operative parameters comparable to sWLE with no evidence of increased complications. A lower re-excision rate and better cosmesis were observed in the round block patients, suggesting that the round block technique is not only comparable in general, but may have advantages over sWLE in selected cases.
NASA Astrophysics Data System (ADS)
Wu, Puxun; Yu, Hongwei
2007-04-01
Constraints from the Gold sample Type Ia supernova (SN Ia) data, the Supernova Legacy Survey (SNLS) SN Ia data, and the size of the baryonic acoustic oscillation (BAO) peak found in the Sloan Digital Sky Survey (SDSS) on the generalized Chaplygin gas (GCG) model, proposed as a candidate for the unified dark matter-dark energy scenario (UDME), are examined in the cases of both a spatially flat and a spatially curved universe. Our results reveal that the GCG model is consistent with a flat universe up to the 68% confidence level, and the model parameters are within the allowed parameter ranges of the GCG as a candidate for UDME. Meanwhile, we find that in the flat case, both the Gold sample + SDSS BAO data and the SNLS sample + SDSS BAO data break the degeneracy of As and α and allow for the scenario of a cosmological constant plus dark matter (α=0) at the 68% confidence level, although they rule out the standard Chaplygin gas model (α=1) at the 99% confidence level. However, for the case without a flat prior, the SNLS SN Ia + SDSS BAO data do not break the degeneracy between As and α, and they allow for ΛCDM (α=0) and the standard Chaplygin gas model (α=1) at a 68% confidence level, while the Gold SN Ia + SDSS BAO break the degeneracy of As and α and rule out ΛCDM at a 68% confidence level and the standard Chaplygin gas model at a 99% confidence level.
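For orientation, the generalized Chaplygin gas referred to above is usually defined by the equation of state and background density (standard forms, not quoted in the abstract)

\[
p_{\mathrm{GCG}} = -\frac{A}{\rho_{\mathrm{GCG}}^{\alpha}},
\qquad
\rho_{\mathrm{GCG}}(a) = \rho_{\mathrm{GCG},0}\left[A_{s} + \frac{1-A_{s}}{a^{3(1+\alpha)}}\right]^{1/(1+\alpha)},
\qquad
A_{s} \equiv \frac{A}{\rho_{\mathrm{GCG},0}^{\,1+\alpha}},
\]

so that α = 0 reduces to a cosmological constant plus dark matter and α = 1 to the standard Chaplygin gas, the two limiting cases tested in the constraint analysis above.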
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively.
Analysis of surface integrity of grinded gears using Barkhausen noise analysis and x-ray diffraction
NASA Astrophysics Data System (ADS)
Vrkoslavová, Lucie; Louda, Petr; Malec, Jiři
2014-02-01
This contribution presents results of a study of ground gears made of 18CrNiMo7-6 steel used in wind power plants for support (service) purposes. These gears were case-hardened to form the standard hard case and soft core. This heat treatment increases the wear resistance and fatigue strength of machine parts. During serial production, problems with surface integrity occurred, and many samples were prepared while solving these complex problems. The gears were ground with different cutting speeds and amounts of material removal, using lots from different sub-suppliers. Material characterization was carried out using a Barkhausen noise analysis (BNA) device; X-ray diffraction (XRD) measurements of surface residual stresses were performed as well. Depth profiles of the measured characteristics, e.g. the magnetoelastic parameter and residual stress, were obtained by step-by-step layer removal using electrolytic etching. The BNA software Viewscan was used to measure the magnetizing frequency sweep (MFS) and magnetizing voltage sweep (MVS). Scanning of the magnetoelastic parameter (MP) along individual teeth was also carried out with Viewscan. These measurements were done to find problematic surface areas after grinding, such as thermally damaged locations. The hardness and the thickness of the case-hardened layer were also measured on cross sections. The structure of the subsurface case-hardened layer and core was evaluated on etched metallographic specimens. The aim of the measurements was to find correlations between the grinding conditions, residual stresses, and structural and magnetoelastic parameters. Based on the correlation of the measured values and the technological parameters, the production of the gears will be optimized.
Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David
2017-03-01
The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools for rib fracture assessment in forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole-body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different clinical and forensic experience, blinded to the autopsy results, evaluated all cases in a random order of modality and case. The rib fracture assessment of each reader was evaluated against autopsy and against a CT consensus read as the radiologic reference. A detailed evaluation of the relevant test parameters revealed better agreement with the CT consensus read than with the autopsy. Modality C was the significantly quickest rib fracture detection modality, despite slightly reduced statistical test parameters compared to modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared to standard autopsy alone. The eagle tool is easy to use, is suited for an initial rib fracture screening prior to autopsy, and can therefore be beneficial for forensic pathologists.
Imprints of a light sterile neutrino at DUNE, T2HK, and T2HKK
NASA Astrophysics Data System (ADS)
Choubey, Sandhya; Dutta, Debajyoti; Pramanik, Dipyaman
2017-09-01
We evaluate the impact of sterile neutrino oscillations in the so-called 3+1 scenario on the proposed long baseline experiments in the USA and Japan. There are two proposals for the Japan experiment, called T2HK and T2HKK. We show the impact of sterile neutrino oscillation parameters on the expected sensitivity of T2HK and T2HKK to the mass hierarchy, CP violation and the octant of θ23, and compare it against that expected in the case of standard oscillations. We add the expected ten years of data from DUNE and present the combined expected sensitivity of T2HKK+DUNE to the oscillation parameters. We do a full marginalization over the relevant parameter space and show the effect of the magnitude of the true sterile mixing angles on the physics reach of these experiments. We show that if one assumes that the source of CP violation is the standard CP phase alone in the test case, then it appears that the expected CP violation sensitivity decreases due to sterile neutrinos. However, if we give up this assumption, then the CP sensitivity could go in either direction. The impact on the expected octant of θ23 and mass hierarchy sensitivity is shown to depend on the magnitude of the sterile mixing angles in a nontrivial way.
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
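A toy sketch of the idea described above: augment an ensemble forecast with a time-correlated (AR(1), Ornstein-Uhlenbeck-like) fluctuation in the mechanical input power and update the ensemble with a standard perturbed-observation EnKF analysis step on a single-machine swing-equation-like state. The dynamics, noise parameters and observation model are illustrative assumptions, not the study's transmission-network test case.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, n_ens = 0.01, 200, 50
M, D, Pe = 5.0, 1.0, 1.2                  # inertia, damping, electrical power coeff.
tau, sigma = 1.0, 0.05                    # correlation time and std of Pm fluctuations
alpha = np.exp(-dt / tau)                 # AR(1) coefficient for time-correlated noise

# Ensemble state per member: [rotor angle delta, speed deviation omega, input power Pm].
ens = np.column_stack([
    rng.normal(0.5, 0.05, n_ens),
    rng.normal(0.0, 0.05, n_ens),
    rng.normal(0.8, 0.05, n_ens),
])

H = np.array([[1.0, 0.0, 0.0]])           # observe the angle only (PMU-like)
R = np.array([[0.02 ** 2]])

def forecast(ens):
    d, w, pm = ens.T
    # AR(1) fluctuation of the mechanical input power around a nominal 0.8 p.u.
    pm = 0.8 + alpha * (pm - 0.8) + sigma * np.sqrt(1 - alpha ** 2) * rng.normal(size=n_ens)
    w = w + dt / M * (pm - Pe * np.sin(d) - D * w)
    d = d + dt * w
    return np.column_stack([d, w, pm])

for step in range(n_steps):
    ens = forecast(ens)
    if step % 20 == 0:                                   # assimilate every 20 steps
        y = 0.6 + rng.normal(0.0, 0.02)                  # synthetic angle measurement
        X = ens - ens.mean(axis=0)
        P = X.T @ X / (n_ens - 1)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        y_pert = y + rng.normal(0.0, 0.02, n_ens)        # perturbed observations
        ens = ens + (y_pert[:, None] - ens @ H.T) @ K.T

print(ens.mean(axis=0))                                  # posterior mean [delta, omega, Pm]
```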
A New Lifetime Distribution with Bathtube and Unimodal Hazard Function
NASA Astrophysics Data System (ADS)
Barriga, Gladys D. C.; Louzada-Neto, Francisco; Cancho, Vicente G.
2008-11-01
In this paper we propose a new lifetime distribution which accommodates bathtub-shaped, unimodal, increasing and decreasing hazard functions. Some particular special cases are derived, including the standard Weibull distribution. Maximum likelihood estimation is considered for estimating the three parameters present in the model. The methodology is illustrated on a real data set on industrial devices on a life test.
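The abstract does not give the three-parameter density itself, so the sketch below sets up the same maximum-likelihood machinery for the standard Weibull special case it mentions; swapping in the paper's density would only change `neg_log_lik`. The synthetic data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = rng.weibull(1.5, size=200) * 2.0       # synthetic lifetimes (shape 1.5, scale 2.0)

def neg_log_lik(params, t):
    k, lam = params                        # Weibull shape and scale
    if k <= 0 or lam <= 0:
        return np.inf
    # log f(t) = log k - log lam + (k - 1) log(t/lam) - (t/lam)^k
    z = t / lam
    return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z ** k)

res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(t,), method="Nelder-Mead")
print(res.x)                               # fitted (shape, scale)
```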
Desensitization to Mycophenolate Mofetil: a novel 12 step protocol.
Smith, M; Gonzalez-Estrada, A; Fernandez, J; Subramanian, A
2016-07-01
The use of MMF has become standard practice in many solid organ transplant recipients due to its efficacy and favorable risk profile compared to other immunosuppressants. There has been a single case report of successful MMF desensitization; however, that protocol did not follow current drug practice parameters. We report a successful desensitization to MMF in a double heart-kidney transplant recipient.
Right-handed neutrino dark matter in a U(1) extension of the Standard Model
NASA Astrophysics Data System (ADS)
Cox, Peter; Han, Chengcheng; Yanagida, Tsutomu T.
2018-01-01
We consider minimal U(1) extensions of the Standard Model in which one of the right-handed neutrinos is charged under the new gauge symmetry and plays the role of dark matter. In particular, we perform a detailed phenomenological study for the case of a U(1)(B-L)3 flavoured B-L symmetry. If perturbativity is required up to high scales, we find an upper bound on the dark matter mass of m_χ ≲ 2 TeV, significantly stronger than that obtained in simplified models. Furthermore, if the U(1)(B-L)3 breaking scalar has significant mixing with the SM Higgs, there are already strong constraints from direct detection. On the other hand, there remains significant viable parameter space in the case of small mixing, which may be probed in the future via LHC Z' searches and indirect detection. We also comment on more general anomaly-free symmetries consistent with a TeV-scale RH neutrino dark matter candidate, and show that if two heavy RH neutrinos for leptogenesis are also required, one is naturally led to a single-parameter class of U(1) symmetries.
Solberg, E E; Borjesson, M; Sharma, S; Papadakis, M; Wilhelm, M; Drezner, J A; Harmon, K G; Alonso, J M; Heidbuchel, H; Dugmore, D; Panhuyzen-Goedkoop, N M; Mellwig, K-P; Carre, F; Rasmusen, H; Niebauer, J; Behr, E R; Thiene, G; Sheppard, M N; Basso, C; Corrado, D
2016-04-01
There are large variations in the incidence, registration methods and reported causes of sudden cardiac arrest/sudden cardiac death (SCA/SCD) in competitive and recreational athletes. A crucial question is to which degree these variations are genuine or partly due to methodological incongruities. This paper discusses the uncertainties about available data and provides comprehensive suggestions for standard definitions and a guide for uniform registration parameters of SCA/SCD. The parameters include a definition of what constitutes an 'athlete', incidence calculations, enrolment of cases, the importance of gender, ethnicity and age of the athlete, as well as the type and level of sporting activity. A precise instruction for autopsy practice in the case of a SCD of athletes is given, including the role of molecular samples and evaluation of possible doping. Rational decisions about cardiac preparticipation screening and cardiac safety at sport facilities requires increased data quality concerning incidence, aetiology and management of SCA/SCD in sports. Uniform standard registration of SCA/SCD in athletes and leisure sportsmen would be a first step towards this goal. © The European Society of Cardiology 2015.
Bogucki, Sz; Noszczyk-Nowak, A
2017-03-28
Heart rate variability is an established risk factor for mortality in both healthy dogs and animals with heart failure. The aim of this study was to compare short-term heart rate variability (ST-HRV) parameters from 60-min electrocardiograms in dogs with sick sinus syndrome (SSS, n=20) or chronic mitral valve disease (CMVD, n=20) and healthy controls (n=50), and to verify the clinical applicability of ST-HRV analysis. The study groups differed significantly in both time- and frequency-domain ST-HRV parameters. In the case of dogs with SSS and healthy controls, particularly evident differences pertained to HRV parameters linked directly to the variability of R-R intervals. Lower values of the standard deviation of all R-R intervals (SDNN), the standard deviation of the averaged R-R intervals for all 5-min segments (SDANN), the mean of the standard deviations of all R-R intervals for all 5-min segments (SDNNI) and the percentage of successive R-R intervals differing by more than 50 ms (pNN50) corresponded to a decrease in parasympathetic regulation of heart rate in dogs with CMVD. These findings imply that ST-HRV may be useful for the identification of dogs with SSS and for the detection of dysautonomia in animals with CMVD.
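A minimal sketch of the time-domain indices named above (SDNN, SDANN, SDNNI, pNN50), computed from a series of R-R intervals in milliseconds. The synthetic interval series and the 5-minute segmentation by cumulative time are illustrative, not the study's recordings.

```python
import numpy as np

def time_domain_hrv(rr_ms, segment_s=300.0):
    rr_ms = np.asarray(rr_ms, dtype=float)
    sdnn = rr_ms.std(ddof=1)                                # SDNN
    pnn50 = 100.0 * np.mean(np.abs(np.diff(rr_ms)) > 50.0)  # pNN50 (%)

    # Split the recording into consecutive 5-minute segments by cumulative time.
    t_s = np.cumsum(rr_ms) / 1000.0
    seg_id = (t_s // segment_s).astype(int)
    seg_means = [rr_ms[seg_id == s].mean() for s in np.unique(seg_id)]
    seg_sds = [rr_ms[seg_id == s].std(ddof=1) for s in np.unique(seg_id)
               if (seg_id == s).sum() > 1]
    sdann = np.std(seg_means, ddof=1)                       # SDANN
    sdnni = np.mean(seg_sds)                                # SDNN index
    return sdnn, sdann, sdnni, pnn50

# Illustrative 60-minute series of R-R intervals around 600 ms.
rng = np.random.default_rng(3)
rr = rng.normal(600.0, 40.0, size=6000)
print(time_domain_hrv(rr))
```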
A Non-Invasive Assessment of Cardiopulmonary Hemodynamics with MRI in Pulmonary Hypertension
Bane, Octavia; Shah, Sanjiv J.; Cuttica, Michael J.; Collins, Jeremy D.; Selvaraj, Senthil; Chatterjee, Neil R.; Guetter, Christoph; Carr, James C.; Carroll, Timothy J.
2015-01-01
Purpose We propose a method for non-invasive quantification of hemodynamic changes in the pulmonary arteries resulting from pulmonary hypertension (PH). Methods Using a two-element windkessel model, and input parameters derived from standard MRI evaluation of flow, cardiac function and valvular motion, we derive: pulmonary artery compliance (C), mean pulmonary artery pressure (mPAP), pulmonary vascular resistance (PVR), pulmonary capillary wedge pressure (PCWP), time-averaged intra-pulmonary pressure waveforms and pulmonary artery pressures (systolic (sPAP) and diastolic (dPAP)). MRI results were compared directly to reference standard values from right heart catheterization (RHC) obtained in a series of patients with suspected pulmonary hypertension (PH). Results In 7 patients with suspected PH undergoing RHC, MRI and echocardiography, there was no statistically significant difference (p<0.05) between parameters measured by MRI and RHC. Using standard clinical cutoffs to define PH (mPAP ≥ 25 mmHg), MRI was able to correctly identify all patients as having pulmonary hypertension, and to correctly distinguish between pulmonary arterial (mPAP≥ 25 mmHg, PCWP<15 mmHg) and venous hypertension (mPAP ≥ 25 mmHg, PCWP ≥ 15 mmHg) in 5 of 7 cases. Conclusions We have developed a mathematical model capable of quantifying physiological parameters that reflect the severity of PH. PMID:26283577
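A minimal sketch of the two-element windkessel relation underlying the approach above, C dP/dt = Q(t) − P/R, integrated with a forward-Euler step from a prescribed pulmonary flow waveform. The flow shape and the R and C values are illustrative assumptions, not the study's MRI-derived inputs.

```python
import numpy as np

def windkessel_pressure(q, dt, R, C, p0=15.0):
    """Two-element windkessel: C dP/dt = Q(t) - P/R (pressure in mmHg)."""
    p = np.empty_like(q)
    p[0] = p0
    for i in range(1, len(q)):
        dpdt = (q[i - 1] - p[i - 1] / R) / C
        p[i] = p[i - 1] + dt * dpdt
    return p

# Illustrative flow waveform: half-sine systolic ejection, zero flow in diastole.
dt, t_cycle = 0.001, 0.8                                      # s
t = np.arange(0.0, t_cycle, dt)
q = np.where(t < 0.3, 80.0 * np.sin(np.pi * t / 0.3), 0.0)    # mL/s

R = 0.25    # mmHg*s/mL  (assumed pulmonary vascular resistance)
C = 4.0     # mL/mmHg    (assumed pulmonary arterial compliance)
p = windkessel_pressure(q, dt, R, C)
print(p.max(), p.min(), p.mean())                             # crude sPAP, dPAP, mPAP analogues
```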
Filter design for the detection of compact sources based on the Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.
2005-05-01
This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and it takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). The value of these two parameters can be determined, given the a priori probability density function of the amplitudes of the sources, such that the filter optimizes the performance of the detector in the sense that it gives the maximum number of real detections once it has fixed the number density of spurious sources. The new filter includes as particular cases the standard MF and the SAF. As a result of its design, the BSAF outperforms these filters. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) improves significantly the number of detections in some interesting cases. In particular, for the case of weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent. Finally, an estimation of the amplitude of the source (most probable value) is introduced and it is proven that such an estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical results.
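A minimal one-dimensional sketch of matched filtering of a Gaussian-profile compact source embedded in white noise, the simplest of the filters compared above, with detection at local maxima above a threshold. The source amplitude, width and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, width, amp = 1024, 5.0, 2.0

# Data: a Gaussian-profile source at the centre plus unit-variance white noise.
x = np.arange(n)
profile = np.exp(-0.5 * ((x - n // 2) / width) ** 2)
data = amp * profile + rng.normal(size=n)

# For white noise the matched filter is the (normalised) source profile itself.
template = profile / np.sqrt(np.sum(profile ** 2))
filtered = np.convolve(data, template[::-1], mode="same")

# Detection: local maxima of the filtered field above a threshold.
thresh = 4.0                                  # in units of the filtered-noise sigma
is_peak = (filtered[1:-1] > filtered[:-2]) & (filtered[1:-1] > filtered[2:])
candidates = np.where(is_peak & (filtered[1:-1] > thresh))[0] + 1
print(candidates)                             # indices of detected maxima
```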
Takdastan, Afshin; Mirzabeygi Radfard, Majid; Yousefi, Mahmood; Abbasnia, Abbas; Khodadadia, Rouhollah; Soleimani, Hamed; Mahvi, Amir Hossein; Naghan, Davood Jalili
2018-06-01
According to World Health Organization guidelines, corrosion control is an important aspect of safe drinking-water supplies. Water always contains dissolved constituents, dissolved gases and suspended materials. Although some of these constituents are indispensable for human beings, concentrations above permissible limits can endanger human health. The aim of this study is to assess the physical and chemical parameters of drinking water in the rural areas of Lordegan city and to determine corrosion indices. This cross-sectional study was carried out on 141 samples taken during 2017, with 13 parameters analyzed according to standard methods, and water quality indices for the groundwater were estimated using ANFIS. The results are also compared with Environmental Protection Agency and Iranian national standards. Five indices, the Ryznar Stability Index (RSI), Langelier Saturation Index (LSI), Larson-Skold Index (LS), Puckorius Scaling Index (PSI), and Aggressive Index (AI), were programmed using Microsoft Excel software. Owing to its simplicity, the program can easily be used by researchers and operators. Sulfate, sodium, chloride, and electrical conductivity exceeded the standard level in 13.5%, 28%, 10.5%, and 15% of cases, respectively. Nitrate was within permissible limits in 98% of cases and exceeded the standard level in about 2%. The results of the presented research indicate that water is corrosive in 10.6%, 89.4%, 87.2%, 59.6% and 14.9% of drinking water supply reservoirs, according to the LSI, RSI, PSI, LS and AI, respectively.
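A minimal sketch of two of the indices listed above, the Langelier Saturation Index (LSI = pH − pHs) and the Ryznar Stability Index (RSI = 2·pHs − pH), using a common approximation for the saturation pH. The coefficients in the pHs approximation and the sample water quality values are illustrative and should be checked against the standard method actually used in the study.

```python
import math

def saturation_ph(tds_mg_l, temp_c, ca_hardness_caco3, alkalinity_caco3):
    """Common approximation of pHs used for the Langelier/Ryznar indices."""
    A = (math.log10(tds_mg_l) - 1.0) / 10.0
    B = -13.12 * math.log10(temp_c + 273.0) + 34.55
    C = math.log10(ca_hardness_caco3) - 0.4
    D = math.log10(alkalinity_caco3)
    return (9.3 + A + B) - (C + D)

def langelier_ryznar(ph, tds, temp_c, ca, alk):
    phs = saturation_ph(tds, temp_c, ca, alk)
    lsi = ph - phs            # > 0 scale-forming tendency, < 0 corrosive tendency
    rsi = 2.0 * phs - ph      # values above roughly 6.8 are often read as corrosive
    return lsi, rsi

# Illustrative sample (assumed values, not the Lordegan measurements).
print(langelier_ryznar(ph=7.6, tds=450.0, temp_c=20.0, ca=180.0, alk=220.0))
```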
Consideration of rainwater quality parameters for drinking purposes: A case study in rural Vietnam.
Lee, Minju; Kim, Mikyeong; Kim, Yonghwan; Han, Mooyoung
2017-09-15
Rainwater, which is used for drinking purposes near Hanoi, Vietnam, was analysed for water quality based on 1.5 years of monitoring data. In total, 23 samples were collected from different points within two rainwater harvesting systems (RWHSs). Most parameters met the standard except micro-organisms. Coliform and Escherichia coli (E. coli) were detected when the rainwater was not treated with ultraviolet (UV) light; however, analysis of rainwater after UV sterilisation showed no trace of micro-organisms. The RWHSs appear to provide drinking water of relatively good quality compared with surface water and groundwater. The superior quality of the rainwater suggests the need for new drinking rainwater standards, because applying all of the general drinking water quality standards to rainwater is highly inefficient. The traditionally implemented standards could cause more difficulties for developing countries using decentralized RWHSs as a source of drinking water, particularly in areas not well supplied with testing equipment, because such countries must bear the expense and time for these measures. This paper proposes the need for a rainwater quality guideline, which could serve as a safe and cost-effective alternative to provide access to safe drinking water. Copyright © 2017 Elsevier Ltd. All rights reserved.
Taming Many-Parameter BSM Models with Bayesian Neural Networks
NASA Astrophysics Data System (ADS)
Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.
2017-09-01
The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.
NASA Astrophysics Data System (ADS)
Masian, Y.; Sivak, A.; Sevostianov, D.; Vassiliev, V.; Velichansky, V.
The paper presents results of studies of small-size rubidium cells with argon and neon buffer gases, produced by a patent-pending laser-welding technique [Fishman et al. (2014)]. The cells were designed for a miniature frequency standard. The temperature dependence of the frequency of the coherent population trapping (CPT) resonance was measured and used to optimize the ratio of partial pressures of the buffer gases. The influence of the duration and regime of annealing on the CPT-resonance frequency drift was investigated. The parameters of the FM modulation of the laser current were determined for two cases, corresponding to the highest CPT-resonance amplitude and to the smallest light shift of the resonance frequency. The temperature dependences of the CPT resonance frequency were found to be surprisingly different in the two cases. For one of them, a non-linear dependence of the CPT resonance frequency on the cell temperature with two extrema was revealed.
Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-10-01
A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
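A minimal numerical sketch of the linearized-posterior idea is given below: a toy two-parameter model is fitted by nonlinear least squares, and the approximate posterior covariance, standard deviations, and parameter correlations are obtained from the Jacobian at the optimum. The model form and data are illustrative only and do not reproduce the nine-parameter catchment study.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy two-parameter "catchment" response: linearized posterior covariance from
# the Jacobian at the least-squares fit (the multivariate-normal approximation).
def simulate(theta, rain):
    k, s = theta
    return s * (1.0 - np.exp(-k * rain))

rng = np.random.default_rng(1)
rain = np.linspace(0.1, 50, 120)
obs = simulate([0.08, 40.0], rain) + rng.normal(0, 1.5, rain.size)

fit = least_squares(lambda th: simulate(th, rain) - obs, x0=[0.05, 30.0])
dof = rain.size - fit.x.size
sigma2 = np.sum(fit.fun ** 2) / dof                   # residual variance
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)     # approximate posterior covariance
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)
print("estimates:", fit.x, "posterior sd:", sd)
print("parameter correlation:\n", corr)
```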
Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction
NASA Astrophysics Data System (ADS)
Fukushima, H.; Toratani, M.
1997-07-01
The paper first exhibits the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance, especially in the shorter wavelength region. This suggests the presence of spectrally dependent absorption, which was disregarded in past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol that relates aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm is developed. Then, as a modification to a standard CZCS atmospheric correction algorithm (NASA standard algorithm), a scheme which estimates pixel-wise aerosol optical thickness, and in turn ωA, is proposed. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of the aerosol scattering phase function. The scheme is combined with the standard algorithm, performing atmospheric correction in the same way as the standard version with a fixed Angstrom coefficient, except in cases where the presence of Asian dust aerosol is detected by a lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values for the spectral dependency of ωA first determined statistically and then optimized for selected pixels. Analysis suggests that the parameter values depend on the Angstrom coefficient assumed for the standard algorithm, which at the same time defines the spatial extent of the area to which the Asian dust scheme is applied. The algorithm was also tested for a Saharan dust scene, showing the relevance of the scheme but with a different parameter setting. Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.
Lim, Changwon
2015-03-30
Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.
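The sketch below shows one generic way a ridge penalty can be attached to a nonlinear dose-response fit, by augmenting the residual vector with a shrinkage term. It illustrates the ridge idea under assumed Hill-model data; it is not the robust ridge estimator proposed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Ridge-penalised fit of a Hill-type dose-response curve: the extra residuals
# sqrt(lam)*(theta - theta0) shrink the estimates and stabilise their standard
# errors when the unpenalised problem is ill-conditioned.
def hill(theta, dose):
    e0, emax, ec50, h = theta
    return e0 + emax * dose**h / (ec50**h + dose**h)

rng = np.random.default_rng(0)
dose = np.logspace(-2, 2, 40)
resp = hill([1.0, 5.0, 3.0, 1.2], dose) + rng.normal(0, 0.4, dose.size)

theta0 = np.array([1.0, 4.0, 2.0, 1.0])   # shrinkage target / starting values (assumed)
lam = 0.1                                 # ridge penalty; would be tuned in practice

def penalised_residuals(theta):
    return np.concatenate([hill(theta, dose) - resp,
                           np.sqrt(lam) * (theta - theta0)])

fit = least_squares(penalised_residuals, x0=theta0)
print("ridge-penalised estimates:", fit.x)
```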
NASA Technical Reports Server (NTRS)
Gatlin, L. L.
1974-01-01
Concepts of information theory are applied to examine various proteins in terms of their redundancy in natural organisms such as animals and plants. The Monte Carlo method is used to derive information parameters for random protein sequences. Real protein sequence parameters are compared with the standard parameters of protein sequences having a specific length. The tendency of a chain to contain some amino acids more frequently than others and the tendency of a chain to contain certain amino acid pairs more frequently than other pairs are used as randomness measures of individual protein sequences. Non-periodic proteins are generally found to have random Shannon redundancies except in cases of constraints due to short chain length and genetic codes. Redundant characteristics of highly periodic proteins are discussed. A degree of periodicity parameter is derived.
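The two tendencies used as randomness measures above can be illustrated with elementary Shannon quantities, as in the hedged sketch below; the example sequence is arbitrary, not one of the proteins analysed, and the pair-based estimate is only approximate for such a short chain.

```python
import math
from collections import Counter

# D1 captures unequal amino-acid usage; D2 captures preference for particular pairs.
AA = "ACDEFGHIKLMNPQRSTVWY"
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

h_max = math.log2(len(AA))               # 4.32 bits for 20 equiprobable amino acids
h1 = entropy(Counter(seq))               # single-residue entropy
h_pair = entropy(Counter(zip(seq, seq[1:])))
h_cond = h_pair - h1                     # approx. H(next residue | current residue)

d1 = h_max - h1                          # divergence from equiprobable usage
d2 = h1 - h_cond                         # divergence from pair independence
redundancy = (d1 + d2) / h_max           # = 1 - H_cond / H_max
print(f"D1 = {d1:.3f} bits, D2 = {d2:.3f} bits, redundancy = {redundancy:.3f}")
```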
Hourly global and diffuse radiation of Lagos, Nigeria-correlation with some atmospheric parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chendo, M.A.C.; Maduekwe, A.A.L.
1994-03-01
The influence of four climatic parameters on the hourly diffuse fraction in Lagos, Nigeria, has been studied. Using data for two years, new correlations were established. The standard error of the Liu and Jordan-type equation was reduced by 12.83% when solar elevation, ambient temperature, and relative humidity were used together as predictor variables for the entire data set. Ambient temperature and relative humidity proved to be very important variables for predicting the diffuse fraction of the solar radiation passing through the humid atmosphere of the coastal, tropical city of Lagos. Seasonal analysis carried out with the data showed improvements in the standard errors for the new seasonal correlations: for the dry season the improvement was 18.37%, while for the wet season it was 12.37%. Comparison with existing correlations showed that the performance of the one-parameter (clearness index Kt) models of Orgill and Hollands and of Reindl, Beckman, and Duffie was very different from the Liu and Jordan-type model obtained for Lagos.
Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.
Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio
2014-11-24
The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the models to some real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than conditional logistic analyses, and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
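The equivalence noted above between conditional logistic case-crossover estimates and a Poisson model with stratum indicators can be shown directly; the sketch below fits the stratum-indicator Poisson version to simulated time-stratified data. True conditional Poisson routines, which avoid estimating the indicator terms, are what the paper advocates and are available in Stata and R; the variable names and simulated effect here are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Time-stratified analysis of daily counts: Poisson regression with stratum
# indicators, reproducing the conditional-logistic case crossover estimate.
rng = np.random.default_rng(42)
n_days = 3 * 365
df = pd.DataFrame({
    "pollution": rng.normal(50, 10, n_days),
    "day": pd.date_range("2010-01-01", periods=n_days),
})
# Stratum = year x month x day-of-week, the usual time-stratified design
df["stratum"] = (df["day"].dt.year.astype(str) + "-" +
                 df["day"].dt.month.astype(str) + "-" +
                 df["day"].dt.dayofweek.astype(str))
true_beta = 0.01
mu = np.exp(2.0 + true_beta * (df["pollution"] - 50))
df["deaths"] = rng.poisson(mu)

fit = smf.glm("deaths ~ pollution + C(stratum)", data=df,
              family=sm.families.Poisson()).fit()
print(fit.params["pollution"], fit.bse["pollution"])   # log relative rate per unit exposure
```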
Escrihuela, F. J.; Forero, D. V.; Miranda, O. G.; ...
2017-09-08
When neutrino masses arise from the exchange of neutral heavy leptons, as in most seesaw schemes, the effective lepton mixing matrix N describing neutrino propagation is non-unitary, hence neutrinos are not exactly orthonormal. New CP violation phases appear in N that could be confused with the standard phase δCP characterizing the three-neutrino paradigm. We study the potential of the long-baseline neutrino experiment DUNE in probing CP violation induced by the standard CP phase in the presence of non-unitarity. In order to accomplish this we develop our previous formalism, so as to take into account the neutrino interactions with the medium, important in long-baseline experiments such as DUNE. In this study we find that the expected CP sensitivity of DUNE is somewhat degraded with respect to that characterizing the standard unitary case. However, the effect is weaker than might have been expected, thanks mainly to the wide neutrino beam. We also investigate the sensitivity of DUNE to the parameters characterizing non-unitarity. In this case we find that there is no improvement expected with respect to the current situation, unless the near detector setup is revamped.
Artifactual ECG changes induced by electrocautery in a patient with coronary artery disease.
Naik, B Naveen; Luthra, Ankur; Dwivedi, Ashish; Jafra, Anudeep
Continuous monitoring of the 5-lead electrocardiogram is a basic standard of care (included under standard ASA monitoring) in the operating room, and electrocautery interference is a common phenomenon. Clinical signs, along with waveforms from other simultaneously monitored parameters, may provide clues to differentiate artifacts from true changes on the electrocardiogram. An improved understanding of the artifacts generated by electrocautery and their identifying characteristics is important to avoid misinterpretation, misdiagnosis, and hence mismanagement. This case report highlights the artifacts in the electrocardiogram induced by electrocautery. Copyright © 2017 Elsevier Inc. All rights reserved.
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
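A toy version of the prior-sampling-plus-rejection idea is sketched below for a circular one-companion radial velocity model: the nonlinear parameters are drawn from their priors, the linear amplitude and systemic-velocity parameters are marginalized analytically under Gaussian assumptions, and samples are retained by rejection against the marginal likelihood. This is an illustration of the scheme, not The Joker's implementation, and all numbers are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 200, 5))                        # sparse epochs (days)
rv = 12.0 * np.sin(2*np.pi*t/37.0 + 0.4) + 3.0 + rng.normal(0, 1.0, t.size)
sigma = 1.0                                                # per-point RV uncertainty

def design(period, phi):
    # linear parameters: [sinusoid amplitude, constant barycentric velocity]
    return np.column_stack([np.sin(2*np.pi*t/period + phi), np.ones_like(t)])

def marginal_loglike(period, phi, prior_var=100.0**2):
    # marginalise the linear parameters under zero-mean Gaussian priors
    A = design(period, phi)
    cov = A @ (prior_var * np.eye(2)) @ A.T + sigma**2 * np.eye(t.size)
    return multivariate_normal.logpdf(rv, mean=np.zeros(t.size), cov=cov)

n = 20_000
periods = np.exp(rng.uniform(np.log(2), np.log(500), n))   # log-uniform prior
phis = rng.uniform(0, 2*np.pi, n)
logL = np.array([marginal_loglike(p, f) for p, f in zip(periods, phis)])

keep = np.log(rng.uniform(size=n)) < logL - logL.max()     # rejection step
print(f"kept {keep.sum()} of {n}; surviving periods span "
      f"{periods[keep].min():.1f}-{periods[keep].max():.1f} d")
```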
Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj
2018-01-01
Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method of diagnosis of breast masses. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. The aim was to apply nuclear morphometry to cytological aspirates of breast cancer and to evaluate its correlation with cytomorphological grading, deriving suitable cutoff values between grades, in a descriptive cross-sectional hospital-based study. This study included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou-stained FNAC slides using NIS-Elements Advanced Research software (Ver 4.00). The nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Nuclear size parameters showed an increase in values with increasing cytological grade of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among the nuclear texture parameters, sum intensity and sum brightness were found to differ between the three grades. Nuclear morphometry can be applied to augment the cytological grading of breast cancer and thus help in classifying patients into low- and high-risk groups.
Stop co-annihilation in the minimal supersymmetric standard model revisited
NASA Astrophysics Data System (ADS)
Pierce, Aaron; Shah, Nausheen R.; Vogl, Stefan
2018-01-01
We reexamine the stop co-annihilation scenario of the minimal supersymmetric standard model, wherein a binolike lightest supersymmetric particle has a thermal relic density set by co-annihilations with a scalar partner of the top quark in the early universe. We concentrate on the case where only the top partner sector is relevant for the cosmology, and other particles are heavy. We discuss the cosmology with focus on low energy parameters and an emphasis on the implications of the measured Higgs boson mass and its properties. We find that the irreducible direct detection signal correlated with this cosmology is generically well below projected experimental sensitivity, and in most cases lies below the neutrino background. A larger, detectable, direct detection rate is possible, but is unrelated to the co-annihilation cosmology. LHC searches for compressed spectra are crucial for probing this scenario.
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BABA,T.; ISHIGURO,K.; ISHIHARA,Y.
1999-08-30
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a Features, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference and many alternative cases representing various groups of FEPs were defined and individual numerical simulations performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters as well as a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
CO2 Push-Pull Dual (Conjugate) Faults Injection Simulations
Oldenburg, Curtis (ORCID:0000000201326016); Lee, Kyung Jae; Doughty, Christine; Jung, Yoojin; Borgia, Andrea; Pan, Lehua; Zhang, Rui; Daley, Thomas M.; Altundas, Bilgin; Chugunov, Nikita
2017-07-20
This submission contains datasets and a final manuscript associated with a project simulating carbon dioxide push-pull into a conjugate fault system modeled after Dixie Valley, including sensitivity analysis of significant parameters and uncertainty prediction by data-worth analysis. Datasets include: (1) forward simulation runs of standard cases (push & pull phases), (2) local sensitivity analyses (push & pull phases), and (3) data-worth analysis (push & pull phases).
Air Pollution and Quality of Sperm: A Meta-Analysis
Fathi Najafi, Tahereh; Latifnejad Roudsari, Robab; Namvar, Farideh; Ghavami Ghanbarabadi, Vahid; Hadizadeh Talasaz, Zahra; Esmaeli, Mahin
2015-01-01
Context: Air pollution is common in all countries and affects reproductive functions in men and women. It particularly impacts sperm parameters in men. This meta-analysis aimed to examine the impact of air pollution on the quality of sperm. Evidence Acquisition: The scientific databases of Medline, PubMed, Scopus, Google Scholar, Cochrane Library, and Elsevier were searched to identify relevant articles published between 1978 and 2013. In the first step, 76 articles were selected. These were ecological correlation, cohort, retrospective, cross-sectional, and case-control studies found through electronic and hand searches of references about air pollution and male infertility. The outcome measurement was the change in sperm parameters. A total of 11 articles were ultimately included in a meta-analysis to examine the impact of air pollution on sperm parameters. The authors applied meta-analysis sheets from the Cochrane Library; data, including the mean and standard deviation of the sperm parameters, were then extracted, and the resulting confidence intervals (CIs) were compared with the CIs of the standard parameters. Results: The CIs for the pooled means were as follows: 2.68 ± 0.32 for ejaculation volume (mL), 62.1 ± 15.88 for sperm concentration (million per milliliter), 39.4 ± 5.52 for sperm motility (%), 23.91 ± 13.43 for sperm morphology (%) and 49.53 ± 11.08 for sperm count. Conclusions: The results of this meta-analysis showed that air pollution reduces sperm motility, but has no impact on the other sperm parameters of the spermogram. PMID:26023349
Waschke, Albrecht; Arefian, Habibollah; Walter, Jan; Hartmann, Michael; Maschmann, Jens; Kalff, Rolf
2018-06-01
Concomitant radiochemotherapy followed by six cycles of temozolomide (= short term) is considered the standard therapy for adults with newly diagnosed glioblastoma. In contrast, open-ended administration of temozolomide until progression (= long term) is proposed by some authors as a viable alternative. We aimed to determine the cost-effectiveness of long-term temozolomide therapy for patients newly diagnosed with glioblastoma compared to standard therapy. A Markov model was constructed to compare medical costs and clinical outcomes for both therapy types over a time horizon of 60 months. Transition probabilities for standard therapy were calculated from randomized controlled trial data by Stupp et al. The data for long-term temozolomide therapy were collected by matching a cohort treated in the Department of Neurosurgery at Jena University Hospital. Health utilities were obtained from a previous cost-utility study. The cost perspective was that of health insurance. The base case analysis showed a median overall survival of 17.1 months and a median progression-free survival of 7.4 months for patients in the long-term temozolomide therapy arm. The cost-effectiveness analysis using all base case parameters in a time-dependent Markov model resulted in an incremental effectiveness of 0.022 quality-adjusted life-years (QALYs). The incremental cost-effectiveness ratio (ICER) was €351,909/QALY. Sensitivity analyses showed that the parameters with the most influence on the ICER were the health-state utilities of progression in both therapy arms. Although open-ended temozolomide therapy is very expensive, the ICER of this therapy is comparable to that of the standard temozolomide therapy for patients newly diagnosed with glioblastoma.
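The sketch below shows the mechanics of such a cost-effectiveness comparison with a minimal three-state Markov cohort model over a 60-month horizon; all transition probabilities, costs, and utilities are placeholders rather than the values estimated in the study.

```python
import numpy as np

# Minimal three-state Markov cohort model (progression-free, progressed, dead)
# illustrating how incremental QALYs and an ICER are computed.
horizon = 60          # monthly cycles
cycle_months = 1

def run(P, cost_per_cycle, utility):
    dist = np.array([1.0, 0.0, 0.0])          # cohort starts progression-free
    total_cost = total_qaly = 0.0
    for _ in range(horizon):
        total_cost += dist @ cost_per_cycle
        total_qaly += dist @ utility * (cycle_months / 12.0)
        dist = dist @ P                        # advance one cycle
    return total_cost, total_qaly

utility = np.array([0.80, 0.55, 0.0])          # health-state utilities (assumed)

P_std  = np.array([[0.88, 0.09, 0.03],         # rows: from-state transition probabilities
                   [0.00, 0.92, 0.08],
                   [0.00, 0.00, 1.00]])
P_long = np.array([[0.90, 0.07, 0.03],
                   [0.00, 0.92, 0.08],
                   [0.00, 0.00, 1.00]])
cost_std  = np.array([2500.0, 1800.0, 0.0])    # EUR per cycle (assumed)
cost_long = np.array([4200.0, 1800.0, 0.0])

c0, q0 = run(P_std, cost_std, utility)
c1, q1 = run(P_long, cost_long, utility)
icer = (c1 - c0) / (q1 - q0)
print(f"incremental cost {c1-c0:,.0f} EUR, incremental QALYs {q1-q0:.3f}, ICER {icer:,.0f} EUR/QALY")
```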
Deng, Nina; Anatchkova, Milena D; Waring, Molly E; Han, Kyung T; Ware, John E
2015-08-01
The Quality-of-life (QOL) Disease Impact Scale (QDIS(®)) standardizes the content and scoring of QOL impact attributed to different diseases using item response theory (IRT). This study examined the IRT invariance of the QDIS-standardized IRT parameters in an independent sample. The differential functioning of items and test (DFIT) of a static short-form (QDIS-7) was examined across two independent sources: patients hospitalized for acute coronary syndrome (ACS) in the TRACE-CORE study (N = 1,544) and chronically ill US adults in the QDIS standardization sample. "ACS-specific" IRT item parameters were calibrated and linearly transformed to compare to "standardized" IRT item parameters. Differences in IRT model-expected item, scale and theta scores were examined. The DFIT results were also compared in a standard logistic regression differential item functioning analysis. Item parameters estimated in the ACS sample showed lower discrimination parameters than the standardized discrimination parameters, but only small differences were found for thresholds parameters. In DFIT, results on the non-compensatory differential item functioning index (range 0.005-0.074) were all below the threshold of 0.096. Item differences were further canceled out at the scale level. IRT-based theta scores for ACS patients using standardized and ACS-specific item parameters were highly correlated (r = 0.995, root-mean-square difference = 0.09). Using standardized item parameters, ACS patients scored one-half standard deviation higher (indicating greater QOL impact) compared to chronically ill adults in the standardization sample. The study showed sufficient IRT invariance to warrant the use of standardized IRT scoring of QDIS-7 for studies comparing the QOL impact attributed to acute coronary disease and other chronic conditions.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ are reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
Generalized Ince Gaussian beams
NASA Astrophysics Data System (ADS)
Bandres, Miguel A.; Gutiérrez-Vega, Julio C.
2006-08-01
In this work we present a detailed analysis of the three families of generalized Gaussian beams, which are the generalized Hermite, Laguerre, and Ince Gaussian beams. The generalized Gaussian beams are not the solution of a Hermitian operator at an arbitrary z plane. We derive the adjoint operator and the adjoint eigenfunctions. Each family of generalized Gaussian beams forms a complete biorthonormal set with their adjoint eigenfunctions; therefore, any paraxial field can be described as a superposition of a generalized family with the appropriate weighting and phase factors. Each family of generalized Gaussian beams includes the standard and elegant corresponding families as particular cases when the parameters of the generalized families are chosen properly. The generalized Hermite Gaussian and Laguerre Gaussian beams correspond to limiting cases of the generalized Ince Gaussian beams when the ellipticity parameter of the latter tends to infinity or to zero, respectively. The expansion formulas among the three generalized families and their Fourier transforms are also presented.
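The standard Hermite-Gaussian modes mentioned as a particular case have a simple closed form at the beam waist; a short sketch evaluating one such mode is given below, with arbitrary mode orders and waist size.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Standard Hermite-Gaussian amplitude pattern at the waist plane (z = 0):
# HG_nm(x, y) ∝ H_n(√2 x/w0) H_m(√2 y/w0) exp(-(x² + y²)/w0²)
def hermite_gaussian(n, m, x, y, w0=1.0):
    cn = np.zeros(n + 1); cn[n] = 1.0          # coefficient vector selecting H_n
    cm = np.zeros(m + 1); cm[m] = 1.0          # coefficient vector selecting H_m
    hx = hermval(np.sqrt(2.0) * x / w0, cn)
    hy = hermval(np.sqrt(2.0) * y / w0, cm)
    return hx * hy * np.exp(-(x**2 + y**2) / w0**2)

x = np.linspace(-3, 3, 201)
X, Y = np.meshgrid(x, x)
field = hermite_gaussian(2, 1, X, Y)           # HG_21 amplitude pattern
print(field.shape, field.max())
```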
NASA Technical Reports Server (NTRS)
Browning, G. L.; Holzer, T. E.
1992-01-01
The paper derives the 'reduced' system of equations commonly used to describe the time evolution of the polar wind and multiconstituent stellar winds from the equations for a multispecies plasma with known temperature profiles by assuming that the electron thermal speed approaches infinity. The reduced system is proved to have unbounded growth near the sonic point of the protons for many of the standard parameter cases. For the same parameter cases, the unmodified system exhibits growth in some of the Fourier modes, but this growth is bounded. An alternate system (the 'approximate' system) in which the electron thermal speed is slowed down is introduced. The approximate system retains the mathematical behavior of the unmodified system and can be shown to accurately describe the smooth solutions of the unmodified system. Other advantages of the approximate system over the reduced system are discussed.
Cho, Eun-Joo; Yang, Jin-Young; Lee, Eun-Sook; Kim, Se-Chul; Cha, So-Yang; Kim, Sung-Tek; Lee, Man-Ho; Han, Sun-Hee; Park, Young-Sang
2013-08-01
From May to June 2012, a waterborne outbreak of 124 cases of cryptosporidiosis occurred in the plumbing systems of an older high-rise apartment complex in Seoul, Republic of Korea. The residents of this apartment complex had symptoms of watery diarrhea and vomiting. Tap water samples in the apartment complex and its adjacent buildings were collected and tested for 57 parameters under the Korean Drinking Water Standards and for 11 additional microbiological parameters. The microbiological parameters included total colony counts, Clostridium perfringens, Enterococcus, fecal streptococcus, Salmonella, Shigella, Pseudomonas aeruginosa, Cryptosporidium oocysts, Giardia cysts, total culturable viruses, and Norovirus. While the tap water samples of the adjacent buildings complied with the Korean Drinking Water Standards for all parameters, fecal bacteria and Cryptosporidium oocysts were detected in the tap water samples of the outbreak apartment complex. The agent of the disease turned out to be Cryptosporidium parvum. The drinking water had been polluted with sewage from a septic tank in the apartment complex. To remove C. parvum oocysts, we conducted the physical processes of cleaning the water storage tanks, flushing the indoor pipes, and replacing old pipes with new ones. Finally, clean drinking water was restored to the apartment complex after no oocysts were identified.
NASA Astrophysics Data System (ADS)
Chatzidimitriou-Dreismann, C. A.; Gray, E. MacA.; Blach, T. P.
2012-06-01
The "standard" procedure for calibrating the Vesuvio eV neutron spectrometer at the ISIS neutron source, forming the basis for data analysis over at least the last decade, was recently documented in considerable detail by the instrument's scientists. Additionally, we recently derived analytic expressions of the sensitivity of recoil peak positions with respect to fight-path parameters and presented neutron-proton scattering results that together called into question the validity of the "standard" calibration. These investigations should contribute significantly to the assessment of the experimental results obtained with Vesuvio. Here we present new results of neutron-deuteron scattering from D2 in the backscattering angular range (θ>90°) which are accompanied by a striking energy increase that violates the Impulse Approximation, thus leading unequivocally the following dilemma: (A) either the "standard" calibration is correct and then the experimental results represent a novel quantum dynamical effect of D which stands in blatant contradiction of conventional theoretical expectations; (B) or the present "standard" calibration procedure is seriously deficient and leads to artificial outcomes. For Case (A), we allude to the topic of attosecond quantum dynamical phenomena and our recent neutron scattering experiments from H2 molecules. For Case (B), some suggestions as to how the "standard" calibration could be considerably improved are made.
On Madelung systems in nonlinear optics: A reciprocal invariance
NASA Astrophysics Data System (ADS)
Rogers, Colin; Malomed, Boris
2018-05-01
The role of the de Broglie-Bohm potential, originally established as central to Bohmian quantum mechanics, is examined for two canonical Madelung systems in nonlinear optics. In a seminal case, a Madelung system derived by Wagner et al. via the paraxial approximation and in which the de Broglie-Bohm potential is present is shown to admit a multi-parameter class of what are here introduced as "q-gaussons." In the limit, as the Tsallis parameter q → 1, the q-gaussons are shown to lead to standard gausson solitons, as admitted by the logarithmic nonlinear Schrödinger equation encapsulating the Madelung system. The q-gaussons are obtained for optical media with a dual power-law refractive index. In the second case, a Madelung system originally derived via an eikonal approximation in the context of laser beam propagation and in which the de Broglie-Bohm term is neglected is shown to admit invariance under a novel two-parameter class of reciprocal transformations. Model optical laws analogous to the celebrated Kármán-Tsien law of classical gas dynamics are introduced.
Pattern statistics on Markov chains and sensitivity to parameter estimation
Nuel, Grégory
2006-01-01
Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences,...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of σ, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. Conclusion: We establish that the use of a high-order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916
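The sensitivity being quantified can also be seen numerically: in the sketch below, first-order Markov parameters are re-estimated on independently simulated sequences of the same length and the spread of a resulting expected pattern count is reported. This is a simulation illustration only, not the delta-method derivation used in the paper, and the model and word are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
alphabet = "ACGT"
true_pi = np.full(4, 0.25)
true_P = np.array([[.30, .20, .30, .20],
                   [.25, .25, .25, .25],
                   [.20, .30, .20, .30],
                   [.25, .25, .25, .25]])
word = "ACGT"
idx = {a: i for i, a in enumerate(alphabet)}

def simulate(n):
    # draw a sequence of symbol indices from the true first-order Markov chain
    s = [rng.choice(4, p=true_pi)]
    for _ in range(n - 1):
        s.append(rng.choice(4, p=true_P[s[-1]]))
    return s

def expected_count(seq, word, n):
    # re-estimate Markov parameters from this sequence, then compute the
    # expected number of (possibly overlapping) word occurrences
    counts = np.ones((4, 4))                      # +1 pseudo-counts
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    pi = np.bincount(seq, minlength=4) / len(seq)
    w = [idx[c] for c in word]
    p = pi[w[0]]
    for a, b in zip(w, w[1:]):
        p *= P[a, b]
    return (n - len(word) + 1) * p

n = 5000
stats = [expected_count(simulate(n), word, n) for _ in range(200)]
print(f"expected count: mean={np.mean(stats):.2f}  sd={np.std(stats):.2f}")
```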
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
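A minimal GLUE-style sketch is given below: Monte Carlo parameter sets for a toy groundwater model are weighted by a likelihood measure based on the fit to observed heads, and prediction bounds are read from the weighted behavioural ensemble. The model, observations, likelihood measure, and thresholds are all illustrative stand-ins, not the Zimapan application.

```python
import numpy as np

rng = np.random.default_rng(11)
obs_heads = np.array([102.3, 101.1, 99.8, 98.4])     # observed heads at 4 wells (made up)

def toy_model(log10_K, recharge):
    # placeholder head response at the 4 observation wells
    base = np.array([103.0, 101.5, 100.0, 98.0])
    return base + 2.0 * (log10_K + 6.5) + 5.0 * (recharge - 0.2)

n = 20_000
log10_K = rng.uniform(-8.0, -5.0, n)                 # log10 hydraulic conductivity [m/s]
recharge = rng.uniform(0.05, 0.40, n)
sse = np.array([np.sum((toy_model(k, r) - obs_heads) ** 2)
                for k, r in zip(log10_K, recharge)])

likelihood = 1.0 / sse                               # one simple likelihood measure
behavioural = sse < np.quantile(sse, 0.05)           # keep the best-fitting 5%
w = likelihood[behavioural]
w /= w.sum()

# weighted bounds on a derived prediction (a crude travel-time proxy)
travel_time = 10.0 ** (-log10_K[behavioural]) * 1e-5
order = np.argsort(travel_time)
cdf = np.cumsum(w[order])
lo, hi = np.interp([0.05, 0.95], cdf, travel_time[order])
print(f"5-95% GLUE bounds on the prediction: {lo:.2f} - {hi:.2f}")
```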
NASA Astrophysics Data System (ADS)
Shock, Everett L.; Koretsky, Carla M.
1995-04-01
Regression of standard state equilibrium constants with the revised Helgeson-Kirkham-Flowers (HKF) equation of state allows evaluation of standard partial molal entropies (S̄°) of aqueous metal-organic complexes involving monovalent organic acid ligands. These values of S̄° provide the basis for correlations that can be used, together with correlation algorithms among standard partial molal properties of aqueous complexes and equation-of-state parameters, to estimate thermodynamic properties including equilibrium constants for complexes between aqueous metals and several monovalent organic acid ligands at the elevated pressures and temperatures of many geochemical processes which involve aqueous solutions. Data, parameters, and estimates are given for 270 formate, propanoate, n-butanoate, n-pentanoate, glycolate, lactate, glycinate, and alanate complexes, and a consistent algorithm is provided for making other estimates. Standard partial molal entropies of association (ΔS̄°r) for metal-monovalent organic acid ligand complexes fall into at least two groups dependent upon the type of functional groups present in the ligand. It is shown that isothermal correlations among equilibrium constants for complex formation are consistent with one another and with similar correlations for inorganic metal-ligand complexes. Additional correlations allow estimates of standard partial molal Gibbs free energies of association at 25°C and 1 bar which can be used in cases where no experimentally derived values are available.
Standardizing therapeutic parameters of acupuncture for pain suppression in rats: preliminary study.
Yeo, Sujung; Lim, Hyungtaeck; Choe, Ilwhan; Kim, Sung-Hoon; Lim, Sabina
2014-01-15
Despite acupuncture's wide and successful use, it is still considered to lack scientifically rigorous evidence, especially with respect to its effectiveness. To address this problem, it is necessary to re-examine the practice of acupuncture using scientific methodology. The standardization of acupuncture practices may offer a solution. As a preliminary step towards the standardization of acupuncture stimulation in animal experiments, this study attempted to clarify the various therapeutic parameters that contribute to acupuncture's efficacy, using the formalin test: the parameters examined were the specific acupoint, the temporal point of needling, rotation of the needle, duration of acupuncture, and diameter of the needle. In this test, acupuncture was performed on either the ST36 or LR2 point immediately after pain induction and 5 minutes after pain induction. The formalin test yielded no significant suppression of pain in the case of ST36 and LR2 acupuncture stimulation immediately following pain induction. When acupuncture was applied 5 minutes after pain induction, however, the ST36 stimulation resulted in a significant decrease in pain, while the LR2 stimulation produced no change. The duration of acupuncture, but not the diameter of the needle, was also significant. As for the rotation of the needle, there was no significant difference in the pain reduction achieved in the rotation and non-rotation groups. We determined that the specific acupoint, temporal point of needling, and duration of treatment are important factors in the inhibition of pain. These findings strongly suggest that in animal experiments, the application of a set of appropriate therapeutic parameters can significantly influence the outcome.
Flow Control and Measurement in Electric Propulsion Systems: Towards an AIAA Reference Standard
2013-10-01
the spacecraft sensors, although some improvement can be made by averaging several measurements together. 3. Thermal Mass Gauging Thermal Mass...flow controllers (MFCs) to measure and control propellant into EP devices. To determine several key thruster performance parameters with a low level...the specified time interval may not be known. A first recourse is to perform several measurements and examine the linearity. In cases where the
NASA Technical Reports Server (NTRS)
Porter, R. L.; Ferland, G. J.; Kraemer, S. B.; Armentrout, B. K.; Arnaud, K. A.; Turner, T. J.
2007-01-01
We discuss new functionality of the spectral simulation code CLOUDY which allows the user to calculate grids with one or more initial parameters varied and formats the predicted spectra in the standard FITS format. These files can then be imported into the X-ray spectral analysis software XSPEC and used as theoretical models for observations. We present and verify a test case. Finally, we consider a few observations and discuss our results.
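As a generic illustration of packaging gridded model spectra in FITS, the sketch below writes a small parameter grid and its spectra to binary-table extensions with astropy; it does not reproduce the specific XSPEC table-model ("atable") layout, which requires additional extensions and header keywords, and the column and file names are illustrative.

```python
import numpy as np
from astropy.io import fits

energies = np.linspace(0.1, 10.0, 500)                      # keV grid (assumed)
param_values = np.array([1.0, 2.0, 3.0])                    # e.g. an ionization parameter
spectra = np.array([energies ** -p for p in param_values])  # placeholder model spectra

# one row per grid point; each row carries the full predicted spectrum
spec_hdu = fits.BinTableHDU.from_columns([
    fits.Column(name="PARAMVAL", format="E", array=param_values),
    fits.Column(name="SPECTRUM", format=f"{energies.size}E", array=spectra),
])
spec_hdu.name = "SPECTRA"

energy_hdu = fits.BinTableHDU.from_columns(
    [fits.Column(name="ENERGY", format="E", array=energies)])
energy_hdu.name = "ENERGIES"

fits.HDUList([fits.PrimaryHDU(), energy_hdu, spec_hdu]).writeto("grid.fits", overwrite=True)
```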
Drop size distribution comparisons between Parsivel and 2-D video disdrometers
NASA Astrophysics Data System (ADS)
Thurai, M.; Petersen, W. A.; Tokay, A.; Schultz, C.; Gatlin, P.
2011-05-01
Measurements from a 2-D video disdrometer (2DVD) have been used for drop size distribution (DSD) comparisons with co-located Parsivel measurements in Huntsville, Alabama. The comparisons were made in terms of the mass-weighted mean diameter, Dm, the standard deviation of the mass spectrum, σm, and the rainfall rate, R, all based on 1-min DSDs from the two instruments. Time series comparisons show close agreement in all three parameters for cases where R was less than 20 mm h⁻¹. In four cases, discrepancies in all three parameters were seen for "heavy" events, with the Parsivel showing higher Dm, σm and R when R reached high values (particularly above 30 mm h⁻¹). Possible causes for the discrepancies include the presence of a small percentage of not fully melted hydrometeors, with higher than expected fall velocities and with very different axis ratios as compared with rain, indicating small hail, ice pellets, or graupel. We also present here Parsivel-to-Parsivel comparisons as well as comparisons between two 2DVD instruments, namely a low-profile unit and the latest generation "compact" unit, which was installed at the same site in November 2009. The comparisons are included to assess the variability between the same types of instrument.
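The three bulk quantities compared above follow directly from moments of the binned drop size distribution; a short sketch with made-up bins and an assumed fall-speed power law is given below.

```python
import numpy as np

# Bulk DSD parameters from a binned N(D): mass-weighted mean diameter Dm,
# mass-spectrum standard deviation sigma_m, and rain rate R.
D = np.arange(0.2, 8.0, 0.2)                    # bin centres [mm] (illustrative)
dD = np.full_like(D, 0.2)                       # bin widths [mm]
N = 8000.0 * np.exp(-2.2 * D)                   # N(D) [mm^-1 m^-3], exponential example
v = 3.78 * D ** 0.67                            # fall speed [m/s], Atlas-type power law

M3 = np.sum(N * D**3 * dD)
M4 = np.sum(N * D**4 * dD)
Dm = M4 / M3                                                  # [mm]
sigma_m = np.sqrt(np.sum((D - Dm)**2 * D**3 * N * dD) / M3)   # [mm]
R = 6.0 * np.pi * 1e-4 * np.sum(v * D**3 * N * dD)            # [mm/h]
print(f"Dm={Dm:.2f} mm  sigma_m={sigma_m:.2f} mm  R={R:.1f} mm/h")
```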
Original and creative stereoscopic film making
NASA Astrophysics Data System (ADS)
Criado, Enrique
2008-02-01
Stereoscopic cinema has become, once again, a hot topic in film production. For filmmakers to be successful in this field, a technical background in the principles of binocular perception and in how our brain interprets the incoming data from our eyes is fundamental. It is also paramount for a stereoscopic production to adhere to certain rules for comfort and safety. There is an immense variety of options in the art of standard "flat" photography, and the possibilities only multiply with stereo. Stereoscopic imaging has its own unique areas of subjective, original, and creative control that allow an incredible range of possible combinations when working inside the standards, and in some cases at the boundaries of the basic stereo rules. Stereoscopic imaging can be approached in a "flat" manner, like channeling sound through an audio equalizer with all the bands at the same level. It can provide a realistic perception, which in many cases can be sufficient, thanks to the rock-solid viewing inherent to the stereoscopic image, but there are many more possibilities. This document describes some of the basic operating parameters and concepts for stereoscopic imaging, but it also offers ideas for a creative process based on the variation and combination of these basic parameters, which can lead to a truly innovative and original viewing experience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyflot, MJ; Yang, F; Byrd, D
Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. 35 features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, patient sample size to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated greatest sensitivity in means (5–50% difference for medium girth and reconstruction comparisons and 10–100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5% while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective trials which incorporate textural features are properly designed to detect clinical endpoints. Supported by NIH grants R01 CA169072, U01 CA148131, NCI Contract (SAIC-Frederick) 24XS036-004, and a research contract from GE Healthcare.
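Co-occurrence-matrix features of the kind evaluated above can be computed with scikit-image, as in the hedged sketch below on a synthetic quantised image standing in for a segmented lesion; in practice the input would be the PET uptake within the lesion, resampled to a fixed number of grey levels.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
lesion = rng.integers(0, 32, size=(28, 28)).astype(np.uint8)   # 32 grey levels, synthetic

# grey-level co-occurrence matrix at distance 1, two directions, averaged features
glcm = graycomatrix(lesion, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop).mean())
# note: older scikit-image releases name these functions greycomatrix / greycoprops
```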
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties in the CSD, with the appearance of a secondary peak due to secondary nucleation in both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are the nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, even when considering uncertainty in nucleation and crystal growth model parameters. The latter is a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
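Standardized regression coefficients, one of the global sensitivity measures used above, can be obtained by fitting a linear surrogate to Monte Carlo input-output samples and rescaling the coefficients by the input/output standard deviations; the sketch below does this for a placeholder response standing in for the predicted crystal size, with assumed parameter ranges.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
kb = rng.uniform(1e6, 1e8, n)        # nucleation rate constant (assumed range)
b = rng.uniform(1.0, 3.0, n)         # nucleation order
kg = rng.uniform(1e-8, 1e-6, n)      # growth rate constant
g = rng.uniform(1.0, 2.0, n)         # growth order

def toy_mean_size(kb, b, kg, g):
    # placeholder response standing in for the predicted mean crystal size
    return 1e4 * kg**0.5 * g / (1e-4 * kb**0.25 * b)

y = toy_mean_size(kb, b, kg, g) * rng.normal(1.0, 0.01, n)

# linear surrogate fitted to the Monte Carlo sample; SRC = beta * sd(x)/sd(y)
X = np.column_stack([kb, b, kg, g])
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
src = beta * X.std(axis=0) / y.std()
for name, s in zip(["kb", "b", "kg", "g"], src):
    print(f"SRC({name}) = {s:+.2f}")
```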
Generating functions for weighted Hurwitz numbers
NASA Astrophysics Data System (ADS)
Guay-Paquet, Mathieu; Harnad, J.
2017-08-01
Double Hurwitz numbers enumerating weighted n-sheeted branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of Sn generated by transpositions are determined by an associated weight generating function. A uniquely determined 1-parameter family of 2D Toda τ -functions of hypergeometric type is shown to consist of generating functions for such weighted Hurwitz numbers. Four classical cases are detailed, in which the weighting is uniform: Okounkov's double Hurwitz numbers for which the ramification is simple at all but two specified branch points; the case of Belyi curves, with three branch points, two with specified profiles; the general case, with a specified number of branch points, two with fixed profiles, the rest constrained only by the genus; and the signed enumeration case, with sign determined by the parity of the number of branch points. Using the exponentiated quantum dilogarithm function as a weight generator, three new types of weighted enumerations are introduced. These determine quantum Hurwitz numbers depending on a deformation parameter q. By suitable interpretation of q, the statistical mechanics of quantum weighted branched covers may be related to that of Bosonic gases. The standard double Hurwitz numbers are recovered in the classical limit.
On the traceability of gaseous reference materials
NASA Astrophysics Data System (ADS)
Brown, Richard J. C.; Brewer, Paul J.; Harris, Peter M.; Davidson, Stuart; van der Veen, Adriaan M. H.; Ent, Hugo
2017-06-01
The complex and multi-parameter nature of chemical composition measurement means that establishing traceability is a challenging task. As a result incorrect interpretations about the origin of the metrological traceability of chemical measurement results can occur. This discussion paper examines why this is the case by scrutinising the peculiarities of the gas metrology area. It considers in particular: primary methods, dissemination of metrological traceability and the role of documentary standards and accreditation bodies in promulgating best practice. There is also a discussion of documentary standards relevant to the NMI and reference material producer community which need clarification, and the impact which key stakeholders in the quality infrastructure can bring to these issues.
Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj
2018-01-01
Background: Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method of diagnosis of breast mass. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. Aims: To apply nuclear morphometry on cytological aspirates of breast cancer and evaluate its correlation with cytomorphological grading with derivation of suitable cutoff values between various grades. Settings and Designs: Descriptive cross-sectional hospital-based study. Materials and Methods: This study included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou-stained FNAC slides by NIS-Elements Advanced Research software (Ver 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Results: Nuclear size parameters showed an increase in values with increasing cytological grades of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among nuclear texture parameters, sum intensity, and sum brightness were found to be different between the three grades. Conclusion: Nuclear morphometry can be applied to augment the cytology grading of breast cancer and thus help in classifying patients into low and high-risk groups. PMID:29403169
Critical phenomena on k-booklets
NASA Astrophysics Data System (ADS)
Grassberger, Peter
2017-01-01
We define a "k -booklet" to be a set of k semi-infinite planes with -∞
NASA Astrophysics Data System (ADS)
He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian
2010-12-01
In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both the centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze its characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case where the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.
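To make the CMDP-via-LP step concrete, the sketch below solves a small, invented discounted constrained MDP over occupation measures with scipy's linprog. The state/action spaces, throughput rewards, power costs, and budget are all hypothetical stand-ins; the paper's own formulation (average throughput, sensing-error model) would differ.

```python
# Hedged sketch: occupation-measure LP for a toy discounted CMDP.
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 3, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a, s'] transition probabilities
r = rng.uniform(0.0, 1.0, size=(nS, nA))           # reward: expected throughput (invented)
cost = rng.uniform(0.0, 1.0, size=(nS, nA))        # cost: transmit power (invented)
mu0 = np.full(nS, 1.0 / nS)                        # initial state distribution
budget = 0.4 / (1.0 - gamma)                       # discounted power budget (invented)

# Variables x[s, a] >= 0 (discounted occupation measure), flattened to length nS*nA.
# Flow constraints: sum_a x[s', a] - gamma * sum_{s,a} P[s, a, s'] x[s, a] = mu0[s'].
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]

res = linprog(c=-r.ravel(),                        # maximize reward -> minimize -reward
              A_ub=cost.ravel()[None, :], b_ub=[budget],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
assert res.success
x = res.x.reshape(nS, nA)
policy = x / x.sum(axis=1, keepdims=True)          # randomized optimal policy
print("optimal discounted throughput:", -res.fun)
print("policy:\n", policy)
```

The optimal policy of a CMDP is in general randomized, which is why the recovered policy is a normalized occupation measure rather than a deterministic action per state.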
[Indicators of healthcare quality in day surgery (2010-2012)].
Martínez Rodenas, F; Codina Grifell, J; Deulofeu Quintana, P; Garrido Corchón, J; Blasco Casares, F; Gibanel Garanto, X; Cuixart Vilamajó, L; de Haro Licer, J; Vazquez Dorrego, X
2014-01-01
Monitoring quality indicators in ambulatory surgery centers is fundamental in order to identify problems, correct them and prevent them. Given their large number, it is essential to select the most valid ones. The objectives of the study are the continuous improvement in the quality of healthcare of day-case surgery in our center, by monitoring selected quality parameters, having periodic information on the results and taking corrective measures, as well as achieving a percentage of unplanned transfers and cancellations within quality standards. Prospective, observational and descriptive study of the day-case surgery carried out from January 2010 to December 2012. Unplanned hospital admissions and cancellations on the same day of the operation were selected and monitored, along with their reasons. Hospital admissions were classified as: inappropriate selection, medical-surgical complications, and others. The results were evaluated each year and statistically analysed using χ² tests. A total of 8,300 patients underwent day surgery during the 3 years studied. The day-case surgery and outpatient index increased by 5.4 and 6.4%, respectively (P<.01). Unexpected hospital admissions gradually decreased due to the lower number of complications (P<.01). Hospital admissions due to an extended period of time in locoregional anaesthesia recovery also decreased (P<.01). There was improved prevention of nausea and vomiting, and of poorly controlled pain. The proportion of afternoon admissions was significantly reduced (P<.01). The cancellations increased in 2011 (P<.01). The monitoring of quality parameters in day-case surgery has been a useful tool in our clinical and quality management. Globally, the unplanned transfers and cancellations have been within the quality standards, and many of the indicators analysed have improved. Copyright © 2013 SECA. Published by Elsevier España. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.
SABrE is a set of tools to facilitate the development of portable scientific software and to visualize scientific data. As with most constructs, SABRE has a foundation. In this case that foundation is SCORE. SCORE (SABRE CORE) has two main functions. The first and perhaps most important is to smooth over the differences between different C implementations and define the parameters which drive most of the conditional compilations in the rest of SABRE. Secondly, it contains several groups of functionality that are used extensively throughout SABRE. Although C is highly standardized now, that has not always been the case. Roughly speaking, C compilers fall into three categories: ANSI standard; derivative of the Portable C Compiler (Kernighan and Ritchie); and the rest. SABRE has been successfully ported to many ANSI and PCC systems. It has never been successfully ported to a system in the last category. The reason is mainly that the "standard" C library supplied with such implementations is so far from true ANSI or PCC standard that SABRE would have to include its own version of the standard C library in order to work at all. Even with standardized compilers life is not dead simple. The ANSI standard leaves several crucial points ambiguous as "implementation defined." Under these conditions one can find significant differences in going from one ANSI standard compiler to another. SCORE's job is to include the requisite standard headers and ensure that certain key standard library functions exist and function correctly (there are bugs in the standard library functions supplied with some compilers) so that, to applications which include the SCORE header(s) and load with SCORE, all C implementations look the same.
Left-invariant Einstein metrics on S³ × S³
NASA Astrophysics Data System (ADS)
Belgun, Florin; Cortés, Vicente; Haupt, Alexander S.; Lindemann, David
2018-06-01
The classification of homogeneous compact Einstein manifolds in dimension six is an open problem. We consider the remaining open case, namely left-invariant Einstein metrics g on G = SU(2) × SU(2) = S³ × S³. Einstein metrics are critical points of the total scalar curvature functional for fixed volume. The scalar curvature S of a left-invariant metric g is constant and can be expressed as a rational function in the parameters determining the metric. The critical points of S, subject to the volume constraint, are given by the zero locus of a system of polynomials in the parameters. In general, however, the determination of the zero locus is apparently out of reach. Instead, we consider the case where the isotropy group K of g in the group of motions is non-trivial. When K ≇ Z₂ we prove that the Einstein metrics on G are given by (up to homothety) either the standard metric or the nearly Kähler metric, based on representation-theoretic arguments and computer algebra. For the remaining case K ≅ Z₂ we present partial results.
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, it was found that the new LHS module within NESSUS had a lower estimation error than MC when they were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required a smaller number of calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ, and the newest LHS module is a valuable enhancement of the program.
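The comparison of MC and LHS estimators described above is easy to illustrate outside NESSUS. The sketch below, which does not reproduce the SAE test cases, estimates the mean, standard deviation, and 99th percentile of an invented nonlinear response of two standard-normal variables with both sampling schemes.

```python
# Illustrative sketch (not NESSUS itself): MC vs. Latin hypercube sampling for
# estimating density parameters of a simple nonlinear response.
import numpy as np
from scipy.stats import norm, qmc

def response(x1, x2):
    # stand-in response function; the SAE test cases are not reproduced here
    return x1**2 + 3.0 * np.sin(x2) + 0.5 * x1 * x2

n = 1024
rng = np.random.default_rng(1)

# Plain Monte Carlo: independent standard-normal draws
mc = response(rng.standard_normal(n), rng.standard_normal(n))

# Latin hypercube sampling: stratified uniforms mapped through the normal inverse CDF
lhs_u = qmc.LatinHypercube(d=2, seed=1).random(n)
lhs = response(norm.ppf(lhs_u[:, 0]), norm.ppf(lhs_u[:, 1]))

for name, sample in [("MC", mc), ("LHS", lhs)]:
    print(name, "mean=%.3f std=%.3f p99=%.3f"
          % (sample.mean(), sample.std(ddof=1), np.percentile(sample, 99)))
```

Repeating both loops over many seeds and comparing the spread of the three estimates would reproduce, in miniature, the kind of error and confidence comparison the abstract describes.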
NASA Astrophysics Data System (ADS)
Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.
2018-03-01
Context. Standard spectroscopic analyses of variable stars are based on hydrostatic 1D model atmospheres. This quasi-static approach has not been theoretically validated. Aim. We aim at investigating the validity of the quasi-static approximation for Cepheid variables. We focus on the spectroscopic determination of the effective temperature Teff, surface gravity log g, microturbulent velocity ξt, and a generic metal abundance log A, here taken as iron. Methods: We calculated a grid of 1D hydrostatic plane-parallel models covering the ranges in effective temperature and gravity that are encountered during the evolution of a 2D time-dependent envelope model of a Cepheid computed with the radiation-hydrodynamics code CO5BOLD. We performed 1D spectral syntheses for artificial iron lines in local thermodynamic equilibrium by varying the microturbulent velocity and abundance. We fit the resulting equivalent widths to corresponding values obtained from our dynamical model for 150 instances in time, covering six pulsational cycles. In addition, we considered 99 instances during the initial non-pulsating stage of the temporal evolution of the 2D model. In the most general case, we treated Teff, log g, ξt, and log A as free parameters, and in two more limited cases, we fixed Teff and log g by independent constraints. We argue analytically that our approach of fitting equivalent widths is closely related to current standard procedures focusing on line-by-line abundances. Results: For the four-parametric case, the stellar parameters are typically underestimated and exhibit a bias in the iron abundance of ≈-0.2 dex. To avoid biases of this type, it is favorable to restrict the spectroscopic analysis to photometric phases ϕph ≈ 0.3…0.65 using additional information to fix the effective temperature and surface gravity. Conclusions: Hydrostatic 1D model atmospheres can provide unbiased estimates of stellar parameters and abundances of Cepheid variables for particular phases of their pulsations. We identified convective inhomogeneities as the main driver behind potential biases. To obtain a complete view on the effects when determining stellar parameters with 1D models, multidimensional Cepheid atmosphere models are necessary for variables of longer period than investigated here.
NASA Astrophysics Data System (ADS)
Rai, P.; Gautam, N.; Chandra, H.
2018-06-01
This work deals with the analysis and modification of the operational parameters of an electrostatic precipitator (ESP) for meeting the emission standards set from time to time by the Central Pollution Control Board (CPCB)/State Pollution Control Board (SPCB). The analysis is carried out using standard chemical analysis supplemented by the relevant data collected from the Korba East Phase (Ph)-III thermal power plant, under the Chhattisgarh State Electricity Board (CSEB), operating at Korba, Chhattisgarh. Chemical analysis is used to predict the emission level for different parameters of the ESP. The results reveal that for a constant outlet PM concentration and fly ash percentage, the total collection area decreases with the increase in migration velocity. For constant migration velocity and outlet PM concentration, the total collection area increases with the increase in the fly ash percentage. For constant migration velocity and outlet PM concentration, the total collection area also increases with the ash content in the coal, i.e., from minimum ash to maximum ash. As far as the efficiency is concerned, it increases with the fly ash percentage, ash content and the inlet dust concentration, but decreases with the outlet PM concentration at constant migration velocity, fly ash and ash content.
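The abstract does not name its sizing model, but the trends it reports (collection area falling with migration velocity, efficiency rising with inlet loading for a fixed outlet limit) are consistent with the classical Deutsch-Anderson relation η = 1 − exp(−wA/Q). The sketch below is a hedged illustration with invented plant figures, not the Korba East analysis.

```python
# Hedged illustration of the Deutsch-Anderson ESP relation; all values invented.
import math

def collection_area(Q, w, inlet_dust, outlet_pm):
    """Plate area A [m^2] needed to reduce the inlet dust loading to the outlet
    PM limit, for gas flow Q [m^3/s] and effective migration velocity w [m/s]."""
    eta = 1.0 - outlet_pm / inlet_dust          # required collection efficiency
    return -(Q / w) * math.log(1.0 - eta)

Q = 500.0            # m^3/s flue gas flow (assumed)
inlet = 40.0         # g/m^3 inlet dust concentration (assumed)
outlet = 0.05        # g/m^3 outlet PM limit (assumed)
for w in (0.04, 0.06, 0.08):                    # migration velocity, m/s
    print(f"w = {w:.2f} m/s -> A = {collection_area(Q, w, inlet, outlet):,.0f} m^2")
```

The output shows the same qualitative behaviour as the abstract: the required collection area shrinks as migration velocity increases, and grows as the outlet limit is tightened relative to the inlet loading.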
Geological constraints for muon tomography: The world beyond standard rock
NASA Astrophysics Data System (ADS)
Lechmann, Alessandro; Mair, David; Ariga, Akitaka; Ariga, Tomoko; Ereditato, Antonio; Käser, Samuel; Nishiyama, Ryuichi; Scampoli, Paola; Vladymyrov, Mykhailo; Schlunegger, Fritz
2017-04-01
In present-day muon tomography practice, one often encounters an experimental setup in which muons propagate several tens to a few hundreds of meters through a material to the detector. The goal of such an undertaking is usually centred on an attempt to make inferences from the measured muon flux to an anticipated subsurface structure. This can either be an underground interface geometry or a spatial material distribution. Inferences in this direction have until now mostly been made using the so-called "standard rock" approximation. This involves a set of empirically determined parameters from several rocks found in the vicinity of physicists' laboratories. While this approach is reasonable to account for the effects of the tens of meters of soil/rock around a particle accelerator, we show that, for material thicknesses beyond that dimension, the elementary composition of the material (average atomic weight and atomic number) has a noticeable effect on the measured muon flux. Accordingly, the continued use of this approximation could potentially lead to a serious model bias, which in turn might invalidate any tomographic inference that is based on the standard rock approximation. The parameters for standard rock are naturally close to a granitic (SiO2-rich) composition and thus can be safely used in such environments. As geophysical surveys are not restricted to any particular lithology, we investigated the effect of alternative rock compositions (carbonatic, basaltic and even ultramafic) and consequently prefer to replace the standard rock approach with a dedicated geological investigation. Structural field data and laboratory measurements of density (He-pycnometer) and composition (XRD) can be merged into an integrative geological model that can be used as an a priori constraint for the rock parameters of interest (density and composition) in the geophysical inversion. Modelling results show that when facing a non-granitic lithology the measured muon flux can vary by up to 20-30% in the case of carbonates, and by up to 100% for peridotites, compared to standard rock data.
2013-01-01
Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is beginning to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimations. Considering the application to the data from the clinical study, the MTU-PF shows a similar performance with respect to the quality of estimated parameters compared with the standard particle filter, but in addition, the MTU algorithm proves to be less prone to degeneration than the standard particle filter. PMID:23331521
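For readers unfamiliar with the baseline that the MTU-PF modifies, the sketch below is a minimal bootstrap particle filter for an invented scalar state-space model. It deliberately does not implement the measurement-time-uncertainty extension or the adaptive stepsize choice described in the abstract.

```python
# Minimal bootstrap particle filter (standard PF) on a synthetic scalar model.
import numpy as np

rng = np.random.default_rng(2)

def simulate(T, x0=0.0, q=0.3, r=0.5):
    x, ys = x0, []
    for _ in range(T):
        x = 0.9 * x + 1.0 + rng.normal(0.0, q)   # process model with process noise
        ys.append(x + rng.normal(0.0, r))        # observation with measurement noise
    return np.array(ys)

def particle_filter(ys, n=500, q=0.3, r=0.5):
    particles = rng.normal(0.0, 1.0, n)
    means = []
    for y in ys:
        particles = 0.9 * particles + 1.0 + rng.normal(0.0, q, n)   # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)                # likelihood weights
        w /= w.sum()
        means.append(np.sum(w * particles))                          # filtered state mean
        idx = rng.choice(n, size=n, p=w)                             # multinomial resampling
        particles = particles[idx]
    return np.array(means)

ys = simulate(50)
print(particle_filter(ys)[:5])
```

The degeneracy problem mentioned in the abstract appears here when the weights w concentrate on very few particles; the MTU-PF's adaptive stepsize is one way to mitigate it in the time-continuous setting.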
Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard
2010-12-01
Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manual optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9-68 cc with a mean of 41(±16.2) cc. Mean V100 for standard, manual optimized and inverse plans was found to be not significant (p=0.35, 0.38, 0.4). Dose to bladder (7.8±1.6 Gy) and sigmoid (5.6±1.4 Gy) was high for standard plans; manual optimization reduced the dose to bladder (7.1±1.7 Gy, p=0.006) and sigmoid (4.5±1.0 Gy, p=0.005) without compromising the HR CTV coverage. The inverse plan resulted in a significant reduction to bladder dose (6.5±1.4 Gy, p=0.002). TRAK was found to be 0.49(±0.02), 0.44(±0.04) and 0.40(±0.04) cGy m⁻² for the standard loading, manual optimized and inverse plans, respectively. It was observed that TRAK(O/T) was 0.82(±0.05), 1.7(±1.04) and 1.41(±0.93) for standard loading, manual optimized and inverse plans, respectively, while this ratio was 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising the target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
IN718 Additive Manufacturing Properties and Influences
NASA Technical Reports Server (NTRS)
Lambert, Dennis M.
2015-01-01
The results of tensile, fracture, and fatigue testing of IN718 coupons produced using the selective laser melting (SLM) additive manufacturing technique are presented. The data have been "sanitized" to remove the numerical values, although certain references to material standards are provided. This document provides some knowledge of the effect of variation of controlled build parameters used in the SLM process, gives a snapshot of the capabilities of SLM in industry at present, and shares some of the lessons learned along the way. For the build parameter characterization, the parameters were varied over a range that was centered about the machine manufacturer's recommended value, and in each case they were varied individually, although some co-variance of those parameters would be expected. Tensile, fracture, and high-cycle fatigue properties equivalent to wrought IN718 are achievable with SLM-produced IN718. Build and post-build processes need to be determined and then controlled to established limits to accomplish this. It is recommended that a multi-variable evaluation, e.g., design-of-experiment (DOE), of the build parameters be performed to better evaluate the co-variance of the parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.
A search for supersymmetry or other new physics resulting in similar final states is presented using a data sample of 4.73 inverse femtobarns of pp collisions collected at √s = 7 TeV with the CMS detector at the LHC. Fully hadronic final states are selected based on the variable MT2, an extension of the transverse mass in events with two invisible particles. Two complementary studies are performed. The first targets the region of parameter space with medium to high squark and gluino masses, in which the signal can be separated from the standard model backgrounds by a tight requirement on MT2. The second is optimized to be sensitive to events with a light gluino and heavy squarks. In this case, the MT2 requirement is relaxed, but a higher jet multiplicity and at least one b-tagged jet are required. No significant excess of events over the standard model expectations is observed. Exclusion limits are derived for the parameter space of the constrained minimal supersymmetric extension of the standard model, as well as on a variety of simplified model spectra.
A new monitor set for the determination of neutron flux parameters in short-time k0-NAA
NASA Astrophysics Data System (ADS)
Kubešová, Marie; Kučera, Jan; Fikrle, Marek
2011-11-01
Multipurpose research reactors such as LVR-15 in Řež require monitoring of the neutron flux parameters (f, α) in each batch of samples analyzed when k0 standardization in NAA is to be used. The above parameters may change quite unpredictably, because experiments in channels adjacent to those used for NAA require an adjustment of the reactor operation parameters and/or active core configuration. For frequent monitoring of the neutron flux parameters the bare multi-monitor method is very convenient. The well-known Au-Zr tri-isotopic monitor set that provides a good tool for determining f and α after long-time irradiation is not optimal in the case of short-time irradiation because only a low activity of the ⁹⁵Zr radionuclide is formed. Therefore, several elements forming radionuclides with suitable half-lives and Q0 and Ēr parameters in a wide range of values were tested, namely ¹⁹⁸Au, ⁵⁶Mn, ⁸⁸Rb, ¹²⁸I, ¹³⁹Ba, and ²³⁹U. As a result, an optimal mixture was selected consisting of Au, Mn, and Rb to form a well-suited monitor set for irradiation at a thermal neutron fluence rate of 3×10¹⁷ m⁻² s⁻¹. The procedure of short-time INAA with the new monitor set for k0 standardization was successfully validated using the synthetic reference material SMELS 1 and several matrix reference materials (RMs) representing matrices of sample types frequently analyzed in our laboratory. The results were obtained using the Kayzero for Windows program.
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
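One concrete member of the inflated-mixture family discussed above is the zero-inflated Poisson. The sketch below fits it to simulated counts by direct quasi-Newton optimization of the negative log-likelihood and reads approximate standard errors off the inverse Hessian; it is an illustration of the general idea, not the article's unified framework or its EM/generalized Louis machinery.

```python
# Hedged sketch: maximum likelihood for a zero-inflated Poisson on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(3)
n, pi_true, lam_true = 2000, 0.3, 2.5
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

def nll(theta):
    logit_pi, log_lam = theta
    pi, lam = expit(logit_pi), np.exp(log_lam)
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))     # zeros: structural or sampling
    ll_pos = np.log(1 - pi) + log_pois                 # positive counts
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(nll, x0=np.array([0.0, 0.0]), method="BFGS")
se = np.sqrt(np.diag(fit.hess_inv))                    # approximate SEs, transformed scale
print("logit(pi), log(lambda):", fit.x, "SEs:", se)
```

Parameterizing on the logit and log scales keeps the optimizer unconstrained, which is a common trick when comparing quasi-Newton and EM fits of the same inflated model.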
NASA Astrophysics Data System (ADS)
Fogli, Gianluigi
2005-06-01
We review the status of neutrino oscillation physics, with a particular emphasis on the present knowledge of the neutrino mass-mixing parameters. We consider first the νμ → ντ flavor transitions of atmospheric neutrinos. It is found that standard oscillations provide the best description of the SK+K2K data, and that the associated mass-mixing parameters are determined at ±1σ (and NDF = 1) as: Δm² = (2.6 ± 0.4) × 10⁻³ eV² and sin²2θ = 1.00 (+0.00, −0.05). Such indications, presently dominated by SK, could be strengthened by further K2K data. Then we point out that the recent data from the Sudbury Neutrino Observatory, together with other relevant measurements from solar and reactor neutrino experiments, in particular the KamLAND data, convincingly show that the flavor transitions of solar neutrinos are affected by Mikheyev-Smirnov-Wolfenstein (MSW) effects. Finally, we perform an updated analysis of two-family active oscillations of solar and reactor neutrinos in the standard MSW case.
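As a hedged illustration of how the quoted atmospheric parameters enter the standard two-flavor picture, the sketch below evaluates the vacuum survival probability P(νμ→νμ) = 1 − sin²2θ · sin²(1.27 Δm²[eV²] L[km] / E[GeV]) at invented baselines; it does not include the MSW matter effects that the review analyses for solar neutrinos.

```python
# Two-flavor vacuum survival probability at the best-fit atmospheric parameters.
import numpy as np

dm2 = 2.6e-3                              # eV^2, value quoted above
sin2_2theta = 1.00                        # value quoted above
L = np.array([250.0, 500.0, 1000.0])      # baselines in km (illustrative)
E = 1.0                                   # neutrino energy in GeV (illustrative)

P_surv = 1.0 - sin2_2theta * np.sin(1.27 * dm2 * L / E) ** 2
print(dict(zip(L, P_surv.round(3))))
```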
Optimization and evaluation of metal injection molding by using X-ray tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shidi; Zhang, Ruijie; Qu, Xuanhui, E-mail: quxh@ustb.edu.cn
2015-06-15
6061 aluminum alloy and 316L stainless steel green bodies were obtained by using different injection parameters (injection pressure, speed and temperature). After the injection process, the green bodies were scanned by X-ray tomography. The projection and reconstruction images show the different kinds of defects obtained with improper injection parameters. Then, 3D rendering of the Al alloy green bodies was used to demonstrate the spatial morphology characteristics of the serious defects. Based on the scanned and calculated results, it is convenient to obtain the proper injection parameters for the Al alloy. Then, reasons for the defect formation were discussed. During mold filling, the serious defects mainly formed in the case of low injection temperature and high injection speed. According to the gray value distribution of the projection image, a threshold gray value was obtained to evaluate whether the quality of a green body can meet the desired standard. The proper injection parameters of 316L stainless steel can be obtained efficiently by using the method of analyzing the Al alloy injection. - Highlights: • Different types of defects in green bodies were scanned by using X-ray tomography. • Reasons for the defect formation were discussed. • Optimization of the injection parameters can be simplified greatly by the way of X-ray tomography. • Evaluation standard of the injection process can be obtained by using the gray value distribution of the projection image.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
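The kind of Monte Carlo check described above is straightforward to reproduce on a toy problem. In the sketch below (not the article's own examples), a linear slope a is rewritten as exp(B), many noisy data sets are fit, and the parametric SE reported by the fit is compared with the observed spread and bias of B.

```python
# Hedged sketch: Monte Carlo check of parametric SEs for a nonlinear reparameterization.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
x = np.linspace(1.0, 10.0, 20)
B_true, sigma = np.log(0.5), 0.2          # slope a = exp(B) = 0.5, constant data noise

def model(x, B):
    return np.exp(B) * x

B_hat, se_param = [], []
for _ in range(2000):
    y = model(x, B_true) + rng.normal(0.0, sigma, x.size)
    popt, pcov = curve_fit(model, x, y, p0=[0.0])
    B_hat.append(popt[0])
    se_param.append(np.sqrt(pcov[0, 0]))

B_hat = np.array(B_hat)
print("MC spread of B:     %.4f" % B_hat.std(ddof=1))
print("mean parametric SE: %.4f" % np.mean(se_param))
print("bias of B:          %.4f" % (B_hat.mean() - B_true))
```

When the relative SE is small, the parametric SE tracks the Monte Carlo spread closely and the bias is negligible, which is the "yes, with provisos" conclusion of the title; making the data noisier or sparser is a quick way to see the approximation degrade.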
Semiclassical approach to heterogeneous vacuum decay
Grinstein, Benjamin; Murphy, Christopher W.
2015-12-10
We derive the decay rate of an unstable phase of a quantum field theory in the presence of an impurity in the thin-wall approximation. This derivation is based on how the impurity changes the (flat spacetime) geometry relative to the case of the pure false vacuum. Two examples are given that show how to estimate some of the additional parameters that enter into this heterogeneous decay rate. This formalism is then applied to the Higgs vacuum of the Standard Model (SM), where baryonic matter acts as an impurity in the electroweak Higgs vacuum. We find that the probability for heterogeneous vacuum decay to occur is suppressed with respect to the homogeneous case. That is to say, the conclusions drawn from the homogeneous case are not modified by the inclusion of baryonic matter in the calculation. On the other hand, we show that Beyond the Standard Model physics with a characteristic scale comparable to the scale that governs the homogeneous decay rate in the SM can in principle lead to an enhanced decay rate.
The history of the Universe is an elliptic curve
NASA Astrophysics Data System (ADS)
Coquereaux, Robert
2015-06-01
Friedmann-Lemaître equations with contributions coming from matter, curvature, cosmological constant, and radiation, when written in terms of conformal time u rather than in terms of cosmic time t, can be solved explicitly in terms of standard Weierstrass elliptic functions. The spatial scale factor, the temperature, the densities, the Hubble function, and almost all quantities of cosmological interest (with the exception of t itself) are elliptic functions of u; in particular they are bi-periodic with respect to a lattice of the complex plane, when one takes u complex. After recalling the basics of the theory, we use these explicit expressions, as well as the experimental constraints on the present values of density parameters (we choose for the curvature density a small value in agreement with experimental bounds), to display the evolution of the main cosmological quantities for one real period 2ω_r of conformal time (the cosmic time t 'never ends' but it goes to infinity for a finite value u_f < 2ω_r of u). A given history of the Universe, specified by the measured values of present-day densities, is associated with a lattice in the complex plane, or with an elliptic curve, and therefore with two Weierstrass invariants g_2, g_3. Using the same experimental data we calculate the values of these invariants, as well as the associated modular parameter and the corresponding Klein j-invariant. If one takes the flat case k = 0, the lattice is only defined up to homotheties, and if one, moreover, neglects the radiation contribution, the j-invariant vanishes and the corresponding modular parameter τ can be chosen in one corner of the standard fundamental domain of the modular group (equianharmonic case: τ = exp(2iπ/3)). Several exact (i.e., non-numerical) results of independent interest are obtained in that case.
Theoretical Analysis of Spacing Parameters of Anisotropic 3D Surface Roughness
NASA Astrophysics Data System (ADS)
Rudzitis, J.; Bulaha, N.; Lungevics, J.; Linins, O.; Berzins, K.
2017-04-01
The authors of the research have analysed spacing parameters of anisotropic 3D surface roughness crosswise to the machining (friction) traces, RSm1, and lengthwise to the machining (friction) traces, RSm2. The main issue arises from the RSm2 values being limited by the values of the sampling length l in the measuring devices; however, on many occasions RSm2 values can exceed l. Therefore, the mean spacing values of profile irregularities in the longitudinal direction are in many cases not reliable and should be determined by another method. Theoretically, it is proved that the anisotropy coefficient of anisotropic surface roughness, c = RSm1/RSm2, equals the texture aspect ratio Str, which is defined by the surface texture standard EN ISO 25178-2. This allows using the parameter Str to determine the mean spacing of profile irregularities and to estimate roughness anisotropy.
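The practical use of the stated relation c = RSm1/RSm2 = Str is simple arithmetic: when RSm2 cannot be measured directly because it exceeds the sampling length, it can be back-calculated from the measured RSm1 and the areal texture aspect ratio Str. The values below are purely illustrative.

```python
# Illustrative use of c = RSm1/RSm2 = Str (ISO 25178-2) to recover RSm2.
def rsm2_from_str(rsm1_um: float, str_ratio: float) -> float:
    """Estimate the longitudinal mean spacing RSm2 [um] from the crosswise
    spacing RSm1 [um] and the texture aspect ratio Str (0 < Str <= 1)."""
    return rsm1_um / str_ratio

print(rsm2_from_str(rsm1_um=120.0, str_ratio=0.15))   # example values, micrometres
```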
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
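The model class described above is easy to picture with a minimal simulator. The sketch below is a discrete-time, discrete-state stochastic SIR with binomial process error and binomially under-reported cases; all parameter values are invented, and the MCMC fitting in JAGS, NIMBLE, or Stan is not shown.

```python
# Hedged sketch: discrete-time stochastic SIR with process and observation error.
import numpy as np

def simulate_sir(N=10_000, I0=10, beta=0.4, gamma=0.1, report_prob=0.5,
                 T=100, seed=5):
    rng = np.random.default_rng(seed)
    S, I = N - I0, I0
    reported = []
    for _ in range(T):
        p_inf = 1.0 - np.exp(-beta * I / N)                  # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)                     # process error
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        reported.append(rng.binomial(new_inf, report_prob))  # noisy case reporting
    return np.array(reported)

print(simulate_sir()[:20])
```

Fitting such a model by MCMC then amounts to treating the latent S and I trajectories (discrete or continuous approximations) as unknowns alongside beta, gamma, and the reporting probability, which is exactly the modeling-decision axis the abstract compares across platforms.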
Hidden sector dark matter and the Galactic Center gamma-ray excess: a closer look
Escudero, Miguel; Witte, Samuel J.; Hooper, Dan
2017-11-24
Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. Here, we also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.
Treatise on water hammer in hydropower standards and guidelines
NASA Astrophysics Data System (ADS)
Bergant, A.; Karney, B.; Pejović, S.; Mazij, J.
2014-03-01
This paper reviews critical water hammer parameters as they are presented in official hydropower standards and guidelines. Particular emphasis is given to a number of IEC standards and guidelines that are used worldwide. The paper critically assesses water hammer control strategies, including operational scenarios (closing and opening laws), surge control devices (surge tank, pressure regulating valve, flywheel, etc.), redesign of the water conveyance system components (tunnel, penstock), and limitation of operating conditions (limited operating range), which are variably covered in standards and guidelines. Little information is given on industrial water hammer models and solutions elsewhere. These are briefly introduced and discussed in the light of capability (simple versus complex systems), availability of expertise (in house and/or commercial) and uncertainty. The paper concludes with an interesting water hammer case study referencing the rules and recommendations from existing hydropower standards and guidelines with a view to effective water hammer control. Recommendations are given for further work on the development of a special guideline on water hammer (hydraulic transients) in hydropower plants.
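One of the critical parameters reviewed in such standards is the maximum pressure rise for rapid flow changes. The sketch below evaluates the classical Joukowsky estimate Δp = ρ·a·Δv for instantaneous closure; the abstract does not quote this formula, and the numbers are invented for illustration only.

```python
# Hedged illustration: Joukowsky pressure surge for instantaneous valve closure.
rho = 1000.0      # water density, kg/m^3
a = 1200.0        # pressure wave speed in the penstock, m/s (assumed)
dv = 3.0          # change in flow velocity, m/s (assumed)

dp = rho * a * dv                 # surge pressure in Pa
print(f"Joukowsky surge: {dp/1e5:.1f} bar ({dp/1e6:.2f} MPa)")
```

Slower closing laws, surge tanks, or pressure regulating valves, the control strategies listed above, all act by keeping the effective Δv per wave reflection period well below this worst case.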
NASA Astrophysics Data System (ADS)
Evans, K. D.; Early, A. B.; Northup, E. A.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.; Arctur, D. K.; Beach, A. L., III; Silverman, M. L.
2017-12-01
The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. An engineering report was drafted based on the OGC Engineering Report template, describing recommended changes to TimeseriesML 1.0, in the form of use cases. In 2017, we are conducting interoperability experiments to implement the use cases and demonstrate the feasibility and suitability of these modifications for NASA and related user communities. The results will be incorporated into the existing draft engineering report.
McCarthy, Caroline; Brady, Paul; O'Halloran, Ken D; McCreary, Christine
2016-01-01
Hyperventilation can be a manifestation of anxiety that involves abnormally fast breathing (tachypnea) and an elevated minute ventilation that exceeds metabolic demand. This report describes a case of hyperventilation-induced hypocapnia resulting in tetany in a 16-year-old girl undergoing orthodontic extractions under intravenous conscious sedation. Pulse oximetry is the gold standard respiratory-related index in conscious sedation. Although the parameter has great utility in determining oxygen desaturation, it provides no additional information on respiratory function, including, for example, respiratory rate. In this case, we found capnography to be a very useful aid to monitor respiration in this patient and also to treat the hypocapnia.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
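The distribution-fitting step described above can be illustrated with a hedged sketch: fit a lognormal reference model to synthetic diameters by maximum likelihood and report relative standard errors (RSEs) of the fitted parameters. The protocol's actual regression on cumulative number-based distributions and the RM8012 data themselves are not reproduced here.

```python
# Hedged sketch: lognormal fit and parameter RSEs for synthetic particle diameters.
import numpy as np

rng = np.random.default_rng(6)
d = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)   # nm, synthetic data

mu_hat = np.mean(np.log(d))                 # ML estimates for a lognormal
sigma_hat = np.std(np.log(d), ddof=0)
n = d.size
se_mu = sigma_hat / np.sqrt(n)              # asymptotic standard errors
se_sigma = sigma_hat / np.sqrt(2 * n)

print("geometric mean diameter: %.2f nm" % np.exp(mu_hat))
print("RSE(mu)    = %.2f %%" % (100 * se_mu / abs(mu_hat)))
print("RSE(sigma) = %.2f %%" % (100 * se_sigma / sigma_hat))
```

Even in this toy version, the RSE of the fitted width is much larger than that of the fitted location, the same qualitative pattern the interlaboratory study reports for its distribution fits.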
Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)
Meyer, Karin
2008-01-01
Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112
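The abstract's point that EM-type algorithms adapt readily to reduced rank estimation can be illustrated on a closely related model. The sketch below runs the EM iteration for probabilistic PCA (covariance = W W' + s²I), which shares the low-rank structure but is not the restricted-maximum-likelihood or average-information machinery of the article; dimensions and data are invented.

```python
# Hedged sketch: EM for a low-rank-plus-isotropic-noise covariance (probabilistic PCA).
import numpy as np

rng = np.random.default_rng(7)
n, d, k = 500, 6, 2
W_true = rng.normal(size=(d, k))
X = rng.normal(size=(n, k)) @ W_true.T + 0.3 * rng.normal(size=(n, d))
X = X - X.mean(axis=0)

W, s2 = rng.normal(size=(d, k)), 1.0
for _ in range(200):
    # E-step: posterior moments of the latent principal component scores
    M = W.T @ W + s2 * np.eye(k)
    Minv = np.linalg.inv(M)
    Ez = X @ W @ Minv                                   # (n, k) posterior means
    Ezz = n * s2 * Minv + Ez.T @ Ez                     # sum_n E[z z']
    # M-step: update loadings and residual variance
    W = (X.T @ Ez) @ np.linalg.inv(Ezz)
    s2 = (np.sum(X**2) - 2*np.sum(Ez * (X @ W)) + np.trace(Ezz @ W.T @ W)) / (n * d)

print("estimated residual variance:", round(s2, 4))
print("estimated low-rank covariance, first row:", (W @ W.T)[0].round(2))
```

As the abstract notes for the full-rank case, this kind of EM iteration is reliable but only linearly convergent, which is why pairing it with a quadratically convergent step once near the optimum is attractive.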
A novel approach to Hough Transform for implementation in fast triggers
NASA Astrophysics Data System (ADS)
Pozzobon, Nicola; Montecassiano, Fabio; Zotto, Pierluigi
2016-10-01
Telescopes of position-sensitive detectors are a common layout in charged-particle tracking, and programmable logic devices, such as FPGAs, represent a viable choice for the real-time reconstruction of track segments in such detector arrays. A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter reduction method, is proposed, targeting the reduction of the needed storage or computing resources in current, or near-future, state-of-the-art FPGA devices, while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a standard Hough Transform with particular emphasis on their application to muon detectors. In both cases, an original readout implementation is modeled.
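For context, the sketch below implements the plain (ρ, θ) Hough transform that such compact variants start from: every hit votes along a sinusoid in the accumulator, and a straight track segment appears as a peak. The paper's parameter reduction method and FPGA mapping are not reproduced, and the hit coordinates are invented.

```python
# Hedged sketch: standard rho-theta Hough transform over a coarse accumulator.
import numpy as np

def hough_lines(hits, n_theta=64, n_rho=64, rho_max=20.0):
    """hits: array of (x, y) positions from a small telescope of detector layers."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in hits:
        rho = x * np.cos(thetas) + y * np.sin(thetas)    # one sinusoid per hit
        idx = np.digitize(rho, rho_edges) - 1
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1        # vote in the accumulator
    return acc, thetas, rho_edges

# Hits roughly on the line y = 0.5 * x + 2, plus one noise hit
hits = np.array([[0, 2.1], [2, 3.0], [4, 4.1], [6, 5.0], [8, 6.1], [3, 9.0]])
acc, thetas, rho_edges = hough_lines(hits)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
print("peak votes:", acc[i, j], "theta=%.2f rad" % thetas[i],
      "rho in [%.2f, %.2f)" % (rho_edges[j], rho_edges[j + 1]))
```

The accumulator size n_theta × n_rho is exactly the storage cost that a parameter reduction method aims to shrink when the transform has to fit in FPGA block RAM at trigger latencies.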
Scope of practice: freedom within limits.
Schuiling, K D; Slager, J
2000-01-01
"Scope of practice" has a variety of meanings amongst midwives, other health professionals, health organizations, and consumers of midwifery care. For some, it refers to the Standards for the Practice of Midwifery; for others, it encompasses the legal base of practice; still others equate it with the components of the clinical parameters of practice. Because "scope of practice" is dynamic and parameters of practice can be impacted by many variables, succinctly defining "scope of practice" is difficult. This article provides a comprehensive discussion of the concept "scope of practice." Clinical scenarios are provided as case exemplars. The aim of this paper is to provide both new and experienced midwives with a substantive definition of the concept "scope of practice."
Numerical and Experimental Case Study of Blasting Works Effect
NASA Astrophysics Data System (ADS)
Papán, Daniel; Valašková, Veronika; Drusa, Marian
2016-10-01
This article introduces a theoretical and experimental case study of dynamic monitoring of the geological environment above a highway tunnel under construction. The monitored structure is in this case an important water supply pipeline, which crosses the tunnel and is made of steel tubes with a diameter of 800 mm. The basic dynamic parameters were monitored during blasting works, compared with FEM (Finite Element Method) calculations and checked against the Slovak standard limits. A calibrated FEM model based on the experimental measurement data was created and used to obtain more realistic results in further predictions and extrapolations in time and space. This case study was requested by the general contractor and by the owner of the water pipeline, and it served as part of the public safety evaluation of risks during tunnel construction.
Gadea, Joaquín; Sellés, Elena; Marco, Marco Antonio; Coy, Pilar; Matás, Carmen; Romar, Raquel; Ruiz, Salvador
2004-08-01
Although glutathione content in boar spermatozoa has been previously reported, the effect of reduced glutathione (GSH) on semen parameters and the fertilizing ability of boar spermatozoa after cryopreservation has never been evaluated. In this study, GSH content was determined in ejaculated boar spermatozoa before and after cryopreservation. Semen samples were centrifuged and GSH content in the resulting pellet monitored spectrophotometrically. The fertilizing ability of frozen-thawed boar sperm was also tested in vitro by incubating sperm with in vitro matured oocytes obtained from gilts. GSH content in fresh semen was 3.84 +/- 0.21 nM GSH/10^8 sperm. Following semen cryopreservation, there was a 32% decrease in GSH content (P < 0.0001). There were significant differences in sperm GSH content between different boars and after various preservation protocols (P = 0.0102). The effect of addition of GSH to the freezing and thawing extenders was also evaluated. Addition of 5 mM GSH to the freezing extender did not have a significant effect on standard semen parameters or sperm fertilizing ability after thawing. In contrast, when GSH was added to the thawing extender, a dose-dependent tendency towards increased sperm fertilizing ability was observed, although no differences were observed in standard semen parameters. In summary, (i) there was a loss in GSH content after cryopreservation of boar semen; (ii) addition of GSH to the freezing extender did not result in any improvement in either standard semen parameters or sperm fertilizing ability; and (iii) addition of GSH to the thawing extender resulted in a significant increase in sperm fertilizing ability. Nevertheless, future studies must determine whether this is the case for all boars. Furthermore, since addition of GSH to the thawing extender did not result in an improvement in standard semen parameters, this suggests that during the thawing process GSH prevents damage to a sperm property that is critical in the fertilization process but is not measured in routine semen analysis.
Yeh, Chun-Chieh; Wang, Ling-Jia; Mcgarrigle, James J.; Wang, Yong; Liao, Chien-Chang; Omami, Mustafa; Khan, Arshad; Nourmohammadzadeh, Mohammad; Mendoza-Elias, Joshua; Mccracken, Benjamin; Marchese, Enza; Barbaro, Barbara; Oberholzer, Jose
2017-01-01
This study investigates manufacturing procedures that affect islet isolation outcomes from donor pancreata standardized by the North American Islet Donor Score (NAIDS). Islet isolations performed at the University of Illinois, Chicago, from pancreata with NAIDS ≥65 were investigated. The research cohort was categorized into two groups based on a postpurification yield either greater than (group A) or less than (group B) 400,000 IEQ. Associations between manufacturing procedures and islet isolation outcomes were analyzed using multivariate logistic or linear regressions. A total of 119 cases were retrieved from 630 islet isolations performed since 2003. Group A is composed of 40 cases with an average postpurified yield of 570,098 IEQ, whereas group B comprised 79 cases with an average yield of 235,987 IEQ. One third of 119 cases were considered successful islet isolations that yielded >400,000 IEQ. The prepurified and postpurified islet product outcome parameters were detailed for future reference. The NAIDS (>80 vs. 65–80) [odds ratio (OR): 2.91, 95% confidence interval (CI): 1.27–6.70], cold ischemic time (≤10 vs. >10 h) (OR: 3.68, 95% CI: 1.61–8.39), and enzyme perfusion method (mechanical vs. manual) (OR: 2.38, 95% CI: 1.01–5.56) were independent determinants for postpurified islet yield ≥400,000 IEQ. The NAIDS (>80, p < 0.001), cold ischemic time (≤10 h, p < 0.05), increased unit of collagenase (p < 0.01), and pancreatic duct cannulation time (<30 min, p < 0.01) all independently correlated with better islet quantity parameters. Furthermore, cold ischemic time (≤10 h, p < 0.05), liberase MTF (p < 0.001), increased unit of collagenase (p < 0.05), duct cannulation time (<30 min, p < 0.05), and mechanical enzyme perfusion (p < 0.05) were independently associated with better islet morphology score. Analysis of islet manufacturing procedures from the pancreata with standardized quality is essential in identifying technical issues within islet isolation. Adequate processing duration in each step of islet isolation, using liberase MTF, and mechanical enzyme perfusion all affect isolation outcomes. PMID:27524672
Liu, Liang Qin; Moody, Julie; Traynor, Michael; Dyson, Sue; Gall, Angela
2014-11-01
Electrical stimulation (ES) can confer benefit to pressure ulcer (PU) prevention and treatment in spinal cord injuries (SCIs). However, clinical guidelines regarding the use of ES for PU management in SCI remain limited. To critically appraise and synthesize the research evidence on ES for PU prevention and treatment in SCI. Review was limited to peer-reviewed studies published in English from 1970 to July 2013. Studies included randomized controlled trials (RCTs), non-RCTs, prospective cohort studies, case series, case control, and case report studies. Target population included adults with SCI. Interventions of any type of ES were accepted. Any outcome measuring effectiveness of PU prevention and treatment was included. Methodological quality was evaluated using established instruments. Twenty-seven studies were included, 9 of 27 studies were RCTs. Six RCTs were therapeutic trials. ES enhanced PU healing in all 11 therapeutic studies. Two types of ES modalities were identified in therapeutic studies (surface electrodes, anal probe), four types of modalities in preventive studies (surface electrodes, ES shorts, sacral anterior nerve root implant, neuromuscular ES implant). The methodological quality of the studies was poor, in particular for prevention studies. A significant effect of ES on enhancement of PU healing is shown in limited Grade I evidence. The great variability in ES parameters, stimulating locations, and outcome measure leads to an inability to advocate any one standard approach for PU therapy or prevention. Future research is suggested to improve the design of ES devices, standardize ES parameters, and conduct more rigorous trials.
CR softcopy display presets based on optimum visualization of specific findings
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.
1999-07-01
The purpose of this research is to assess the utility of providing presets for computed radiography (CR) softcopy display based not on window/level settings but on image processing optimized for the visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station capable of performing multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the parameters of the multiscale image contrast amplification algorithm. Softcopy display of images processed with finding-specific settings is compared with the standard default image presentation for fifty cases of each category. Comparison is scored on a five-point scale, with positive one and two denoting that the standard presentation is preferred over the finding-specific presets, negative one and two denoting that the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.
Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav
2015-02-01
We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and is designed to estimate regression parameters at different scales and under potential nonstationarity and power-law correlations. The former feature allows effects for a pair of variables to be distinguished from different temporal perspectives. The latter features make the method a significant improvement over standard least-squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied to selected examples from physics, finance, environmental science, and epidemiology. For most of the studied cases, the relationship between the variables of interest varies strongly across scales.
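The idea of a scale-specific regression coefficient built from detrended variances and covariances can be sketched as follows. This is only a simplified illustration (linear detrending, non-overlapping windows, synthetic data), not the authors' estimator: beta(s) is formed as the ratio of the detrended covariance to the detrended variance at scale s.

```python
import numpy as np

def detrended_moment(x, y, s):
    """Average detrended covariance of the integrated profiles of x and y over windows of size s."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    n = (len(x) // s) * s
    t = np.arange(s)
    cov = 0.0
    for start in range(0, n, s):
        xs, ys = X[start:start + s], Y[start:start + s]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # residuals after linear detrending
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov += np.mean(rx * ry)
    return cov / (n // s)

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)
y = 0.7 * x + rng.standard_normal(4096)                  # hypothetical linear relation

for s in (16, 64, 256):
    beta_s = detrended_moment(x, y, s) / detrended_moment(x, x, s)
    print(f"scale {s:4d}: beta(s) ~ {beta_s:.3f}")
```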
Test of the cosmic transparency with the standard candles and the standard ruler
NASA Astrophysics Data System (ADS)
Chen, Jun
In this paper, the cosmic transparency is constrained using the latest baryon acoustic oscillation (BAO) data and type Ia supernova data with a model-independent method. We find that a transparent universe is consistent with observational data at the 1σ confidence level, except for the case of BAO + Union 2.1 without the systematic errors, where a transparent universe is favored only at the 2σ confidence level. To investigate the effect of the uncertainty in the Hubble constant on the test of cosmic opacity, we treat h as a free parameter and find that the observations favor a transparent universe at the 1σ confidence level.
The light and heavy Higgs interpretation of the MSSM
Bechtle, Philip; Haber, Howard E.; Heinemeyer, Sven; ...
2017-02-03
We perform a parameter scan of the phenomenological Minimal Supersymmetric Standard Model (pMSSM) with eight parameters, taking into account the experimental Higgs boson results from Run I of the LHC and further low-energy observables. We investigate various MSSM interpretations of the Higgs signal at 125 GeV. First, we consider the light CP-even Higgs boson as the discovered particle. In this case it can impersonate the SM Higgs-like signal either in the decoupling limit, or in the limit of alignment without decoupling. In the latter case, the other states in the Higgs sector can also be light, offering good prospects for upcoming LHC searches and for searches at future colliders. Second, we demonstrate that the heavy CP-even Higgs boson is still a viable candidate to explain the Higgs signal, albeit only in a highly constrained parameter region that will be probed by LHC searches for the CP-odd Higgs boson and the charged Higgs boson in the near future. As guidance for such searches we provide new benchmark scenarios that can be employed to maximize the sensitivity of the experimental analyses to this interpretation.
Indications of a late-time interaction in the dark sector.
Salvatelli, Valentina; Said, Najla; Bruni, Marco; Melchiorri, Alessandro; Wands, David
2014-10-31
We show that a general late-time interaction between cold dark matter and vacuum energy is favored by current cosmological data sets. We characterize the strength of the coupling by a dimensionless parameter q_V that is free to take different values in four redshift bins from the primordial epoch up to today. This interacting scenario is in agreement with measurements of cosmic microwave background temperature anisotropies from the Planck satellite, supernovae Ia from Union 2.1 and redshift space distortions from a number of surveys, as well as with combinations of these different data sets. Our analysis of the 4-bin interaction shows that a nonzero interaction is likely at late times. We then focus on the case q_V ≠ 0 in a single low-redshift bin, obtaining a nested one-parameter extension of the standard ΛCDM model. We study the Bayesian evidence, with respect to ΛCDM, of this late-time interaction model, finding moderate evidence for an interaction starting at z = 0.9, dependent upon the prior range chosen for the interaction strength parameter q_V. For this case the null interaction (q_V = 0, i.e., ΛCDM) is excluded at 99% C.L.
Sharing criteria and performance standards for the 11.7-12.2 GHz band in region 2
NASA Technical Reports Server (NTRS)
1976-01-01
Possible criteria for sharing between the broadcasting-satellite and the fixed-satellite services are considered for each of several parameters in three categories: system, space station, and earth station. Criteria for sharing between the two satellite services and the three terrestrial services to which the 12-GHz band is allocated are discussed separately, first for the case of the fixed and mobile services and then for the broadcasting service.
Indicators of Arctic Sea Ice Bistability in Climate Model Simulations and Observations
2014-09-30
ultimately developed a novel mathematical method to solve the system of equations involving the addition of a numerical “ghost” layer, as described in the... balance models (EBMs) and (ii) seasonally-varying single-column models (SCMs). As described in Approach item #1, we developed an idealized model that... includes both latitudinal and seasonal variations (Fig. 1). The model reduces to a standard EBM or SCM as limiting cases in the parameter space, thus
[Cardiac safety of electroconvulsive therapy in an elderly patient--a case report].
Karakuła-Juchnowicz, Hanna; Próchnicki, Michał; Kiciński, Paweł; Olajossy, Marcin; Pelczarska-Jamroga, Agnieszka; Dzikowski, Michał; Jaroszyński, Andrzej
2015-10-01
Since electroconvulsive therapy (ECT) was introduced as treatment for psychiatric disorders in 1938, it has remained one of the most effective therapeutic methods. ECT is often used as a "treatment of last resort" when other methods fail, and a life-saving procedure in acute clinical states when a rapid therapeutic effect is needed. Mortality associated with ECT is lower, compared to the treatment with tricyclic antidepressants, and comparable to that observed in so-called minor surgery. In the literature, cases of effective and safe electroconvulsive therapy have been described in patients of advanced age, with a burden of many somatic disorders. However, cases of acute cardiac episodes have also been reported during ECT. The qualification of patients for ECT and the selection of a group of patients at the highest risk of cardiovascular complications remains a serious clinical problem. An assessment of the predictive value of parameters of standard electrocardiogram (ECG), which is a simple, cheap and easily available procedure, deserves special attention. This paper reports a case of a 74-year-old male patient treated with ECT for a severe depressive episode, in the context of cardiologic safety. Both every single ECT session and the full course were assessed to examine their impact on levels of troponin T, which is a basic marker of cardiac damage, and selected ECG parameters (QTc, QRS). In the presented case ECT demonstrated its high general and cardiac safety with no negative effect on cardiac troponin (TnT) levels, corrected QT interval (QTc) duration, or other measured ECG parameters despite initially increased troponin levels, the patient's advanced age, the burden of a severe somatic disease and its treatment (anticancer therapy). © 2015 MEDPRESS.
Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L
2013-04-16
Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
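One plausible way to picture the nonlinear calibration described above is a hyperbolic response curve in which the analyte contributes a fixed fraction b of its signal to the internal-standard channel. The functional form below, R(x) = m·x / (1 + b·m·x), is an assumption for illustration only (the paper's exact expression is not reproduced); b plays the role of the experimentally determined constant and m the adjustable calibration parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

b = 0.02                                            # hypothetical cross-contribution constant
x_cal = np.array([0.5, 1, 2, 5, 10, 20, 50.0])      # hypothetical standard concentrations
r_cal = 0.1 * x_cal / (1 + b * 0.1 * x_cal)         # simulated observed area ratios
r_cal *= 1 + np.random.default_rng(4).normal(0, 0.01, x_cal.size)

def ratio_model(x, m):
    """Hyperbolic calibration: analyte isotopes add b*analyte signal to the IS channel."""
    return m * x / (1 + b * m * x)

(m_fit,), _ = curve_fit(ratio_model, x_cal, r_cal, p0=[0.05])

def quantify(r_obs):
    """Invert the calibration to recover concentration from an observed ratio."""
    return r_obs / (m_fit * (1 - b * r_obs))

print(f"m = {m_fit:.4f}, back-calculated 10-unit standard: {quantify(ratio_model(10, m_fit)):.2f}")
```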
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tridon, F.; Battaglia, A.; Luke, E.
A recently developed technique retrieving the binned raindrop size distributions (DSDs) and air state parameters from ground-based Ka- and W-band radar Doppler spectra profiles is improved and applied to a typical midlatitude rain event. The retrievals are thoroughly validated against DSD observations of a 2D video disdrometer and independent X-band observations. For this case study, profiles of rain rate R, mean volume diameter and concentration parameter are retrieved, with low bias and standard deviations. In light rain (0.1 < R < 1 mm h^-1), the radar reflectivities must be calibrated with a collocated disdrometer, which introduces random errors due to sampling mismatch between the two instruments. The best performances are obtained in moderate rain (1 < R < 20 mm h^-1), where the retrieval provides self-consistent estimates of the absolute calibration and of the attenuation caused by antenna or radome wetness for both radars.
Cosmic-ray antiprotons, positrons, and gamma rays from halo dark matter annihilation
NASA Technical Reports Server (NTRS)
Rudaz, S.; Stecker, F. W.
1988-01-01
The subject of cosmic ray antiproton production is reexamined by considering other choices for the nature of the Majorana fermion chi other than the photino considered in a previous article. The calculations are extended to include cosmic-ray positrons and cosmic gamma rays as annihilation products. Taking chi to be a generic higgsino or simply a heavy Majorana neutrino with standard couplings to the Z-zero boson allows the previous interpretation of the cosmic antiproton data to be maintained. In this case also, the annihilation cross section can be calculated independently of unknown particle physics parameters. Whereas the relic density of photinos with the choice of parameters in the previous paper turned out to be only a few percent of the closure density, the corresponding value for Omega in the generic higgsino or Majorana case is about 0.2, in excellent agreement with the value associated with galaxies and one which is sufficient to give the halo mass.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
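A minimal sketch of the fitting problem, under simplifying assumptions, is given below: a mixture cure model with Weibull latency is fitted to right-censored data by direct likelihood maximization rather than the EM steps described above, and standard errors are taken from the BFGS inverse-Hessian approximation to the inverse observed information matrix. Data, parameter values and the mixture (rather than Conway-Maxwell-Poisson) formulation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, cure_frac = 500, 0.3
cured = rng.random(n) < cure_frac
t_event = rng.weibull(1.5, size=n) * 2.0                      # latency for susceptible subjects
t_cens = rng.uniform(0.5, 6.0, size=n)                        # administrative censoring times
time = np.where(cured, t_cens, np.minimum(t_event, t_cens))
event = (~cured) & (t_event <= t_cens)

def negloglik(par):
    logit_pi, log_k, log_lam = par
    pi = 1 / (1 + np.exp(-logit_pi))                          # cure fraction
    k, lam = np.exp(log_k), np.exp(log_lam)                   # Weibull shape and scale
    S = np.exp(-(time / lam) ** k)
    f = (k / lam) * (time / lam) ** (k - 1) * S
    ll = np.where(event, np.log((1 - pi) * f), np.log(pi + (1 - pi) * S))
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
se = np.sqrt(np.diag(res.hess_inv))                           # approximate standard errors
print("estimates (logit_pi, log_k, log_lam):", res.x, "SEs:", se)
```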
The HEUN-SCHRÖDINGER Radial Equation for Dh-Atoms
NASA Astrophysics Data System (ADS)
Tarasov, V. F.
This article deals with the connection between Schrödinger's multidimensional equation for DH-atoms (D ≥ 1) and the confluent Heun equation with two auxiliary parameters ν and τ, where |1 - ν| = o(1) and τ ∈ ℚ+, which influence the spectrum of eigenvalues, the Coulomb potential and the radial function. The case τ = ν = 1 and D = 3 corresponds to the "standard" form of Schrödinger's equation for a 3H-atom. With the help of parameter ν, e.g., some "quantum corrections" may be considered. The cases 0 < τ < 1 and τ > 1, but with â = (n-l-1)τ ≥ 0 an integer, change the "geometry" of the electron cloud in the atom, i.e. the so-called "exotic" 3H-like atoms arise, where Kummer's function 1F1(-â; c; z) has â zeros and the discrete spectrum depends only on Z/(νn) but not on l and τ. Diagrams of the radial functions P̂_nl(r; τ, ν) for n ≤ 3 are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, C., E-mail: cbarone@unisa.it; Pagano, S., E-mail: spagano@unisa.it; Méchin, L.
2014-03-21
The problem of non-standard scaling of the 1/f noise in thin manganite films was revisited in the above paper, suggesting the quantum theory of fundamental flicker noise for the interpretation of the unusual dependence of the normalized Hooge parameter on the sample volume. Experimental evidence has been reported, showing that in these materials such volume dependence is, instead, an artifact of extrinsic noise sources, e.g., contact noise. Moreover, the proposed theoretical model implies a linear temperature dependence of the Hooge parameter, which is against the experimental data reported here. Based on these arguments, it is possible to conclude that the quantum theory of fundamental flicker noise cannot be applied to the case of La2/3Sr1/3MnO3 thin films.
Strange stars in f( R) theories of gravity in the Palatini formalism
NASA Astrophysics Data System (ADS)
Panotopoulos, Grigoris
2017-05-01
In the present work we study strange stars in f(R) theories of gravity in the Palatini formalism. We consider two concrete well-known cases, namely the R + R^2/(6M^2) model as well as the R - μ^4/R model, for two different values of the mass parameter M or μ. We integrate the modified Tolman-Oppenheimer-Volkoff equations numerically, and we show the mass-radius diagram for each model separately. The standard case corresponding to General Relativity is also shown in the same figure for comparison. Our numerical results show that the interior solution can be vastly different depending on the model and/or the value of the parameter of each model. In addition, our findings imply that (i) for the cosmologically interesting values of the mass scales M, μ the effect of modified gravity on strange stars is negligible, while (ii) for the values predicting an observable effect, the modified gravity models discussed here would be ruled out by their cosmological effects.
Investigation on Effect of Material Hardness in High Speed CNC End Milling Process.
Dhandapani, N V; Thangarasu, V S; Sureshkannan, G
2015-01-01
This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high-speed CNC end milling of various ferrous and nonferrous materials. It addresses the challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool, for the required quality and quantity of production. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision-machined part. The suggested prediction method is based on experimental analyses of parameters under different combinations of input conditions, which would benefit industry in the standardization of high-speed CNC end milling processes. The results provide a basis for selecting parameters to obtain better surface roughness values, as predicted by the case study results.
Cost-effectiveness of renin-guided treatment of hypertension.
Smith, Steven M; Campbell, Jonathan D
2013-11-01
A plasma renin activity (PRA)-guided strategy is more effective than standard care in treating hypertension (HTN). However, its clinical implementation has been slow, presumably due in part to economic concerns. We estimated the cost effectiveness of a PRA-guided treatment strategy compared with standard care in a treated but uncontrolled HTN population. We estimated costs, quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER) of PRA-guided therapy compared to standard care using a state-transition simulation model with alternate patient characteristic scenarios and sensitivity analyses. Patient-specific inputs for the base case scenario, males average age 63 years, reflected best available data from a recent clinical trial of PRA-guided therapy. Transition probabilities were estimated using Framingham risk equations or derived from the literature; costs and utilities were derived from the literature. In the base case scenario for males, the lifetime discounted costs and QALYs were $23,648 and 12.727 for PRA-guided therapy and $22,077 and 12.618 for standard care, respectively. The base case ICER was $14,497/QALY gained. In alternative scenario analyses varying patient input parameters, the results were sensitive to age, gender, baseline systolic blood pressure, and the addition of cardiovascular risk factors. Univariate sensitivity analyses demonstrated that results were most sensitive to varying the treatment effect of PRA-guided therapy and the cost of the PRA test. Our results suggest that PRA-guided therapy compared with standard care increases QALYs and medical costs in most scenarios. PRA-guided therapy appears to be most cost effective in younger persons and those with more cardiovascular risk factors. © American Journal of Hypertension, Ltd 2013. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
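For orientation, the base-case ICER follows directly from the reported costs and QALYs; the short calculation below uses the rounded values quoted above, so the result differs slightly from the published $14,497/QALY, which was presumably computed from unrounded inputs.

```python
# Incremental cost-effectiveness ratio from the base-case values quoted in the abstract
cost_pra, cost_std = 23648.0, 22077.0
qaly_pra, qaly_std = 12.727, 12.618

icer = (cost_pra - cost_std) / (qaly_pra - qaly_std)
print(f"ICER ~ ${icer:,.0f} per QALY gained")   # ~ $14,413/QALY from the rounded inputs
```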
Colored noise effects on batch attitude accuracy estimates
NASA Technical Reports Server (NTRS)
Bilanow, Stephen
1991-01-01
The effects of colored noise on the accuracy of batch least squares parameter estimates with applications to attitude determination cases are investigated. The standard approaches used for estimating the accuracy of a computed attitude commonly assume uncorrelated (white) measurement noise, while in actual flight experience measurement noise often contains significant time correlations and thus is colored. For example, horizon scanner measurements from low Earth orbit were observed to show correlations over many minutes in response to large-scale atmospheric phenomena. A general approach to the analysis of the effects of colored noise is investigated, and interpretation of the resulting equations provides insight into the effects of any particular noise color and the worst-case noise coloring for any particular parameter estimate. It is shown that for certain cases, the effects of relatively short-term correlations can be accommodated by a simple correction factor. The errors in the predicted accuracy assuming white noise and the reduced accuracy due to the suboptimal nature of estimators that do not take into account the noise color characteristics are discussed. The appearance of a variety of sample noise color characteristics is demonstrated through simulation, and their effects are discussed for sample estimation cases. Based on the analysis, options for dealing with the effects of colored noise are discussed.
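The general point can be illustrated with a minimal sketch, assuming a simple bias-plus-drift parameter model and AR(1) measurement noise (both hypothetical): the true covariance of the batch least-squares estimate is the "sandwich" (XᵀX)⁻¹ Xᵀ S X (XᵀX)⁻¹, which can differ noticeably from the white-noise formula σ²(XᵀX)⁻¹ used by standard accuracy estimates.

```python
import numpy as np

n, sigma, phi = 200, 1.0, 0.9                     # hypothetical AR(1) correlation phi
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t])              # simple bias + drift parameter model

# AR(1) noise covariance with marginal variance sigma^2: S_ij = sigma^2 * phi^|i-j|
idx = np.arange(n)
S = sigma**2 * phi ** np.abs(idx[:, None] - idx[None, :])

XtX_inv = np.linalg.inv(X.T @ X)
cov_white = sigma**2 * XtX_inv                    # what a white-noise analysis assumes
cov_colored = XtX_inv @ X.T @ S @ X @ XtX_inv     # actual covariance under colored noise

print("white-noise sigma of drift estimate :", np.sqrt(cov_white[1, 1]))
print("colored-noise sigma of drift estimate:", np.sqrt(cov_colored[1, 1]))
```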
SFEMG in ocular myasthenia gravis diagnosis.
Padua, L; Stalberg, E; LoMonaco, M; Evoli, A; Batocchi, A; Tonali, P
2000-07-01
In typical cases, the patient's history and clinical examination make it possible to diagnose ocular myasthenia gravis (OMG). In many cases, however, a clear clinical picture is not present and OMG diagnosis is very difficult because gold-standard diagnostic tests are not available. The diagnostic tests for OMG rarely achieve good sensitivity and specificity simultaneously. In this paper, we studied 86 cases referred for suspected OMG. The patients were studied clinically and with various other tests used in OMG diagnosis (SFEMG, repetitive nerve stimulation, anti-AChR antibody titration, Tensilon test). SFEMG showed the highest sensitivity (100%) while anti-AChR antibodies showed the highest specificity (100%). To our knowledge, this is the largest population of suspected OMG reported in the literature studied using most of the diagnostic parameters.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated as crucial or noncrucial. Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared with the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
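For reference, the textbook SVD-based ('closed form') multivariate TLS solution of Y - E_Y = (X - E_X)·Ξ can be sketched in a few lines of numpy. The data are synthetic, and the scaling and fixed-column refinements discussed in the paper are not included.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, d = 100, 3, 2
Xi_true = rng.standard_normal((m, d))
X_true = rng.standard_normal((n, m))
X = X_true + 0.01 * rng.standard_normal((n, m))      # both matrices observed with error
Y = X_true @ Xi_true + 0.01 * rng.standard_normal((n, d))

# SVD of the augmented matrix [X Y]; the TLS solution comes from the trailing right singular vectors
_, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
V = Vt.T
V12, V22 = V[:m, m:], V[m:, m:]
Xi_tls = -V12 @ np.linalg.inv(V22)

print("max abs error vs true parameters:", np.abs(Xi_tls - Xi_true).max())
```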
Asymptotic solutions for the case of nearly symmetric gravitational lens systems
NASA Astrophysics Data System (ADS)
Wertz, O.; Pelgrims, V.; Surdej, J.
2012-08-01
Gravitational lensing provides a powerful tool to determine the Hubble parameter H0 from the measurement of the time delay Δt between two lensed images of a background variable source. Nevertheless, knowledge of the deflector mass distribution constitutes a hurdle. We propose in the present work interesting solutions for the case of nearly symmetric gravitational lens systems. For the case of a small misalignment between the source, the deflector and the observer, we first consider power-law (ɛ) axially symmetric models for which we derive an analytical relation between the amplification ratio and source position which is independent of the power-law slope ɛ. According to this relation, we deduce an expression for H0 also irrespective of the value of ɛ. Secondly, we consider the power-law axially symmetric lens models with an external large-scale gravitational field, the shear γ, resulting in the so-called ɛ-γ models, for which we deduce simple first-order equations linking the model parameters and the lensed image positions, the latter being observable quantities. We also deduce simple relations between H0 and observable quantities only. From these equations, we may estimate the value of the Hubble parameter in a robust way. Nevertheless, comparison between the ɛ-γ and singular isothermal ellipsoid (SIE) models leads to the conclusion that these models remain most often distinct. Therefore, even for the case of a small misalignment, use of the first-order equations and precise astrometric measurements of the positions of the lensed images with respect to the centre of the deflector enables one to discriminate between these two families of models. Finally, we confront the models with numerical simulations to evaluate the intrinsic error of the first-order expressions used when deriving the model parameters under the assumption of a quasi-alignment between the source, the deflector and the observer. From these same simulations, we estimate for the case of the ɛ-γ family of models that the standard deviation affecting H0 is ? which merely reflects the adopted astrometric uncertainties on the relative image positions, typically ? arcsec. In conclusion, we stress the importance of getting very accurate measurements of the relative positions of the multiple lensed images and of the time delays for the case of nearly symmetric gravitational lens systems, in order to derive robust and precise values of the Hubble parameter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Geoffrey Wayne
2016-03-16
This document identifies scope and some general procedural steps for performing Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing. This Test Plan describes the requirements, responsibilities, and process for preparing and testing a range of chemical surrogates intended to mimic the energetic response of waste created during processing of legacy nitrate salts. The surrogates developed are expected to bound the thermal and mechanical sensitivity of such waste, allowing for the development of process parameters required to minimize the risk to worker and public when processing this waste. Such parameters will be based on the worst-case kinetic parameters as derived from APTAC measurements as well as the development of controls to mitigate sensitivities that may exist due to friction, impact, and spark. This Test Plan will define the scope and technical approach for activities that implement Quality Assurance requirements relevant to formulation and testing.
INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING
Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong
2017-01-01
Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected is beyond simulation analysis alone. Simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
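The first step of the proposed methodology, extracting the parameters that most affect performance before building simulation scenarios, can be sketched under simple assumptions as below. A random forest is used here as one possible importance-ranking method (the paper does not prescribe a specific algorithm), and the column names and performance measure are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "spindle_speed": rng.uniform(1000, 5000, 500),
    "feed_rate": rng.uniform(0.05, 0.5, 500),
    "queue_length": rng.integers(0, 20, 500),
    "operator_shift": rng.integers(1, 4, 500),
})
# Hypothetical performance measure influenced mainly by two of the parameters
df["throughput"] = (100 - 0.01 * df["feed_rate"] * df["spindle_speed"]
                    - 2 * df["queue_length"] + rng.normal(0, 5, 500))

X, y = df.drop(columns="throughput"), df["throughput"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, model.feature_importances_), key=lambda p: -p[1])
top_factors = [name for name, _ in ranking[:2]]        # carry these into the simulation scenarios
print("scenario factors:", top_factors)
```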
NASA Astrophysics Data System (ADS)
Wibowo; Fadillah, Y.
2018-03-01
Efficiency is very important in construction works. Concrete that is easy to work with and rapidly reaches its service strength helps determine the level of efficiency. In this research, we studied the optimization of accelerator usage for achieving compressive-strength performance of concrete as a function of time. Adding the accelerator at 0.3% - 2.3% of the cement weight has a positive impact on the rapid development of hardened concrete; however, faster strength gain over time also increases the filling-ability parameter values of the self-compacting concrete. The right accelerator dosage, aligned with the standard range of filling-ability parameter values for HSSCC, will provide useful guidance for producers in the ready-mix concrete industry.
rPM6 parameters for phosphorous and sulphur-containing open-shell molecules
NASA Astrophysics Data System (ADS)
Saito, Toru; Takano, Yu
2018-03-01
In this article, we have introduced a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing the two elements. Two sets of the parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated against 14 radical species, which contain either phosphorus or sulphur atom, comparing with the original UPM6 and the spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly, and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.
Impulse Current Waveform Compliance with IEC 60060-1
NASA Astrophysics Data System (ADS)
Sato, Shuji; Harada, Tatsuya; Yokoyama, Taizou; Sakaguchi, Sumiko; Ebana, Takao; Saito, Tatsunori
After numerous simulations, the authors were unable to design an impulse current calibrator whose output time parameters (front time T1 and time to half value T2) are close to the ideal values defined in IEC 60060-1. An investigation of this failed attempt was carried out. Using the normalized damped oscillating waveform e^(-t)·sin(ωt), the relationship between the ratio T2/T1 and the undershoot value is studied over the full range of the waveform parameter. From this relationship it is derived that 1) an ideal waveform cannot be generated unless a certain margin is accepted for the two parameters, and 2) even with the allowable margin, a waveform can be generated only when T1 is smaller and T2 is larger than the standard values. In the paper, the possible time-parameter combinations that fulfil IEC 60060-1 requirements are illustrated for a calibrator design.
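A hedged numerical sketch of evaluating such a damped oscillating impulse is given below: peak, undershoot, time to half value and a rise-time-based front time are found by brute force on a sampled waveform. The waveform constants, the 10%/90% levels and the 1.25 scaling factor are stated here purely as illustrative assumptions; the exact time-parameter definitions must be taken from IEC 60060-1 itself.

```python
import numpy as np

a, w = 1.0, 4.0                                   # hypothetical waveform constants
t = np.linspace(0.0, 10.0, 200_001)
i = np.exp(-a * t) * np.sin(w * t)                # damped oscillating impulse

ip = i.max()
tp = t[i.argmax()]
undershoot = abs(i.min()) / ip                    # relative undershoot of the opposite polarity

t10 = t[np.argmax(i >= 0.1 * ip)]                 # first crossings on the front
t90 = t[np.argmax(i >= 0.9 * ip)]
T1 = 1.25 * (t90 - t10)                           # assumed front-time convention, to be checked against the standard

after_peak = t > tp
T2 = t[after_peak][np.argmax(i[after_peak] <= 0.5 * ip)]   # time to half value on the tail (measured from t = 0 here)
print(f"T1 ~ {T1:.3f}, T2 ~ {T2:.3f}, T2/T1 ~ {T2/T1:.2f}, undershoot ~ {undershoot:.2f}")
```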
NASA Astrophysics Data System (ADS)
Farroni, Flavio; Lamberti, Raffaele; Mancinelli, Nicolò; Timpone, Francesco
2018-03-01
Tyres play a key role in ground vehicle dynamics because they are responsible for traction, braking and cornering. A proper tyre-road interaction model is essential for a useful and reliable vehicle dynamics model. In the last two decades Pacejka's Magic Formula (MF) has become a standard in the simulation field. This paper presents a tool, called TRIP-ID (Tyre Road Interaction Parameters IDentification), developed to characterize and identify MF micro-parameters with a high degree of accuracy and reliability from experimental data derived from telemetry or from a test rig. The tool interactively guides the user through the identification process on the basis of strong diagnostic considerations about the experimental data made evident by the tool itself. A motorsport application of the tool is shown as a case study.
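As background, the basic identification task is a nonlinear least-squares fit of the Magic Formula, y = D·sin(C·arctan(B·x - E·(B·x - arctan(B·x)))), to measured force-versus-slip data. The sketch below uses synthetic data and starting values; TRIP-ID's interactive guidance and diagnostics are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def magic_formula(x, B, C, D, E):
    """Pacejka Magic Formula with stiffness B, shape C, peak D and curvature E."""
    bx = B * x
    return D * np.sin(C * np.arctan(bx - E * (bx - np.arctan(bx))))

rng = np.random.default_rng(8)
slip = np.linspace(-0.3, 0.3, 61)                                    # slip angle [rad], hypothetical
fy = magic_formula(slip, 8.0, 1.4, 3500.0, -0.5) + rng.normal(0, 30, slip.size)  # synthetic telemetry [N]

p0 = [10.0, 1.3, 3000.0, 0.0]                                        # rough starting values
popt, pcov = curve_fit(magic_formula, slip, fy, p0=p0, maxfev=10000)
print("B, C, D, E =", np.round(popt, 3))
```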
LHC searches for dark sector showers
NASA Astrophysics Data System (ADS)
Cohen, Timothy; Lisanti, Mariangela; Lou, Hou Keong; Mishra-Sharma, Siddharth
2017-11-01
This paper proposes a new search program for dark sector parton showers at the Large Hadron Collider (LHC). These signatures arise in theories characterized by strong dynamics in a hidden sector, such as Hidden Valley models. A dark parton shower can be composed of both invisible dark matter particles as well as dark sector states that decay to Standard Model particles via a portal. The focus here is on the specific case of `semi-visible jets,' jet-like collider objects where the visible states in the shower are Standard Model hadrons. We present a Simplified Model-like parametrization for the LHC observables and propose targeted search strategies for regions of parameter space that are not covered by existing analyses. Following the `mono- X' literature, the portal is modeled using either an effective field theoretic contact operator approach or with one of two ultraviolet completions; sensitivity projections are provided for all three cases. We additionally highlight that the LHC has a unique advantage over direct detection experiments in the search for this class of dark matter theories.
Constraining torsion with Gravity Probe B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao Yi; Guth, Alan H.; Cabi, Serkan
2007-11-15
It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein's general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.
Renormalized Two-Fluid Hydrodynamics of Cosmic-Ray--modified Shocks
NASA Astrophysics Data System (ADS)
Malkov, M. A.; Voelk, H. J.
1996-12-01
A simple two-fluid model of diffusive shock acceleration, introduced by Axford, Leer, & Skadron and Drury & Völk, is revisited. This theory became a chief instrument in the studies of shock modification due to particle acceleration. Unfortunately its most intriguing steady state prediction about a significant enhancement of the shock compression and a corresponding increase of the cosmic-ray production violates assumptions which are critical for the derivation of this theory. In particular, for strong shocks the spectral flattening makes a cutoff-independent definition of pressure and energy density impossible and therefore causes an additional closure problem. Confining ourselves for simplicity to the case of plane shocks, assuming reacceleration of a preexisting cosmic-ray population, we argue that also under these circumstances the kinetic solution has a rather simple form. It can be characterized by only a few parameters, in the simplest case by the slope and the magnitude of the momentum distribution at the upper momentum cutoff. We relate these parameters to standard hydrodynamic quantities like the overall shock compression ratio and the downstream cosmic-ray pressure. The two-fluid theory produced in this way has the traditional form but renormalized closure parameters. By solving the renormalized Rankine-Hugoniot equations, we show that for the efficient stationary solution, most significant for cosmic-ray acceleration, the renormalization is needed in the whole parameter range of astrophysical interest.
Deflection of light by black holes and massless wormholes in massive gravity
NASA Astrophysics Data System (ADS)
Jusufi, Kimet; Sarkar, Nayan; Rahaman, Farook; Banerjee, Ayan; Hansraj, Sudan
2018-04-01
Weak gravitational lensing by black holes and wormholes in the context of massive gravity theory (Bebronne and Tinyakov, JHEP 0904:100, 2009) is studied. The particular solution examined is characterized by two integration constants, the mass M and an extra parameter S, namely the `scalar charge'. These black holes reduce to the standard Schwarzschild black hole solution when the scalar charge is zero and the mass is positive. In addition, a parameter λ in the metric characterizes so-called `hair'. The geodesic equations are used to examine the behavior of the deflection angle in four relevant cases of the parameter λ. Then, by introducing a simple coordinate transformation r^λ = S + v^2 into the black hole metric, we were able to find a massless wormhole solution of Einstein-Rosen (ER) type (Einstein and Rosen, Phys Rev 43:73, 1935) with scalar charge S. The programme is then repeated in terms of the Gauss-Bonnet theorem in the weak field limit, after a method is established to deal with the angle of deflection using different domains of integration depending on the parameter λ. In particular, we have found new analytical results corresponding to four special cases which generalize the well-known deflection angles reported in the literature. Finally, we have formulated the time delay problem in the spacetime of black holes and wormholes, respectively.
Accuracy of ultrasound in the detection of liver fibrosis in chronic viral hepatitis.
D'Onofrio, Mirko; Martone, Enrico; Brunelli, Silvia; Faccioli, Niccolò; Zamboni, Giulia; Zagni, Irene; Fattovich, Giovanna; Pozzi Mucelli, Roberto
2005-10-01
To assess the accuracy of ultrasonography (US) in the identification and grading of hepatic fibrosis in patients with chronic viral liver disease, compared to histological examination as the gold standard. We prospectively studied 105 patients (32 F, 73 M) affected by chronic viral liver disease over 36 months. Patients were studied with B-mode US and then underwent US-guided liver biopsy. All the patients were studied with conventional US with a Sequoia 512, 6.0 (Acuson, Mountain View CA, USA). We evaluated the following US parameters: liver margins, parenchymal echotexture, portal vein caliber and spleen diameter. The four B-mode US parameters were used for the US grading (from 0 to 4). Scheuer's grading (from 0 to 4) was used for the histological score. Grades 3 and 4 were considered as positive for fibrosis. Sensitivity, specificity, positive and negative predictive values and accuracy were calculated for the cases of absence of the US parameters, positivity of at least one US parameter, and positivity of all the US parameters. The correlation between US and histological scores was evaluated with Spearman's test. At histology, seventy-seven patients (73%) had absent (grade 0: 1 patient, 1%) or low-to-moderate (grade 1: 35 patients, 33%; grade 2: 41 patients, 39%) liver fibrosis. Twenty-eight patients (27%) had severe grade 3 (16 patients; 15%) or grade 4 (12 patients; 11%) fibrosis. In the case of absence of US parameters, sensitivity was 32%, specificity 32%, positive predictive value 15%, negative predictive value 57% and accuracy 32%. In the case of positivity of at least one of the US parameters the values were 68%, 68%, 43%, 84% and 69%. In the case of presence of all the US signs the results were 25%, 100%, 100%, 79% and 80%. None of the 77 patients with a healthy liver or with low-grade fibrosis was positive for all the US parameters. All the patients positive for all of the ultrasonographic parameters had high-grade fibrosis or cirrhosis at liver biopsy. The correlation between B-mode and histological scores was statistically significant (Rs = 0.45; p = 0.0001). US identification of liver fibrosis in chronic liver disease is possible with 25% sensitivity, 100% specificity, 100% positive predictive value and 79% negative predictive value, with an 80% diagnostic accuracy.
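The reported diagnostic performance figures follow from a standard 2x2 table. The short sketch below applies the usual definitions; the counts used reproduce the "all US parameters positive" criterion given the 28 patients with severe fibrosis and 77 without, as implied by the values quoted above.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table diagnostic performance measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts consistent with the 'all US parameters positive' criterion reported above
print(diagnostic_metrics(tp=7, fp=0, fn=21, tn=77))
```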
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, M. Q.; Alkofer, R.; Sorella, S. P.
2010-03-15
The low momentum behavior of the Landau gauge Gribov-Zwanziger action is investigated using the respective Dyson-Schwinger equations. Because of the mixing of the gluon and the auxiliary fields, four scenarios can be distinguished for the infrared behavior. Two of them lead to inconsistencies and can be discarded. Another one corresponds to the case where the auxiliary fields behave exactly like the Faddeev-Popov ghosts and the same scaling relation as in standard Landau gauge, κ_A + 2κ_c = 0, is valid. Even the parameter κ is found to be the same, 0.595. The mixed propagators, which appear, are suppressed in all loops, and their anomalous infrared exponent can also be determined. A fourth case provides an even stricter scaling relation that includes also the mixed propagators, but possesses the same qualitative feature, i.e. the propagators of the Faddeev-Popov ghost and the auxiliary fields are infrared enhanced and the mixed and the gluon propagators are infrared suppressed. In this case the system of equations to obtain the parameter κ is nonlinear in all variables.
Artificial neural networks for modeling ammonia emissions released from sewage sludge composting
NASA Astrophysics Data System (ADS)
Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.
2012-09-01
The project was designed to develop, test and validate an original Neural Model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached the high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. The sensitivity analysis of the model for the input variables of the process in question has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).
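As a rough illustration of the kind of model described (7 inputs, 1 output, a 330/110/110 learning/validation/test split), the following sketch uses a generic feed-forward regressor; the file name, column layout and network size are hypothetical and are not taken from the study:

```python
# Minimal sketch (not the authors' code): feed-forward regressor with 7 composting
# parameters as inputs and ammonia emission as the single output.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

data = pd.read_csv("compost_cases.csv")          # hypothetical file with 550 cases
X = data.iloc[:, :7].to_numpy()                  # 7 process parameters (e.g. pH, C:N, ...)
y = data.iloc[:, 7].to_numpy()                   # ammonia emission

# 330 learning (ZU), 110 validation (ZW), 110 test (ZT) cases
X_zu, X_rest, y_zu, y_rest = train_test_split(X, y, train_size=330, random_state=0)
X_zw, X_zt, y_zw, y_zt = train_test_split(X_rest, y_rest, train_size=110, random_state=0)

scaler = StandardScaler().fit(X_zu)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_zu), y_zu)

# report the correlation coefficient on the test file, analogous to the abstract's metric
r = np.corrcoef(y_zt, model.predict(scaler.transform(X_zt)))[0, 1]
print(f"test correlation: {r:.3f}")
```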
Improving Information Exchange in the Chicken Processing Sector Using Standardised Data Lists
NASA Astrophysics Data System (ADS)
Donnelly, Kathryn Anne-Marie; van der Roest, Joop; Höskuldsson, Stefán Torfi; Olsen, Petter; Karlsen, Kine Mari
Research has shown that to improve electronic communication between companies, universal standardised data lists are necessary. In food supply chains in particular there is an increased need to exchange data in the wake of food safety incidents. Food supply chain companies already record numerous measurements, properties and parameters. These records are necessary for legal reasons, labelling, traceability, profiling desirable characteristics, showing compliance and for meeting customer requirements. Universal standards for name and content of each of these data elements would improve information exchange between buyers, sellers, authorities, consumers and other interested parties. A case study, carried out for the chicken sector, attempted to identify the most relevant parameters including which of these were already communicated to external bodies.
Quark ensembles with the infinite correlation length
NASA Astrophysics Data System (ADS)
Zinov'ev, G. M.; Molodtsov, S. V.
2015-01-01
A number of exactly integrable (quark) models of quantum field theory with the infinite correlation length have been considered. It has been shown that the standard vacuum quark ensemble—Dirac sea (in the case of the space-time dimension higher than three)—is unstable because of the strong degeneracy of a state, which is due to the character of the energy distribution. When the momentum cutoff parameter tends to infinity, the distribution becomes infinitely narrow, leading to large (unlimited) fluctuations. Various vacuum ensembles—Dirac sea, neutral ensemble, color superconductor, and BCS state—have been compared. In the case of the color interaction between quarks, the BCS state has been certainly chosen as the ground state of the quark ensemble.
McCarthy, Caroline; Brady, Paul; O'Halloran, Ken D.; McCreary, Christine
2016-01-01
Hyperventilation can be a manifestation of anxiety that involves abnormally fast breathing (tachypnea) and an elevated minute ventilation that exceeds metabolic demand. This report describes a case of hyperventilation-induced hypocapnia resulting in tetany in a 16-year-old girl undergoing orthodontic extractions under intravenous conscious sedation. Pulse oximetry is the gold standard respiratory-related index in conscious sedation. Although the parameter has great utility in determining oxygen desaturation, it provides no additional information on respiratory function, including, for example, respiratory rate. In this case, we found capnography to be a very useful aid to monitor respiration in this patient and also to treat the hypocapnia. PMID:26866408
Aspects of noncommutative (1+1)-dimensional black holes
NASA Astrophysics Data System (ADS)
Mureika, Jonas R.; Nicolini, Piero
2011-08-01
We present a comprehensive analysis of the spacetime structure and thermodynamics of (1+1)-dimensional black holes in a noncommutative framework. It is shown that a wider variety of solutions is possible than in the commutative case considered previously in the literature. As expected, the introduction of a minimal length θ cures the singularity pathologies that plague the standard two-dimensional general relativistic case, whose solution is recovered at large length scales. Depending on the choice of input parameters (black hole mass M, cosmological constant Λ, etc.), black hole solutions with zero up to six horizons are possible. The associated thermodynamics allows for either complete evaporation or the production of black hole remnants.
Can nonstandard interactions jeopardize the hierarchy sensitivity of DUNE?
NASA Astrophysics Data System (ADS)
Deepthi, K. N.; Goswami, Srubabati; Nath, Newton
2017-10-01
We study the effect of nonstandard interactions (NSIs) on the propagation of neutrinos through the Earth's matter and how it affects the hierarchy sensitivity of the DUNE experiment. We emphasize the special case when the diagonal NSI parameter ε_ee = -1, nullifying the standard matter effect. We show that if, in addition, CP violation is maximal, then this gives rise to an exact intrinsic hierarchy degeneracy in the appearance channel, irrespective of the baseline and energy. Introduction of the off-diagonal NSI parameter ε_eτ shifts the position of this degeneracy to a different ε_ee. Moreover, the unknown magnitudes and phases of the off-diagonal NSI parameters can give rise to additional degeneracies. Overall, given the current model-independent limits on NSI parameters, the hierarchy sensitivity of DUNE can be seriously impacted. However, a more precise knowledge of the NSI parameters, especially ε_ee, can give rise to an improved sensitivity. Alternatively, if NSI exists in nature and DUNE still shows hierarchy sensitivity, certain ranges of the NSI parameters can be excluded. Additionally, we briefly discuss the implications of ε_ee = -1 (in the Earth) for the Mikheyev-Smirnov-Wolfenstein effect in the Sun.
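For context, the statement that ε_ee = -1 nullifies the standard matter effect follows from the usual NSI parametrization of the matter term in the flavor-basis Hamiltonian (a standard form, not quoted from the paper):

\[
H_{\mathrm{mat}} \;=\; \sqrt{2}\,G_F N_e
\begin{pmatrix}
1+\varepsilon_{ee} & \varepsilon_{e\mu} & \varepsilon_{e\tau}\\
\varepsilon_{e\mu}^{*} & \varepsilon_{\mu\mu} & \varepsilon_{\mu\tau}\\
\varepsilon_{e\tau}^{*} & \varepsilon_{\mu\tau}^{*} & \varepsilon_{\tau\tau}
\end{pmatrix},
\]

so the ee entry, and with it the standard charged-current potential, vanishes when ε_ee = -1 and the off-diagonal NSI parameters are zero.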
NASA Astrophysics Data System (ADS)
El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.
2016-02-01
Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of the ocean biology. Furthermore, various biological parameters remain poorly known, and wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistencies, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often carried out by separating the update of the state and the parameters using the so-called dual EnKF. The dual filter is computationally more expensive than the joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, and a model propagation step is performed afterwards. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profiles (down to 2000 m depth) and surface partial CO2 measurements from the Mike weather station (66°N, 2°E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of complexity and accuracy of the state and parameter estimates.
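A minimal sketch of the stochastic (joint) EnKF analysis step that the dual and smoothing-based schemes build on; this is generic illustrative code, not the NorESM assimilation system, and all array shapes and values are assumptions:

```python
# Minimal sketch: one stochastic EnKF analysis step for a joint (augmented)
# state-parameter ensemble with a linear observation operator.
import numpy as np

def enkf_analysis(ensemble, obs, obs_err_std, H):
    """ensemble: (n_aug, n_members) forecast ensemble of state stacked with parameters;
    obs: (n_obs,) observation vector; H: (n_obs, n_aug) linear observation operator."""
    n_aug, n_mem = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                                   # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_mem - 1)                    # P_f H^T
    S = H @ P_HT + obs_err_std**2 * np.eye(len(obs))      # H P_f H^T + R
    K = P_HT @ np.linalg.inv(S)                           # Kalman gain
    # perturbed observations (stochastic EnKF)
    obs_pert = obs[:, None] + obs_err_std * np.random.randn(len(obs), n_mem)
    return ensemble + K @ (obs_pert - H @ ensemble)       # updated state and parameters

# toy usage: 40-member ensemble of a 10-variable state augmented with 2 parameters
ens = np.random.randn(12, 40)
H = np.zeros((3, 12)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0  # observe three state variables
updated = enkf_analysis(ens, obs=np.array([0.5, -0.2, 1.0]), obs_err_std=0.1, H=H)
```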
Robust Regression through Robust Covariances.
1985-01-01
...we apply (2.3). But first let us examine the influence function (see Hampel (1974)). In order to simplify the formulas we will first consider the case... remember that the influence function is an asymptotic tool and that therefore the population values of our estimators appear in the formula. V(GR) is... the parameter (a, V) based on the data Z1, ..., Zn via... Now we can apply the standard formulas to get the influence function (see Huber (1981)...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tereshchenko, M. A., E-mail: maxt@inbox.ru
A study is made of the microwave beam evolution due to passing through the stagnation zone, where the group velocity vanishes, thus making the paraxial approximation for the wavefield inappropriate. An extension to the standard beam tracing technique is suggested that allows one to calculate the microwave beam parameters on either branch of its path apart from the stagnation zone, omitting the calculation of the wavefield inside it. Application examples of the extended technique are presented for the case of microwave reflection from the upper hybrid resonance layer in a tokamak plasma.
Critical and compensation phenomena in a mixed-spin ternary alloy: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Žukovič, M.; Bobák, A.
2010-10-01
By means of standard and histogram Monte Carlo simulations, we investigate the critical and compensation behaviour of a ternary mixed-spin alloy of the type AB_pC_{1-p} on a cubic lattice. We focus on the case with the parameters corresponding to the Prussian blue analog (Ni_p^{II}Mn_{1-p}^{II})_{1.5}[Cr^{III}(CN)_6]·nH2O and confront our findings with those obtained by some approximative approaches and the experiments.
Actively mode-locked diode laser with a mode spacing stability of ∼6 × 10^-14
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakharyash, V F; Kashirsky, A V; Klementyev, V M
We have studied mode spacing stability in an actively mode-locked external-cavity semiconductor laser. It has been shown that, in the case of mode spacing pulling to the frequency of a highly stable external microwave signal produced by a hydrogen standard (stability of 4 × 10^-14 over an averaging period τ = 10 s), this configuration ensures a mode spacing stability of 5.92 × 10^-14 (τ = 10 s). (Control of radiation parameters.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gromov, N. A., E-mail: gromov@dm.komisc.ru
The very weak neutrino-matter interactions are explained with the help of the gauge group contraction of the standard Electroweak Model. The mathematical contraction procedure is connected with the energy dependence of the interaction cross section for neutrinos and corresponds to the limiting case of the Electroweak Model at low energies. The contraction parameter is connected with the universal Fermi constant of weak interactions and the neutrino energy as j²(s) = √(G_F s).
Premise for Standardized Sepsis Models.
Remick, Daniel G; Ayala, Alfred; Chaudry, Irshad; Coopersmith, Craig M; Deutschman, Clifford; Hellman, Judith; Moldawer, Lyle; Osuchowski, Marcin
2018-06-05
Sepsis morbidity and mortality exact a toll on patients and contribute significantly to healthcare costs. Preclinical models of sepsis have been used to study disease pathogenesis and test new therapies, but divergent outcomes have been observed with the same treatment even when using the same sepsis model. Other disorders such as diabetes, cancer, malaria, obesity and cardiovascular diseases have used standardized, preclinical models that allow laboratories to compare results. Standardized models accelerate the pace of research, and such models have been used to test new therapies or changes in treatment guidelines. The National Institutes of Health (NIH) has mandated that investigators increase data reproducibility and the rigor of scientific experiments and has also issued research funding announcements about the development and refinement of standardized models. Our premise is that refinement and standardization of preclinical sepsis models may accelerate the development and testing of potential therapeutics for human sepsis, as has been the case with preclinical models for other disorders. As a first step towards creating standardized models, we suggest 1) standardizing the technical standards of the widely used cecal ligation and puncture model and 2) creating a list of appropriate organ injury and immune dysfunction parameters. Standardized sepsis models could enhance reproducibility and allow comparison of results between laboratories and may accelerate our understanding of the pathogenesis of sepsis.
The reliability of forensic osteology--a case in point. Case study.
Kemkes-Grottenthaler, A
2001-03-01
The medico-legal investigation of skeletons is a trans-disciplinary effort by forensic scientists as well as physical anthropologists. The advent of DNA extraction and amplification from bones and teeth has led to the assumption that morphological assessment of skeletal remains might soon become obsolete. But despite the introduction and success of molecular biology, the analysis of skeletal biology will remain an integral part of the identification process. This is due to the fact that the skeletal record allows relatively fast and accurate inferences about the identity of the victim. Moreover, a standard biological profile may be established to effectively narrow the police investigator's search parameters. The following study demonstrates how skeletal biology may contribute to the forensic investigation and support DNA fingerprinting evidence. In this case, the information gained from standard morphological methods about the unknown person's sex, age and heritage immediately led the police to suspect that the remains were those of a young man from Vietnam, who had been missing for 2.5 years. The investigation then quickly shifted to proving the victim's identity via DNA extraction, mtDNA sequence analysis and biostatistical calculations involving questions of kinship [4].
Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities
NASA Astrophysics Data System (ADS)
Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii., IV.
A great deal of attention has been paid to the study of the probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of the medium dielectric permittivity. However, the problem of determining the probability density and the average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of the fluctuations has not been considered yet. It is the purpose of the present report to find and to analyze the indicated functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of individual fluctuations, as well as the interval between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing Gaussian distributions. We considered the stationary-time cases of both small-scale and large-scale rarefied inhomogeneities. Mathematically such a problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtained the expression for the probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U - <U>) are equal to zero and the even moments depend on the intensity. It was obtained that the probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard fluctuation of the scattered field for all considered cases of inhomogeneities. The value of the average intensity of the field is of the order of the standard deviation of the field-intensity fluctuations and decreases as the inhomogeneity length increases in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities. The average intensity is an oscillating function of the average fluctuation length if the standard deviation of the fluctuations of the inhomogeneity length is greater than the wavelength. When the standard deviation of the fluctuations of the medium inhomogeneity extension is smaller than the wavelength, the average intensity depends weakly on the average fluctuation extension. The obtained results may be used for the analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as leaves of trees, cumulus clouds, internal gravity waves with a chaotic phase, etc. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).
Comparison of vibration damping of standard and PDCPD housing of the electric power steering system
NASA Astrophysics Data System (ADS)
Płaczek, M.; Wróbel, A.; Baier, A.
2017-08-01
A comparison of two different types of electric power steering system housing is presented. The first type of housing considered is a standard one made of an aluminium alloy. The second is made of polydicyclopentadiene polymer (PDCPD) and was produced using the RIM technology. The considered elements were analysed in order to verify their vibration damping properties. This property is very important taking into account the noise generated by elements of a car's power steering system. During the tests, vibrations of the analysed power steering housings were measured using Macro Fiber Composite (MFC) piezoelectric transducers. Results obtained for both power steering housings under the same vibration excitation parameters were measured and compared. The obtained results were analysed in order to verify whether the housing made of PDCPD polymer has better vibration damping properties than the standard one.
Petrovic, Igor; Hip, Ivan; Fredlund, Murray D
2016-09-01
The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, is of primary concern for waste stability problems due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from the published literature and analyzed. The basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong but is still significant. Through use of a distribution fitting method it was found that the shear friction angle can be fitted to a normal probability density function while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimated Method (APEM) stability analysis of a real case scenario, the Jakusevec landfill. The Jakusevec landfill is the disposal site of the capital of Croatia, Zagreb. The analysis shows that in the case of a dry landfill the most significant factor influencing the safety factor was the shear friction angle of the old, decomposed waste material, while in the case of a landfill with a significant leachate level the most significant factor influencing the safety factor was the cohesion of the old, decomposed waste material. The analysis also showed that a satisfactory level of performance with a small probability of failure was obtained for the standard practice design of waste landfills as well as for an analysis scenario immediately after landfill closure. Copyright © 2015 Elsevier Ltd. All rights reserved.
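A minimal sketch of the distribution-fitting step described above, with purely hypothetical data values; the paper's actual dataset, fitting procedure and bivariate normal-lognormal model are not reproduced here:

```python
# Fit a normal PDF to friction angle and a log-normal PDF to cohesion, then read off
# a 70% central interval for each, mirroring the "confidence interval" idea above.
import numpy as np
from scipy import stats

phi = np.array([22., 25., 28., 30., 33., 35., 26., 24., 31., 29.])  # friction angle [deg], hypothetical
c = np.array([5., 12., 20., 8., 15., 25., 10., 18., 30., 14.])      # cohesion [kPa], hypothetical

mu_phi, sd_phi = stats.norm.fit(phi)
shape_c, loc_c, scale_c = stats.lognorm.fit(c, floc=0.0)

print("friction angle 70% interval:", stats.norm.interval(0.70, mu_phi, sd_phi))
print("cohesion 70% interval:", stats.lognorm.interval(0.70, shape_c, loc_c, scale_c))
```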
Adaptive estimation of a time-varying phase with a power-law spectrum via continuous squeezed states
NASA Astrophysics Data System (ADS)
Dinani, Hossein T.; Berry, Dominic W.
2017-06-01
When measuring a time-varying phase, the standard quantum limit and Heisenberg limit, as usually defined for a constant phase, do not apply. If the phase has Gaussian statistics and a power-law spectrum 1/|ω|^p with p > 1, then the generalized standard quantum limit and Heisenberg limit have recently been found to have scalings of 1/N^{(p-1)/p} and 1/N^{2(p-1)/(p+1)}, respectively, where N is the mean photon flux. We show that this Heisenberg scaling can be achieved via adaptive measurements on squeezed states. We predict the experimental parameters analytically, and test them with numerical simulations. Previous work had considered the special case of p = 2.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2018-02-01
The Monte Carlo method is applied to the study of the relaxation of excited electron-hole (e-h) pairs in graphene. The presence of a background of spin-polarized electrons, with a high density imposing degeneracy conditions, is assumed. Into such a system, a number of e-h pairs with spin polarization parallel or antiparallel to the background is injected. Two stages of relaxation, thermalization and cooling, are clearly distinguished when the average particle energy ⟨E⟩ and its standard deviation σ_E are examined. At the very beginning of the thermalization phase, holes lose energy to electrons, and after this process is substantially completed, the particle distributions reorganize to take a Fermi-Dirac shape. To describe the evolution of ⟨E⟩ and σ_E during thermalization, we define characteristic times τ_th and values at the end of thermalization E_th and σ_th. The dependence of these parameters on various conditions, such as temperature and background density, is presented. It is shown that among the considered parameters, only the standard deviation of the electron energy allows one to distinguish between different cases of relative spin polarization of the background and excited electrons.
Physiological parameters monitoring of fire-fighters by means of a wearable wireless sensor system
NASA Astrophysics Data System (ADS)
Stelios, M.; Mitilineos, Stelios A.; Chatzistamatis, Panagiotis; Vassiliadis, Savvas; Primentas, Antonios; Kogias, Dimitris; Michailidis, Emmanouel T.; Rangoussi, Maria; Kurşun Bahadir, Senem; Atalay, Özgür; Kalaoğlu, Fatma; Sağlam, Yusuf
2016-03-01
Physiological parameter monitoring may be useful in many different groups of the population, such as infants, elderly people, athletes, soldiers, drivers, fire-fighters, police, etc. It can provide a variety of information ranging from health status to operational readiness. In this article, we focus on the case of first responders and specifically fire-fighters. Firefighters can benefit from a physiological monitoring system that is used to extract multiple indications such as the present position, the possible life risk level, the stress level, etc. This work presents a wearable wireless sensor network node, based on low-cost, commercial-off-the-shelf (COTS) electronic modules, which can be easily attached to a standard fire-fighters' uniform. Owing to the low-frequency wired interface between the selected electronic components, the proposed solution can be used as a basis for a textile system where all wired connections will be implemented by means of conductive yarn routing in the textile structure, while some of the standard sensors can be replaced by textile ones. The system architecture is described in detail, while indicative samples of acquired signals are also presented.
Analysis of JT-60SA operational scenarios
NASA Astrophysics Data System (ADS)
Garzotti, L.; Barbato, E.; Garcia, J.; Hayashi, N.; Voitsekhovitch, I.; Giruzzi, G.; Maget, P.; Romanelli, M.; Saarelma, S.; Stankiewitz, R.; Yoshida, M.; Zagórski, R.
2018-02-01
Reference scenarios for the JT-60SA tokamak have been simulated with one-dimensional transport codes to assess the stationary state of the flat-top phase and provide a profile database for further physics studies (e.g. MHD stability, gyrokinetic analysis) and diagnostics design. The types of scenario considered vary from pulsed standard H-mode to advanced non-inductive steady-state plasmas. In this paper we present the results obtained with the ASTRA, CRONOS, JINTRAC and TOPICS codes equipped with the Bohm/gyro-Bohm, CDBM and GLF23 transport models. The scenarios analysed here are: a standard ELMy H-mode, a hybrid scenario and a non-inductive steady state plasma, with operational parameters from the JT-60SA research plan. Several simulations of the scenarios under consideration have been performed with the above mentioned codes and transport models. The results from the different codes are in broad agreement and the main plasma parameters generally agree well with the zero dimensional estimates reported previously. The sensitivity of the results to different transport models and, in some cases, to the ELM/pedestal model has been investigated.
Dark Matter Decays from Nonminimal Coupling to Gravity.
Catà, Oscar; Ibarra, Alejandro; Ingenhütt, Sebastian
2016-07-08
We consider the standard model extended with a dark matter particle in curved spacetime, motivated by the fact that the only current evidence for dark matter is through its gravitational interactions, and we investigate the impact on the dark matter stability of terms in the Lagrangian linear in the dark matter field and proportional to the Ricci scalar. We show that this "gravity portal" induces decay even if the dark matter particle only has gravitational interactions, and that the decay branching ratios into standard model particles only depend on one free parameter: the dark matter mass. We study in detail the case of a singlet scalar as a dark matter candidate, which is assumed to be absolutely stable in flat spacetime due to a discrete Z_{2} symmetry, but which may decay in curved spacetimes due to a Z_{2}-breaking nonminimal coupling to gravity. We calculate the dark matter decay widths and we set conservative limits on the nonminimal coupling parameter from experiments. The limits are very stringent and suggest that there must exist an additional mechanism protecting the singlet scalar from decaying via this gravity portal.
Neutrino Oscillations: A Phenomenological Approach
NASA Astrophysics Data System (ADS)
Fogli, G. L.; Lisi, E.; Marrone, A.; Palazzo, A.; Rotunno, A. M.; Montanino, D.
We review the status of neutrino oscillation physics, with particular emphasis on the present knowledge of the neutrino mass-mixing parameters. We consider first the νμ → ντ flavor transitions of atmospheric neutrinos. It is found that standard oscillations provide the best description of the SK+K2K data, and that the associated mass-mixing parameters are determined at ±1σ (and NDF = 1) as: Δm² = (2.6 ± 0.4) × 10⁻³ eV² and sin²2θ = 1.00^{+0.00}_{-0.05}. Such indications, presently dominated by SK, could be strengthened by further K2K data. Then we point out that the recent data from the Sudbury Neutrino Observatory, together with other relevant measurements from solar and reactor neutrino experiments, in particular the KamLAND data, convincingly show that the flavor transitions of solar neutrinos are affected by Mikheyev-Smirnov-Wolfenstein (MSW) effects. Finally, we perform an updated analysis of two-family active oscillations of solar and reactor neutrinos in the standard MSW case.
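The quoted Δm² and sin²2θ enter the standard two-flavor vacuum oscillation probability, written here in the usual units as background rather than as a formula taken from the review:

\[
P(\nu_\mu \to \nu_\mu) \;=\; 1 - \sin^2 2\theta \,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV^2}]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right).
\]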
NASA Technical Reports Server (NTRS)
Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym
2017-01-01
Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.
Dispersal and population dynamics (Dispersion y dinamica poblacional)
USDA-ARS?s Scientific Manuscript database
Dispersal behavior of fruit flies is appetitive. Measures of dispersion involve two different parameters: the maximum distance and the standard distance. Standard distance is a parameter that describes the probability of dispersion and is mathematically equivalent to the standard deviation around ...
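For reference, one common definition of the standard distance about the centroid \((\bar{x},\bar{y})\) of n recapture points is given below; since the abstract's definition is truncated, this is an assumption about the intended quantity rather than the manuscript's own formula:

\[
d_s = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(x_i-\bar{x})^2 + (y_i-\bar{y})^2\right]} .
\]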
Non-Standard Interactions in propagation at the Deep Underground Neutrino Experiment
Coloma, Pilar
2016-03-03
Here, we study the sensitivity of current and future long-baseline neutrino oscillation experiments to the effects of dimension-six operators affecting neutrino propagation through the Earth, commonly referred to as Non-Standard Interactions (NSI). All relevant parameters entering the oscillation probabilities (standard and non-standard) are considered at once, in order to take into account possible cancellations and degeneracies between them. We find that the Deep Underground Neutrino Experiment will significantly improve over current constraints for most NSI parameters. Most notably, it will be able to rule out the so-called LMA-dark solution, still compatible with current oscillation data, and will be sensitive to off-diagonal NSI parameters at the level of ε ~ O(0.05-0.5). We also identify two degeneracies among standard and non-standard parameters, which could be partially resolved by combining T2HK and DUNE data.
Determination of service standard time for liquid waste parameter in certification institution
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Kusumawaty, D.
2018-02-01
Baristand Industry Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industry Medan is the liquid waste testing service. The company set a service standard of 9 working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet the company's specified service standard. The purpose of this research is to determine the standard time of each parameter in the liquid waste testing service. The method used is the stopwatch time study. There are 45 test parameters in the liquid waste laboratory. Time measurements were made on 4 samples per test parameter using a stopwatch. From the measurement results, the standard minimum service time for liquid waste testing is 13 working days when E. coli testing is included.
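The stopwatch time study mentioned above conventionally converts observed times into standard times through a rating factor and an allowance; the generic relations (not the study's specific rating or allowance values) are:

\[
\text{Normal time} = \text{Observed time} \times \text{Rating factor}, \qquad
\text{Standard time} = \text{Normal time} \times (1 + \text{Allowance}).
\]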
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
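For reference, the Cox proportional hazards model underlying the nested case-control MLE has the standard form (background notation, not reproduced from the paper):

\[
\lambda(t \mid Z) = \lambda_0(t)\,\exp(\beta^{\top} Z),
\]

where \(\lambda_0(t)\) is the unspecified baseline hazard and β the vector of regression coefficients estimated here by the EM-based maximum likelihood procedure.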
Morrison, Sarah A; Forrest, Gail F; VanHiel, Leslie R; Davé, Michele; D'Urso, Denise
2012-09-01
To illustrate the continuity of care afforded by a standardized locomotor training program across a multisite network setting within the Christopher and Dana Reeve Foundation NeuroRecovery Network (NRN). Single patient case study. Two geographically different hospital-based outpatient facilities. This case highlights a 25-year-old man diagnosed with C4 motor incomplete spinal cord injury with American Spinal Injury Association Impairment Scale grade D. Standardized locomotor training program 5 sessions per week for 1.5 hours per session, for a total of 100 treatment sessions, with 40 sessions at 1 center and 60 at another. Ten-meter walk test and 6-minute walk test were assessed at admission and discharge across both facilities. For each of the 100 treatment sessions percent body weight support, average, and maximum treadmill speed were evaluated. Locomotor endurance, as measured by the 6-minute walk test, and overground gait speed showed consistent improvement from admission to discharge. Throughout training, the patient decreased the need for body weight support and was able to tolerate faster treadmill speeds. Data indicate that the patient continued to improve on both treatment parameters and walking function. Standardization across the NRN centers provided a mechanism for delivering consistent and reproducible locomotor training programs across 2 facilities without disrupting training or recovery progression. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
ELF field in the proximity of complex power line configuration measurement procedures.
Benes, M; Comelli, M; Villalta, R
2006-01-01
The issue of how to measure magnetic induction fields generated by various power line configurations, when there are several power lines that run across the same exposure area, has become a matter of interest and study within the Regional Environment Protection Agency of Friuli Venezia Giulia. In classifying the various power line typologies the definition of double circuit line was given: in this instance the magnetic field is determined by knowing the electrical and geometric parameters of the line. In the case of independent lines instead, the field is undetermined. It is therefore pointed out how, in the latter case, extracting previsional information from a set of measurements of the magnetic field alone is impossible. Making measurements throughout the territory of service has in several cases offered the opportunity to define standard operational procedures.
Comparison between Smoluchowski and Boltzmann approaches for self-propelled rods.
Bertin, Eric; Baskaran, Aparna; Chaté, Hugues; Marchetti, M Cristina
2015-10-01
Considering systems of self-propelled polar particles with nematic interactions ("rods"), we compare the continuum equations describing the evolution of polar and nematic order parameters, derived either from Smoluchowski or Boltzmann equations. Our main goal is to understand the discrepancies between the continuum equations obtained so far in both frameworks. We first show that, in the simple case of point-like particles with only alignment interactions, the continuum equations obtained have the same structure in both cases. We further study, in the Smoluchowski framework, the case where an interaction force is added on top of the aligning torque. This clarifies the origin of the additional terms obtained in previous works. Our observations lead us to emphasize the need for a more involved closure scheme than the standard normal form of the distribution when dealing with active systems.
Fu, Weihua; Zhou, Zhansong; Liu, Shijian; Li, Qianwei; Yao, Jiwei; Li, Weibing; Yan, Junan
2014-01-01
Chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) is one of the risk factors of impaired male fertility potential. Studies have investigated the effect of CP/CPPS on several semen parameters but have shown inconsistent results. Hence, we performed a systematic literature review and meta-analysis to assess the association between CP/CPPS and basic semen parameters in adult men. Systematic literature searches were conducted with PubMed, EMBASE and the Cochrane Library up to August 2013 for case-control studies that involved the impact of CP/CPPS on semen parameters. Meta-analysis was performed with Review Manager and Stata software. Standardized mean differences (SMD) of semen parameters were identified with 95% confidence intervals (95% CI) in a random effects model. Twelve studies were identified, including 999 cases of CP/CPPS and 455 controls. Our results illustrated that the sperm concentration and the percentage of progressively motile sperm and morphologically normal sperm from patients with CP/CPPS were significantly lower than in controls (SMD (95% CI) -14.12 (-21.69, -6.63), -5.94 (-8.63, -3.25) and -8.26 (-11.83, -4.66), respectively). However, semen volume in the CP/CPPS group was higher than in the control group (SMD (95% CI) 0.50 (0.11, 0.89)). There was no significant effect of CP/CPPS on the total sperm count, sperm total motility, and sperm vitality. The present study illustrates that there was a significant negative effect of CP/CPPS on sperm concentration, sperm progressive motility, and normal sperm morphology. Further studies with larger sample sizes are needed to better illuminate the negative impact of CP/CPPS on semen parameters.
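The standardized mean difference pooled in this meta-analysis is conventionally of the following form (a common Cohen's-d-type expression; the review may use a bias-corrected variant such as Hedges' g):

\[
\mathrm{SMD} = \frac{\bar{x}_{\mathrm{cases}} - \bar{x}_{\mathrm{controls}}}{s_{\mathrm{pooled}}}, \qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}} .
\]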
The Value of Data and Metadata Standardization for Interoperability in Giovanni
NASA Astrophysics Data System (ADS)
Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.
2017-12-01
Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
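As an illustration of what such an internal convention can look like, the sketch below writes a CF-style variable with predictable dimension names plus extra provenance attributes; the variable, dimension and attribute names here are hypothetical and are not Giovanni's actual internal schema:

```python
# Write a NetCDF file with standardized dimension names and machine-readable
# provenance metadata of the kind the abstract describes (names are illustrative).
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "AOD_550nm": (
            ("time", "lat", "lon"),
            np.zeros((1, 180, 360), dtype="float32"),
            {
                "long_name": "Aerosol optical depth at 550 nm",
                "units": "1",
                # provenance attributes beyond the CF Conventions (hypothetical names)
                "source_product": "EXAMPLE_PRODUCT",
                "product_version": "7",
            },
        )
    },
    coords={
        "time": ("time", [np.datetime64("2017-01-01")]),
        "lat": ("lat", np.linspace(-89.5, 89.5, 180), {"units": "degrees_north"}),
        "lon": ("lon", np.linspace(-179.5, 179.5, 360), {"units": "degrees_east"}),
    },
)
ds.to_netcdf("standardized_parameter.nc")
```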
Updating national standards for drinking-water: a Philippine experience.
Lomboy, M; Riego de Dios, J; Magtibay, B; Quizon, R; Molina, V; Fadrilan-Camacho, V; See, J; Enoveso, A; Barbosa, L; Agravante, A
2017-04-01
The latest version of the Philippine National Standards for Drinking-Water (PNSDW) was issued in 2007 by the Department of Health (DOH). Due to several issues and concerns, the DOH decided to make an update which is relevant and necessary to meet the needs of the stakeholders. As an output, the water quality parameters are now categorized into mandatory, primary, and secondary. The ten mandatory parameters are core parameters which all water service providers nationwide are obligated to test. These include thermotolerant coliforms or Escherichia coli, arsenic, cadmium, lead, nitrate, color, turbidity, pH, total dissolved solids, and disinfectant residual. The 55 primary parameters are site-specific and can be adopted as enforceable parameters when developing new water sources or when the existing source is at high risk of contamination. The 11 secondary parameters include operational parameters and those that affect the esthetic quality of drinking-water. In addition, the updated PNSDW include new sections: (1) reporting and interpretation of results and corrective actions; (2) emergency drinking-water parameters; (3) proposed Sustainable Development Goal parameters; and (4) standards for other drinking-water sources. The lessons learned and insights gained from the updating of standards are likewise incorporated in this paper.
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case when the transformation parameters are large, and no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using the least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
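The 7-parameter similarity (Helmert) transformation referred to above has the standard form shown below; the 12-parameter affine case replaces the scaled rotation by a general 3×3 matrix:

\[
\mathbf{x}_{t} = \mathbf{t} + \mu\,\mathbf{R}(\alpha,\beta,\gamma)\,\mathbf{x}_{s},
\]

where \(\mathbf{t}\) is the translation vector, μ the scale factor, and \(\mathbf{R}\) the rotation matrix built from the three rotation angles; in the WTLS formulation both \(\mathbf{x}_{s}\) and \(\mathbf{x}_{t}\) carry observational errors.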
Piazze, Juan; Dillon, Kathleen Comalli; Albana, Cerekja
2012-01-01
Summary. Objective: to verify whether there are other than transitory effects of antenatal betamethasone (administered for fetal lung maturity [FLM] enhancement) on fetal heart rate (FHR) variability detected by computerized cardiotocography (cCTG) in cases where formerly steroid-treated fetuses reached term. Materials and methods: cCTG traces of one hundred sixty-four women (study group) exposed to antenatal betamethasone for risk of preterm delivery in the third trimester were compared to controls (pregnancies with risk of preterm labour in the same period as the cases, but with no steroid administration). cCTG was performed weekly on the standard schedule once pregnancies reached term, from 37-40 weeks' gestation, for cases and controls. Results: regarding the cCTG data at term for cases and controls, no significant difference was found for FHR, Acc (accelerations) 10 min, and FM (fetal movements) between groups. LV (low variation)/min and LV/msec were absent among the cCTG parameters of fetuses in the study group. Instead, for all weeks studied (37 to 40), the cCTG parameters HV (high variation)/msec, STV (short term variation)/msec, and Acc 15 were higher in cases with respect to controls. Conclusion: interestingly, maternal corticosteroid administration may be related to higher fetal reactivity when fetuses exposed to steroid therapy reach term. Our observation may help in the interpretation of a "more reactive" CTG trace in babies whose mothers previously received steroid therapy for FLM enhancement. PMID:22905307
Early outcome for the primary arterial switch operation beyond the age of 3 weeks.
Ismail, Sameh R; Kabbani, Mohamed S; Najm, Hani K; Abusuliman, Riyadh M; Elbarbary, Mahmoud
2010-07-01
The arterial switch operation (ASO) for neonates is the standard management for transposition of the great arteries (TGA) with an intact ventricular septum (IVS). Patients presenting for late ASO are at risk due to the possibility of left ventricle (LV) involution. This study aimed to assess the early postoperative course and outcome for children with TGA/IVS and a still-conditioned LV presenting for late primary ASO. A retrospective study of all TGA/IVS patients who underwent a primary ASO between March 2002 and March 2008 was conducted. The cases were divided into two groups. Group A included all the cases of early ASO repaired before the age of 3 weeks, whereas group B included all the preselected cases of late ASO repaired after the age of 3 weeks. The demographics, intensive care unit (ICU) parameters, complications, and short-term outcomes of the two groups were compared. The study enrolled 91 patients: 64 patients (70%) in group A and 27 patients (30%) in group B. The mean age was 11 +/- 4 days in group A and 37 +/- 17 days in group B (P < 0.001). The two groups showed no significant statistical differences in ICU parameters, complications, or mortality. For patients with TGA/IVS, ASO still can be tolerated beyond the first month of life in selected cases. Provided the LV is still conditioned, age should not be a limitation for ASO.
Tam Tam, Kiran Babu; Dozier, James; Martin, James Nello
2012-04-01
A systematic review of the literature was conducted to answer the following question: are there enhancements to standard peripartum hysterectomy technique that minimize unintentional urinary tract (UT) injury in pregnancies complicated by invasive placental attachment (INPLAT)? A PubMed search of English language articles on INPLAT published by June 2010 was conducted. Data regarding the following parameters was required for inclusion in the quantitative analysis of the review's objective: (1) type of INPLAT, (2) details pertaining to medical and surgical management of INPLAT, and (3) complications, if any, associated with management. An attempt was made to identify approaches that may lower the risk of unintentional UT injury. Most cases (285 of 292) were managed by hysterectomy. There were 83 (29%) cases of unintentional UT injury. Antenatal diagnosis of INPLAT lowered the rate of UT injury (39% vs. 63%; P = 0.04). Information regarding surgical technique or medical management was available for 90 cases; 14 of these underwent a standard hysterectomy technique. Methotrexate treatment and 11 modifications of the surgical technique were associated with 16% unintentional UT injury rate as opposed to 57% for standard hysterectomy (P = 0.002). The use of ureteral stents reduced risk of urologic injury (P = 0.01). Multiple logistic regression analysis identified antenatal diagnosis as the significant predictor of an intact UT. Antenatal diagnosis of INPLAT is paramount to minimize UT injury. Utilization of management modifications identified in this review may reduce urologic injury due to INPLAT.
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
Smith, Justin D.
2013-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
The application of robotics to microlaryngeal laser surgery.
Buckmire, Robert A; Wong, Yu-Tung; Deal, Allison M
2015-06-01
To evaluate the performance of human subjects, using a prototype robotic micromanipulator controller in a simulated, microlaryngeal operative setting. Observational cross-sectional study. Twenty-two human subjects with varying degrees of laser experience performed CO2 laser surgical tasks within a simulated microlaryngeal operative setting using an industry standard manual micromanipulator (MMM) and a prototype robotic micromanipulator controller (RMC). Accuracy, repeatability, and ablation consistency measures were obtained for each human subject across both conditions and for the preprogrammed RMC device. Using the standard MMM, surgeons with >10 previous laser cases performed superior to subjects with fewer cases on measures of error percentage and cumulative error (P = .045 and .03, respectively). No significant differences in performance were observed between subjects using the RMC device. In the programmed (P/A) mode, the RMC performed equivalently or superiorly to experienced human subjects on accuracy and repeatability measures, and nearly an order of magnitude better on measures of ablation consistency. The programmed RMC performed significantly better for repetition error when compared to human subjects with <100 previous laser cases (P = .04). Experienced laser surgeons perform better than novice surgeons on tasks of accuracy and repeatability using the MMM device but roughly equivalently using the novel RMC. Operated in the P/A mode, the RMC performs equivalently or superior to experienced laser surgeons using the industry standard MMM for all measured parameters, and delivers an ablation consistency nearly an order of magnitude better than human laser operators. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Britto, Ingrid Schwach Werneck; Sananes, Nicolas; Olutoye, Oluyinka O; Cass, Darrell L; Sangi-Haghpeykar, Haleh; Lee, Timothy C; Cassady, Christopher I; Mehollin-Ray, Amy; Welty, Stephen; Fernandes, Caraciolo; Belfort, Michael A; Lee, Wesley; Ruano, Rodrigo
2015-10-01
The purpose of this study was to evaluate the impact of standardization of the lung-to-head ratio measurements in isolated congenital diaphragmatic hernia on prediction of neonatal outcomes and reproducibility. We conducted a retrospective cohort study of 77 cases of isolated congenital diaphragmatic hernia managed in a single center between 2004 and 2012. We compared lung-to-head ratio measurements that were performed prospectively in our institution without standardization to standardized measurements performed according to a defined protocol. The standardized lung-to-head ratio measurements were statistically more accurate than the nonstandardized measurements for predicting neonatal mortality (area under the receiver operating characteristic curve, 0.85 versus 0.732; P = .003). After standardization, there were no statistical differences in accuracy between measurements regardless of whether we considered observed-to-expected values (P > .05). Standardization of the lung-to-head ratio did not improve prediction of the need for extracorporeal membrane oxygenation (P> .05). Both intraoperator and interoperator reproducibility were good for the standardized lung-to-head ratio (intraclass correlation coefficient, 0.98 [95% confidence interval, 0.97-0.99]; bias, 0.02 [limits of agreement, -0.11 to +0.15], respectively). Standardization of lung-to-head ratio measurements improves prediction of neonatal outcomes. Further studies are needed to confirm these results and to assess the utility of standardization of other prognostic parameters.
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on the Hellinger-Reissner variational principle and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.
Benchmarking antibiotic use in Finnish acute care hospitals using patient case-mix adjustment.
Kanerva, Mari; Ollgren, Jukka; Lyytikäinen, Outi
2011-11-01
It is difficult to draw conclusions about the prudence of antibiotic use in different hospitals by directly comparing usage figures. We present a patient case-mix adjustment model of antibiotic use to rank hospitals while taking patient characteristics into account. Data on antibiotic use were collected during the national healthcare-associated infection (HAI) prevalence survey in 2005 in Finland in all 5 tertiary care, all 15 secondary care and 10 (25% of 40) other acute care hospitals. The use of antibiotics was measured using use-days/100 patient-days during a 7-day period and the prevalence of patients receiving at least two antimicrobials during the study day. Case-mix-adjusted antibiotic use was calculated using multivariate models and an indirect standardization method. Parameters in the model included age, sex, severity of underlying diseases, intensive care, haematology, preceding surgery, respirator, central venous and urinary catheters, community-associated infection, HAI and contact isolation due to methicillin-resistant Staphylococcus aureus. The ranking order changed by one position in 12 (40%) hospitals and by more than two positions in 13 (43%) hospitals when the case-mix-adjusted figures were compared with those observed. In 24 hospitals (80%), the observed antibiotic use density was lower than expected from the case-mix-adjusted use density. The patient case-mix adjustment of antibiotic use ranked the hospitals differently from the ranking according to observed use, and may be a useful tool for benchmarking hospital antibiotic use. However, the best set of easily and widely available parameters that would describe both patient material and hospital activities remains to be determined.
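As a rough illustration of the indirect standardization step described above (not the authors' implementation; the column names and patient data are hypothetical), a case-mix model can be fitted at the patient level and each hospital's observed use compared with its case-mix-expected use:

```python
# Minimal sketch of patient case-mix adjustment by indirect standardization.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Patient-level data: binary outcome, case-mix covariates, hospital identifier.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "on_antibiotics":    rng.binomial(1, 0.4, 600),
    "age":               rng.normal(60, 15, 600),
    "intensive_care":    rng.binomial(1, 0.1, 600),
    "preceding_surgery": rng.binomial(1, 0.3, 600),
    "hospital":          rng.choice(["A", "B", "C"], 600),
})

covariates = ["age", "intensive_care", "preceding_surgery"]
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["on_antibiotics"])
df["expected"] = model.predict_proba(df[covariates])[:, 1]

# Indirect standardization: observed vs. case-mix-expected use per hospital.
summary = df.groupby("hospital").agg(observed=("on_antibiotics", "sum"),
                                     expected=("expected", "sum"))
summary["O_E_ratio"] = summary["observed"] / summary["expected"]
print(summary.sort_values("O_E_ratio"))  # ranking after case-mix adjustment
```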
Finite density two color chiral perturbation theory revisited
NASA Astrophysics Data System (ADS)
Adhikari, Prabal; Beleznay, Soma B.; Mannarelli, Massimo
2018-06-01
We revisit two-color, two-flavor chiral perturbation theory at finite isospin and baryon density. We investigate the phase diagram obtained varying the isospin and the baryon chemical potentials, focusing on the phase transition occurring when the two chemical potentials are equal and exceed the pion mass (which is degenerate with the diquark mass). In this case, there is a change in the order parameter of the theory that does not lend itself to the standard picture of first order transitions. We explore this phase transition both within a Ginzburg-Landau framework valid in a limited parameter space and then by inspecting the full chiral Lagrangian in all the accessible parameter space. Across the phase transition between the two broken phases the order parameter becomes an SU(2) doublet, with the ground state fixing the expectation value of the sum of the magnitude squared of the pion and the diquark fields. Furthermore, we find that the Lagrangian at equal chemical potentials is invariant under global SU(2) transformations and construct the effective Lagrangian of the three Goldstone degrees of freedom by integrating out the radial fluctuations.
NASA Technical Reports Server (NTRS)
Kiselyov, Oleg; Fisher, Paul
1995-01-01
This paper presents a case study of the integration of compression techniques within a satellite image communication component of an actual tactical weather information dissemination system. The paper describes the history and requirements of the project, and discusses the information flow, request/reply protocols, error handling, and, especially, system integration issues: specification of compression parameters and the place and time for compressor/decompressor plug-ins. A case for non-uniform compression of satellite imagery is presented, and its implementation in the current system is demonstrated. The paper gives special attention to the challenges of moving the system towards the use of standard, non-proprietary protocols (SMTP and HTTP) and new technologies (OpenDoc), and reports the ongoing work in this direction.
Relationship between primary lesion metabolic parameters and clinical stage in lung cancer.
Sahiner, I; Atasever, T; Akdemir, U O; Ozturk, C; Memis, L
2013-01-01
The relationship of PET-derived parameters such as maximum standardized uptake value (SUVmax), total lesion glycolysis (TLG) and metabolic tumor volume (MTV) with clinical stage in lung cancer, and the correlation between the SUVmax of the primary tumor and that of metastatic lesions, were studied in lung cancer patients. Patients with lung cancer who were referred for FDG PET/CT were included in the study. PET/CT scans and pathology reports of 168 patients were assessed. A total of 146 (86.9%) of these patients had a diagnosis of non-small cell lung cancer (NSCLC) and 22 (13.1%) had small cell lung cancer (SCLC). Metabolic parameters such as SUVmax, TLG and MTV showed significant differences across all stages in NSCLC patients (p<0.001). However, after tumor sizes <25 mm were excluded, no significant differences in SUVmax between stages were observed. No significant differences in these metabolic parameters were found between limited and extensive disease SCLC. Tumor diameter correlated with primary tumor SUVmax, and significant correlations between primary lesion SUVmax and metastatic lesion SUVmax were found. Although differences in these indices were found between stages of NSCLC cases, the SUVmax differences between stages seem to be caused by underestimation of SUVmax in small lesions. Other glucose metabolism indices such as MTV and TLG show promising results in terms of prognostic stratification. Future studies are needed for a better understanding of their contribution to clinical cases. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.
Constraining brane tension using rotation curves of galaxies
NASA Astrophysics Data System (ADS)
García-Aspeitia, Miguel A.; Rodríguez-Meza, Mario A.
2018-04-01
We present in this work a study of brane theory phenomenology focusing on the brane tension parameter, which is the main observable of the theory. We show the modifications stemming from the presence of branes in the rotation curves of spiral galaxies for three well-known dark matter density profiles: the pseudo-isothermal, Navarro-Frenk-White and Burkert profiles. We estimate the brane tension parameter using a sample of high-resolution observed rotation curves of low surface brightness spiral galaxies and a synthetic rotation curve for the three density profiles. The fits obtained with the brane theory model of the rotation curves are also compared with standard Newtonian models. We find that the Navarro-Frenk-White model prefers lower values of the brane tension parameter, on average λ ∼ 0.73 × 10^-3 eV^4, therefore showing clear brane effects. The Burkert case prefers higher values of the tension parameter, on average λ ∼ 0.93-46 eV^4, i.e., negligible brane effects, whereas the pseudo-isothermal profile is an intermediate case. Due to the low densities found in the galactic medium it is almost impossible to find evidence of the presence of extra dimensions. In this context, our results give weaker bounds on the brane tension than bounds found previously, such as the lower value obtained for dwarf stars described by a polytropic equation of state, λ ≈ 10^4 MeV^4.
NASA Technical Reports Server (NTRS)
Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue
2009-01-01
We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indexes, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
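A minimal sketch of a Sobol' sensitivity analysis of the kind described above, using the SALib package; the toy function and the parameter names stand in for an LSM simulation and are not taken from the study:

```python
# Sobol' first-order and total-order sensitivity indices with SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["soil_porosity", "stomatal_resistance", "rooting_depth"],  # hypothetical
    "bounds": [[0.3, 0.6], [50.0, 300.0], [0.5, 3.0]],
}

def toy_model(x):
    # Stand-in for a latent-heat simulation; includes an interaction term.
    return 2.0 * x[:, 0] + 0.01 * x[:, 1] + x[:, 0] * x[:, 2]

X = saltelli.sample(problem, 1024)      # Saltelli sampling scheme
Y = toy_model(X)
Si = sobol.analyze(problem, Y)

print("First-order indices:", Si["S1"])
print("Total-order indices:", Si["ST"])
```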
Rigolin, Gian Matteo; Cibien, Francesca; Martinelli, Sara; Formigaro, Luca; Rizzotto, Lara; Tammiso, Elisa; Saccenti, Elena; Bardi, Antonella; Cavazzini, Francesco; Ciccone, Maria; Nichele, Ilaria; Pizzolo, Giovanni; Zaja, Francesco; Fanin, Renato; Galieni, Piero; Dalsass, Alessia; Mestichelli, Francesca; Testa, Nicoletta; Negrini, Massimo; Cuneo, Antonio
2012-03-08
It is unclear whether karyotype aberrations that occur in regions uncovered by the standard fluorescence in situ hybridization (FISH) panel have prognostic relevance in chronic lymphocytic leukemia (CLL). We evaluated the significance of karyotypic aberrations in a learning cohort (LC; n = 64) and a validation cohort (VC; n = 84) of patients with chronic lymphocytic leukemia with "normal" FISH. An abnormal karyotype was found in 21.5% and 35.7% of cases in the LC and VC, respectively, and was associated with a lower immunophenotypic score (P = .030 in the LC, P = .035 in the VC), advanced stage (P = .040 in the VC), and need for treatment (P = .002 in the LC, P < .0001 in the VC). The abnormal karyotype correlated with shorter time to first treatment and shorter survival in both the LC and the VC, representing the strongest prognostic parameter. In patients with chronic lymphocytic leukemia with normal FISH, karyotypic aberrations by conventional cytogenetics with novel mitogens identify a subset of cases with adverse prognostic features.
Jangam, Chandrakant; Ramya Sanam, S; Chaturvedi, M K; Padmakar, C; Pujari, Paras R; Labhasetwar, Pawan K
2015-10-01
The present case study was undertaken to investigate the impact of on-site sanitation on groundwater quality in alluvial settings in Lucknow City, India. Groundwater samples were collected in the areas of Lucknow City where on-site sanitation systems have been implemented. The groundwater samples were analyzed for the major physicochemical parameters and fecal coliform. The results of the analysis reveal that none of the groundwater samples exceeded the Bureau of Indian Standards (BIS) limits for any of the parameters. Fecal coliform was not found in the majority of the samples, including those very close to the septic tank. The study area has a thick alluvium cover as a top layer which acts as a natural barrier against groundwater contamination from the on-site sanitation system. A t test was performed to assess the seasonal effect on groundwater quality; it indicates that there is a significant effect of season on groundwater quality in the study area.
Injectional anthrax at a Scottish district general hospital.
Inverarity, D J; Forrester, V M; Cumming, J G R; Paterson, P J; Campbell, R J; Brooks, T J G; Carson, G L; Ruddy, J P
2015-04-01
This retrospective, descriptive case-series reviews the clinical presentations and significant laboratory findings of patients diagnosed with and treated for injectional anthrax (IA) since December 2009 at Monklands Hospital in Central Scotland and represents the largest series of IA cases to be described from a single location. Twenty-one patients who fulfilled National Anthrax Control Team standardized case definitions of confirmed, probable or possible IA are reported. All cases survived and none required limb amputation in contrast to an overall mortality of 28% being experienced for this condition in Scotland. We document the spectrum of presentations of soft tissue infection ranging from mild cases which were managed predominantly with oral antibiotics to severe cases with significant oedema, organ failure and coagulopathy. We describe the surgical management, intensive care management and antibiotic management including the first description of daptomycin being used to treat human anthrax. It is noted that some people who had injected heroin infected with Bacillus anthracis did not develop evidence of IA. Also highlighted are biochemical and haematological parameters which proved useful in identifying deteriorating patients who required greater levels of support and surgical debridement.
QSPR modeling: graph connectivity indices versus line graph connectivity indices
Basak; Nikolic; Trinajstic; Amic; Beslo
2000-07-01
Five QSPR models of alkanes were reinvestigated. The properties considered were molecular surface-dependent properties (boiling points and gas chromatographic retention indices) and molecular volume-dependent properties (molar volumes and molar refractions). The vertex- and edge-connectivity indices were used as structural parameters. In each studied case we computed connectivity indices of alkane trees and alkane line graphs and searched for the optimum exponent. Models based on indices with an optimum exponent and on the standard value of the exponent were compared. Thus, for each property we generated six QSPR models (four for alkane trees and two for the corresponding line graphs). In all studied cases, QSPR models based on connectivity indices with optimum exponents have better statistical characteristics than the models based on connectivity indices with the standard value of the exponent. The comparison between models based on vertex- and edge-connectivity indices gave better models based on edge-connectivity indices in two cases (molar volumes and molar refractions) and better models based on vertex-connectivity indices in three cases (boiling points for octanes and nonanes, and gas chromatographic retention indices). Thus, it appears that the edge-connectivity index is more appropriate for structure-molecular volume property modeling and the vertex-connectivity index for structure-molecular surface property modeling. The use of line graphs did not improve the predictive power of the connectivity indices. Only in one case (boiling points of nonanes) was a better model obtained with the use of line graphs.
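For reference, the vertex-connectivity index with a variable exponent is a sum over the edges of the hydrogen-suppressed molecular graph, and the edge-connectivity index is the same quantity computed on the line graph. A minimal sketch (2-methylbutane as an example molecule; the standard exponent is -0.5):

```python
# Vertex- and edge-connectivity indices with a variable exponent.
import networkx as nx

def connectivity_index(G, exponent=-0.5):
    # Sum over edges of (deg(u) * deg(v)) ** exponent; -0.5 is the Randic value.
    return sum((G.degree(u) * G.degree(v)) ** exponent for u, v in G.edges())

# 2-methylbutane carbon skeleton: C1-C2(-C5)-C3-C4
alkane = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 5)])

vertex_chi = connectivity_index(alkane, exponent=-0.5)
edge_chi = connectivity_index(nx.line_graph(alkane), exponent=-0.5)

print(f"vertex-connectivity index: {vertex_chi:.4f}")
print(f"edge-connectivity index:   {edge_chi:.4f}")
```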
Ratzinger, Franz; Schuardt, Michael; Eichbichler, Katherina; Tsirkinidou, Irene; Bauer, Marlene; Haslacher, Helmuth; Mitteregger, Dieter; Binder, Michael; Burgmann, Heinz
2013-01-01
Physicians are regularly faced with severely ill patients at risk of developing infections. In the literature, standard care wards are often neglected, although their patients frequently suffer from a systemic inflammatory response syndrome (SIRS) of unknown origin. Fast identification of patients with infections is vital, as they immediately require appropriate therapy. Further, tools with a high negative predictive value (NPV) to exclude infection or bacteremia are important to increase the cost effectiveness of microbiological examinations and to avoid inappropriate antibiotic treatment. In this prospective cohort study, 2,384 patients with suspected infections were screened for suffering from two or more SIRS criteria on standard care wards. The infection probability score (IPS) and sepsis biomarkers with discriminatory power were assessed regarding their capacity to identify infection or bacteremia. In this cohort, finally consisting of 298 SIRS patients, the infection prevalence was 72%. Bacteremia was found in 25% of cases. For the prediction of infection, the IPS yielded 0.51 ROC-AUC (30.1% sensitivity, 64.6% specificity). Among sepsis biomarkers, lipopolysaccharide binding protein (LBP) was the best parameter with 0.63 ROC-AUC (57.5% sensitivity, 67.1% specificity). For the prediction of bacteremia, the IPS performed slightly better with a ROC-AUC of 0.58 (21.3% sensitivity, 65% specificity). Procalcitonin was the best discriminator with 0.78 ROC-AUC, 86.3% sensitivity, 59.6% specificity and 92.9% NPV. Furthermore, bilirubin and LBP (ROC-AUC: 0.65, 0.62) might also be considered useful parameters. In summary, the IPS and widely used infection parameters, including CRP or WBC, yielded poor diagnostic performance for the detection of infection or bacteremia. Additional sepsis biomarkers do not aid in discriminating inflammation from infection. For the prediction of bacteremia, procalcitonin and bilirubin were the most promising parameters, which might be used as a rule for when to take blood cultures or use nucleic acid amplification tests for microbiological diagnostics.
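A minimal sketch of the diagnostic metrics reported above (ROC-AUC, sensitivity, specificity, NPV) for a single biomarker, using synthetic data and a hypothetical cut-off rather than the study's values:

```python
# ROC-AUC and confusion-matrix-based diagnostic metrics for one biomarker.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
bacteremia = rng.binomial(1, 0.25, 300)                    # 1 = blood culture positive
pct = rng.lognormal(mean=np.where(bacteremia == 1, 0.5, -0.5), sigma=0.8)

print(f"ROC-AUC: {roc_auc_score(bacteremia, pct):.2f}")

predicted = (pct >= 0.5).astype(int)                       # hypothetical cut-off
tn, fp, fn, tp = confusion_matrix(bacteremia, predicted).ravel()
print(f"sensitivity: {tp / (tp + fn):.1%}")
print(f"specificity: {tn / (tn + fp):.1%}")
print(f"NPV:         {tn / (tn + fn):.1%}")
```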
Scaling Linguistic Characterization of Precipitation Variability
NASA Astrophysics Data System (ADS)
Primo, C.; Gutierrez, J. M.
2003-04-01
Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters. However, different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic precipitation sequence of observed precipitation is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed at that site during a period representative of the climatology. The more variable the observed weekly precipitation sequences, the more complex the obtained language. To characterize these languages, we first applied Zipf's method, obtaining scaling histograms of rank-ordered frequencies. However, to obtain significant exponents, the scaling must be maintained over several orders of magnitude, requiring long sequences of daily precipitation which are not available at particular stations. Thus this analysis is not suitable for applications involving particular stations (such as regionalization). We therefore introduce an alternative fractal method applicable to data from local stations. The so-called chaos-game method uses Iterated Function Systems (IFS) to graphically represent rainfall languages, in such a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian Peninsula, when compared with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs for a clustering regionalization method, comparing the resulting clusters.
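A minimal sketch of the Zipf-style rank-frequency step described above, applied to synthetic daily rain/no-rain data grouped into weekly binary "words"; the data and the resulting exponent are illustrative only:

```python
# Rank-frequency (Zipf) analysis of weekly binary precipitation words.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
daily_rain = rng.random(365 * 30) < 0.3          # 30 years of daily wet/dry flags

# Split the record into 7-day words such as '0110100'.
n_weeks = daily_rain.size // 7
words = ["".join(map(str, daily_rain[i * 7:(i + 1) * 7].astype(int)))
         for i in range(n_weeks)]

freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
ranks = np.arange(1, freqs.size + 1)

# Slope of the rank-frequency relation in log-log space (Zipf exponent).
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"number of distinct weekly words: {freqs.size}")
print(f"estimated Zipf exponent: {slope:.2f}")
```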
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
Dark matter direct detection of a fermionic singlet at one loop
NASA Astrophysics Data System (ADS)
Herrero-García, Juan; Molinaro, Emiliano; Schmidt, Michael A.
2018-06-01
The strong direct detection limits could be pointing to dark matter - nucleus scattering at loop level. We study in detail the prototype example of an electroweak singlet (Dirac or Majorana) dark matter fermion coupled to an extended dark sector, which is composed of a new fermion and a new scalar. Given the strong limits on colored particles from direct and indirect searches we assume that the fields of the new dark sector are color singlets. We outline the possible simplified models, including the well-motivated cases in which the extra scalar or fermion is a Standard Model particle, as well as the possible connection to neutrino masses. We compute the contributions to direct detection from the photon, the Z and the Higgs penguins for arbitrary quantum numbers of the dark sector. Furthermore, we derive compact expressions in certain limits, i.e., when all new particles are heavier than the dark matter mass and when the fermion running in the loop is light, like a Standard Model lepton. We study in detail the predicted direct detection rate and how current and future direct detection limits constrain the model parameters. In case dark matter couples directly to Standard Model leptons we find an interesting interplay between lepton flavor violation, direct detection and the observed relic abundance.
The Classification of Universes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjorken, J
2004-04-09
We define a universe as the contents of a spacetime box with comoving walls, large enough to contain essentially all phenomena that can be conceivably measured. The initial time is taken as the epoch when the lowest CMB modes undergo horizon crossing, and the final time is taken when the wavelengths of CMB photons are comparable with the Hubble scale, i.e. with the nominal size of the universe. This allows the definition of a local ensemble of similarly constructed universes, using only modest extrapolations of the observed behavior of the cosmos. We then assume that further out in spacetime, similar universes can be constructed but containing different standard model parameters. Within this multiverse ensemble, it is assumed that the standard model parameters are strongly correlated with size, i.e. with the value of the inverse Hubble parameter at the final time, in a manner as previously suggested. This allows an estimate of the range of sizes which allow life as we know it, and invites a speculation regarding the most natural distribution of sizes. If small sizes are favored, this in turn allows some understanding of the hierarchy problems of particle physics. Subsequent sections of the paper explore other possible implications. In all cases, the approach is as bottom-up and as phenomenological as possible, and suggests that theories of the multiverse so constructed may in fact lay some claim to being scientific.
NASA Astrophysics Data System (ADS)
Merka, J.; Dolan, C. F.
2015-12-01
Finding and retrieving space physics data is often a complicated task even for publicly available data sets: thousands of relatively small and many large data sets are stored in various formats and, in the better case, accompanied by at least some documentation. Virtual Heliospheric and Magnetospheric Observatories (VHO and VMO) help researchers by creating a single point of uniform discovery, access, and use of heliospheric (VHO) and magnetospheric (VMO) data. The VMO and VHO functionality relies on metadata expressed using the SPASE data model. This data model is developed by the SPASE Working Group, which is currently the only international group supporting global data management for solar and space physics. The two Virtual Observatories (VxOs) have initiated and led the development of a SPASE-related standard named SPASE Query Language (SPASEQL) to provide a standard way of submitting queries and receiving results. The VMO and VHO use SPASE and SPASEQL for searches based on various criteria such as spatial location, time of observation, measurement type, and parameter values. The parameter values are represented by their statistical estimators, calculated typically over 10-minute intervals: mean, median, standard deviation, minimum, and maximum. The use of statistical estimators enables science-driven data queries that simplify and shorten the effort to find where and/or how often a sought phenomenon is observed, as we will present.
Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
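For context, a minimal sketch of the standard approach the authors compare against (cross-validation for parameter selection followed by a separate held-out test set), using scikit-learn and synthetic data; it is not an implementation of the proposed cross-testing scheme:

```python
# Standard baseline: cross-validated parameter search plus a held-out test set.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # synthetic labels

# Setting aside a test set reduces the data available for model selection.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Cross-validation on the development set chooses the classifier parameter.
search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5).fit(X_dev, y_dev)

print("best parameter:", search.best_params_)
print("held-out test accuracy:", search.score(X_test, y_test))
```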
Evaluation of cluster expansions and correlated one-body properties of nuclei
NASA Astrophysics Data System (ADS)
Moustakidis, Ch. C.; Massen, S. E.; Panos, C. P.; Grypeos, M. E.; Antonov, A. N.
2001-07-01
Three different cluster expansions for the evaluation of correlated one-body properties of s-p and s-d shell nuclei are compared. Harmonic oscillator wave functions and Jastrow-type correlations are used, while analytical expressions are obtained for the charge form factor, density distribution, and momentum distribution by truncating the expansions and using a standard Jastrow correlation function f. The harmonic oscillator parameter b and the correlation parameter β have been determined by a least-squares fit to the experimental charge form factors in each case. The information entropy of nuclei in position space (Sr) and momentum space (Sk) according to the three methods is also calculated. It is found that the larger the entropy sum, S = Sr + Sk (the net information content of the system), the smaller the values of χ2. This indicates that maximal S is a criterion of the quality of a given nuclear model, according to the maximum entropy principle. Only two exceptions to this rule, out of many cases examined, were found. Finally, an analytic expression for the so-called "healing" or "wound" integrals is derived with the function f considered, for any state of the relative two-nucleon motion, and their values in certain cases are computed and compared.
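As an illustration of the entropy sum S = Sr + Sk (not taken from the paper), the simplest uncorrelated case of a single one-dimensional harmonic-oscillator ground state gives a value independent of the oscillator parameter b; departures from such baseline values then reflect correlation and shell effects:

```latex
% Illustrative only: 1D harmonic-oscillator ground state (hbar = 1).
% Position and momentum densities are Gaussians of reciprocal widths,
\[
\rho(x)=\frac{1}{\sqrt{\pi}\,b}\,e^{-x^{2}/b^{2}},
\qquad
n(k)=\frac{b}{\sqrt{\pi}}\,e^{-b^{2}k^{2}},
\]
% so the position- and momentum-space entropies and their sum are
\[
S_r=\tfrac{1}{2}\ln\!\left(\pi e\,b^{2}\right),
\qquad
S_k=\tfrac{1}{2}\ln\!\left(\frac{\pi e}{b^{2}}\right),
\qquad
S=S_r+S_k=1+\ln\pi\simeq 2.14 .
\]
```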
Electro-deposition painting process improvement of cab truck by Six Sigma concept
NASA Astrophysics Data System (ADS)
Kawitu, Kitiya; Chutima, Parames
2017-06-01
The case study company is a manufacturer of trucks that is currently facing high rework costs because the thickness of the electro-deposited paint (EDP) on the truck cab is lower than the standard. In addition, the process capability is very low. The Six Sigma concept, consisting of five phases (DMAIC), is applied to determine new parameter settings for each significant controllable factor. After the improvement, the EDP thickness of the truck cab increased from 17.88 μm to 20 μm (standard = 20 ± 3 μm). Moreover, the process capability indexes (Cp and Cpk) increased from 0.9 to 1.43 and from 0.27 to 1.43, respectively. This improvement could save rework costs of about 1.6M THB per year.
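A minimal sketch of the process capability indices quoted above, computed from synthetic thickness measurements against the stated 20 ± 3 μm specification:

```python
# Process capability indices Cp and Cpk from sample data.
import numpy as np

def capability_indices(samples, lsl, usl):
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)                 # spread vs. tolerance width
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)    # also penalizes off-center mean
    return cp, cpk

rng = np.random.default_rng(1)
thickness = rng.normal(loc=20.0, scale=0.7, size=200)   # synthetic EDP thickness, microns

cp, cpk = capability_indices(thickness, lsl=17.0, usl=23.0)
print(f"Cp  = {cp:.2f}")
print(f"Cpk = {cpk:.2f}")
```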
Chest CT in children: anesthesia and atelectasis.
Newman, Beverley; Krane, Elliot J; Gawande, Rakhee; Holmes, Tyson H; Robinson, Terry E
2014-02-01
There has been an increasing tendency for anesthesiologists to be responsible for providing sedation or anesthesia during chest CT imaging in young children. Anesthesia-related atelectasis noted on chest CT imaging has proven to be a common and troublesome problem, affecting image quality and diagnostic sensitivity. To evaluate the safety and effectiveness of a standardized anesthesia, lung recruitment, controlled-ventilation technique developed at our institution to prevent atelectasis for chest CT imaging in young children. Fifty-six chest CT scans were obtained in 42 children using a research-based intubation, lung recruitment and controlled-ventilation CT scanning protocol. These studies were compared with 70 non-protocolized chest CT scans under anesthesia taken from 18 of the same children, who were tested at different times, without the specific lung recruitment and controlled-ventilation technique. Two radiology readers scored all inspiratory chest CT scans for overall CT quality and atelectasis. Detailed cardiorespiratory parameters were evaluated at baseline, and during recruitment and inspiratory imaging, in 21 controlled-ventilation cases and 8 control cases. Significant differences were noted between groups for both quality and atelectasis scores, with optimal scoring demonstrated in the controlled-ventilation cases, where 70% were rated very good to excellent quality scans compared with only 24% of non-protocol cases. There was no or minimal atelectasis in 48% of the controlled-ventilation cases, whereas segmental, multisegmental or lobar atelectasis was present in 51% of non-protocol cases. No significant difference in cardiorespiratory parameters was found between controlled-ventilation and other chest CT cases, and no procedure-related adverse events occurred. Controlled-ventilation infant CT scanning under general anesthesia, utilizing intubation and recruitment maneuvers followed by chest CT scans, appears to be a safe and effective method to obtain reliable and reproducible high-quality, motion-free chest CT images in children.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, J; Thomas, E; Wu, X
2016-06-15
Purpose: Single-isocenter VMAT has been shown to be able to create high quality plans for complex intracranial multiple metastasis SRS cases. Linacs capable of the technique are typically outfitted with an MLC that consists of a combination of 5 mm and 10 mm leaves (standard) or 2.5 mm and 5 mm leaves (high-definition). In this study, we test the hypothesis that thinner collimator leaves are associated with improved plan quality. Methods: Ten multiple metastasis cases were identified and planned for VMAT SRS using a 10 MV flattening filter free beam. Plans were created for a standard (std) and a high-definition (HD) MLC. Published values for leaf transmission factor and dosimetric leaf gap were utilized. All other parameters were invariant. Conformity (plan and individual target), moderate isodose spill (V50%), and low isodose spill (mean brain dose) were selected for analysis. Results: Compared to the standard MLC, the HD-MLC improved overall plan conformity (median: Paddick CI-HD = 0.83, Paddick CI-std = 0.79; p = 0.004 and median: RTOG CI-HD = 1.18, RTOG CI-std = 1.24; p = 0.01), improved individual lesion conformity (median: Paddick CI-HD,i = 0.77, Paddick CI-std,i = 0.72; p < 0.001 and median: RTOG CI-HD,i = 1.28, RTOG CI-std,i = 1.35; p < 0.001), improved moderate isodose spill (median: V50%-HD = 37.0 cc, V50%-std = 45.7 cc; p = 0.002), and improved low dose spill (median: dmean-HD = 2.90 Gy, dmean-std = 3.19 Gy; p = 0.002). Conclusion: For the single-isocenter VMAT SRS multiple metastasis plans examined, use of the HD-MLC modestly improved conformity, moderate isodose spill, and low isodose spill compared to the standard MLC. However, in all cases we were able to generate clinically acceptable plans with the standard MLC. More work is needed to further quantify the difference in cases with higher numbers of small targets and to better understand any potential clinical significance. This research was supported in part by Varian Medical Systems.
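A minimal sketch of the two conformity indices used above; the single-lesion volumes are hypothetical and serve only to illustrate the formulas:

```python
# RTOG and Paddick conformity indices from plan volumes (ideal value 1.0).
def rtog_ci(prescription_isodose_volume, target_volume):
    # RTOG conformity index: PIV / TV.
    return prescription_isodose_volume / target_volume

def paddick_ci(target_volume, prescription_isodose_volume, tv_covered_by_piv):
    # Paddick conformity index: (TV_PIV)^2 / (TV * PIV).
    return tv_covered_by_piv ** 2 / (target_volume * prescription_isodose_volume)

tv, piv, tv_piv = 1.20, 1.45, 1.10   # hypothetical single-lesion volumes (cc)
print(f"RTOG CI    = {rtog_ci(piv, tv):.2f}")
print(f"Paddick CI = {paddick_ci(tv, piv, tv_piv):.2f}")
```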
Targeted methods for quantitative analysis of protein glycosylation
Goldman, Radoslav; Sanda, Miloslav
2018-01-01
Quantification of proteins by LC-MS/MS-MRM has become a standard method with broad projected clinical applicability. MRM quantification of protein modifications is, however, far less utilized, especially in the case of glycoproteins. This review summarizes current methods for quantitative analysis of protein glycosylation with a focus on MRM methods. We describe advantages of this quantitative approach, analytical parameters that need to be optimized to achieve reliable measurements, and point out the limitations. Differences between major classes of N- and O-glycopeptides are described and class-specific glycopeptide assays are demonstrated. PMID:25522218
Cartographic projection procedures for the UNIX environment; a user's manual
Evenden, Gerald I.
1990-01-01
A tutorial description of the general usage of the cartographic projection program proj (release 3) along with specific cartographic parameters and illustrations of the approximately 70 cartographic projections supported by the program is presented. The program is designed as a standard Unix filter utility to be employed with other programs in the generation of maps and charts and, in many cases, used in map digitizing applications. Tables and shell scripts are also provided for conversion of State Plane Coordinate Systems to and from geographic coordinates.
Regions of attraction and ultimate boundedness for linear quadratic regulators with nonlinearities
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1984-01-01
The closed-loop stability of multivariable linear time-invariant systems controlled by optimal linear quadratic (LQ) regulators is investigated for the case when the feedback loops have nonlinearities N(σ) that violate the standard stability condition σN(σ) ≥ 0.5σ². The violations of the condition are assumed to occur either (1) for values of σ away from the origin (σ = 0) or (2) for values of σ in a neighborhood of the origin. It is proved that there exists a region of attraction for case (1) and a region of ultimate boundedness for case (2), and estimates are obtained for these regions. The results provide methods for selecting the performance function parameters to design LQ regulators with better tolerance to nonlinearities. The results are demonstrated by application to the problem of attitude and vibration control of a large, flexible space antenna in the presence of actuator nonlinearities.
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
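A generic illustration of the Hessian-based route to parameter standard errors (minimize the negative log-likelihood and use the inverse Hessian at the optimum as an approximate covariance matrix); the toy normal model below is not the LBA likelihood and the sketch is not the glba implementation:

```python
# Hessian-based standard errors from a maximum likelihood fit.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=0.45, scale=0.12, size=300)   # toy "response time" data

def negloglik(theta):
    mu, log_sigma = theta
    return -np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

res = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
se = np.sqrt(np.diag(res.hess_inv))                 # approximate standard errors

for name, est, err in zip(["mu", "log_sigma"], res.x, se):
    print(f"{name}: {est:.3f} (SE {err:.3f})")
```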
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space, using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, so an upper bound on the Higgs mass translates into an upper bound on the masses of the superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, large regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR 60.4410 — How do I establish a valid parameter range if I have chosen to continuously monitor parameters? (Protection of Environment; Environmental Protection Agency; Air Programs; Standards of Performance for New Stationary Sources.)
Modelling non-linear effects of dark energy
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis
2018-04-01
We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.
Maggin, Daniel M; Swaminathan, Hariharan; Rogers, Helen J; O'Keeffe, Breda V; Sugai, George; Horner, Robert H
2011-06-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
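A minimal sketch in the spirit of the GLS approach above, fitting a baseline/treatment regression with AR(1) errors and standardizing the phase effect by the residual standard deviation; it is an illustration on synthetic data, not the authors' exact estimator:

```python
# GLS with AR(1) errors for a short baseline/treatment time series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
phase = np.r_[np.zeros(10), np.ones(10)]            # 0 = baseline, 1 = treatment
errors = np.zeros(20)
for t in range(1, 20):                              # AR(1) errors, rho = 0.4
    errors[t] = 0.4 * errors[t - 1] + rng.normal(scale=1.0)
y = 3.0 + 2.0 * phase + errors                      # observed outcome

X = sm.add_constant(phase)
fit = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)

effect_size = fit.params[1] / np.sqrt(fit.scale)    # phase shift in SD units
print(f"estimated AR(1) coefficient: {fit.model.rho[0]:.2f}")
print(f"standardized effect size:    {effect_size:.2f}")
```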
Mishra, Soumya Ranjan; Pradhan, Rudra Pratap; Prusty, B Anjan Kumar; Sahu, Sanjat Kumar
2016-07-01
An ambient air quality (AAQ) assessment was undertaken in Sukinda Valley, the chromite hub of India, and the possible correlations of meteorological variables with different air quality parameters (PM10, PM2.5, SO2, NO2 and CO) were examined. Being the fourth most polluted area in the globe, Sukinda Valley has long attracted the attention of researchers for hexavalent chromium contamination of water. The monitoring was carried out from December 2013 through May 2014 at six strategic locations in the residential and commercial areas around the mining cluster of Sukinda Valley, following the guidelines of the Central Pollution Control Board (CPCB). In addition, meteorological parameters, viz. temperature, relative humidity, wind speed, wind direction and rainfall, were also monitored. The air quality data were subjected to a general linear model (GLM) coupled with a one-way analysis of variance (ANOVA) test to assess significant differences in the concentrations of the various parameters among seasons and stations. Further, a two-tailed Pearson's correlation test helped in understanding the influence of meteorological parameters on the dispersion of pollutants in the area. All the monitored air quality parameters varied significantly among the monitoring stations, suggesting that (i) the distance of the sampling location from the mine site and other allied activities, (ii) landscape features and topography and (iii) meteorological parameters are the forcing functions. The area was heavily polluted with particulate matter, and in most cases the PM levels exceeded the National Ambient Air Quality Standards (NAAQS). The meteorological parameters seemed to play a major role in the dispersion of pollutants around the mine clusters. The role of wind direction, wind speed and temperature was apparent in the dispersion of particulate matter from its source of generation to the surrounding residential and commercial areas of the mine.
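A minimal sketch of the statistical tests described above (one-way ANOVA across stations and a Pearson correlation with a meteorological variable), using synthetic PM10 values and hypothetical station labels:

```python
# One-way ANOVA across stations and Pearson correlation with wind speed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pm10_by_station = {
    "S1": rng.normal(180, 30, 30),   # hypothetical daily PM10, ug/m3
    "S2": rng.normal(140, 25, 30),
    "S3": rng.normal(200, 35, 30),
}

# Does mean PM10 differ among monitoring stations?
f_stat, p_anova = stats.f_oneway(*pm10_by_station.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Correlation of PM10 with wind speed at one station.
wind_speed = rng.normal(2.5, 0.8, 30)
r, p_corr = stats.pearsonr(pm10_by_station["S1"], wind_speed)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3g}")
```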
NASA Astrophysics Data System (ADS)
Tiberi, Lara; Costa, Giovanni
2017-04-01
The possibility of directly associating damage with ground motion parameters is always a great challenge, in particular for civil protection authorities. Indeed, a ground motion parameter estimated in near real time that can express the damage occurring after an earthquake is fundamental for organizing first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law that correlates the parameter with the observed macroseismic intensity. The estimation is done by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the non-linear least-squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study), and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm is instead based on estimating the errors perpendicular to the line, rather than just vertically. Vertical errors are errors in the y direction only, so they concern only the dependent variable, whereas perpendicular errors take into account errors in both variables, the dependent and the independent. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as a function of I. For each law the standard deviation and R² value are estimated in order to test the quality and reliability of the relation found. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
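A minimal sketch comparing the two fitting strategies described above on synthetic data, assuming a common functional form I = a + b·log10(PGA); the actual functional form, data and assumed uncertainties used by the authors may differ:

```python
# Non-linear least squares (Levenberg-Marquardt) vs. orthogonal distance regression.
import numpy as np
from scipy.optimize import curve_fit
from scipy.odr import ODR, Model, RealData

rng = np.random.default_rng(0)
log_pga = rng.uniform(-1.0, 1.5, 80)                      # log10 of PGA (synthetic)
intensity = 3.0 + 2.0 * log_pga + rng.normal(0, 0.3, 80)  # observed intensity

# 1) NLLS with the Levenberg-Marquardt algorithm: errors in I only.
def law(x, a, b):
    return a + b * x

popt, pcov = curve_fit(law, log_pga, intensity, p0=[1.0, 1.0], method="lm")

# 2) Orthogonal distance regression: errors in both variables.
odr_model = Model(lambda beta, x: beta[0] + beta[1] * x)
odr_data = RealData(log_pga, intensity, sx=0.1, sy=0.3)   # assumed uncertainties
odr_fit = ODR(odr_data, odr_model, beta0=[1.0, 1.0]).run()

print("NLLS (a, b):", popt)
print("ODR  (a, b):", odr_fit.beta)
```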
NASA Astrophysics Data System (ADS)
Giribet, Gaston; Oliva, Julio; Tempo, David; Troncoso, Ricardo
2009-12-01
Asymptotically anti-de Sitter rotating black holes for the Bergshoeff-Hohm-Townsend massive gravity theory in three dimensions are considered. In the special case when the theory admits a unique maximally symmetric solution, apart from the mass and the angular momentum, the black hole is described by an independent “gravitational hair” parameter, which provides a negative lower bound for the mass. This bound is saturated at the extremal case, and since the temperature and the semiclassical entropy vanish, it is naturally regarded as the ground state. The absence of a global charge associated with the gravitational hair parameter reflects itself through the first law of thermodynamics in the fact that the variation of this parameter can be consistently reabsorbed by a shift of the global charges, giving further support to consider the extremal case as the ground state. The rotating black hole fits within relaxed asymptotic conditions as compared with the ones of Brown and Henneaux, such that they are invariant under the standard asymptotic symmetries spanned by two copies of the Virasoro generators, and the algebra of the conserved charges acquires a central extension. Then it is shown that Strominger’s holographic computation for general relativity can also be extended to the Bergshoeff-Hohm-Townsend theory; i.e., assuming that the quantum theory could be consistently described by a dual conformal field theory at the boundary, the black hole entropy can be microscopically computed from the asymptotic growth of the number of states according to Cardy’s formula, in exact agreement with the semiclassical result.
qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments*
Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart
2014-01-01
Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958
New axion and hidden photon constraints from a solar data global fit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinyoles, N.; Serenelli, A.; Isern, J.
2015-10-01
We present a new statistical analysis that combines helioseismology (sound speed, surface helium and convective radius) and solar neutrino observations (the 8B and 7Be fluxes) to place upper limits on the properties of non-standard weakly interacting particles. Our analysis includes theoretical and observational errors, accounts for tensions between input parameters of solar models and can be easily extended to include other observational constraints. We present two applications to test the method: the well-studied case of axions and axion-like particles and the more novel case of low-mass hidden photons. For axions we obtain an upper limit at 3σ on the axion-photon coupling constant of g_aγ < 4.1 × 10^-10 GeV^-1. For hidden photons we obtain the most restrictive upper limit available across a wide range of masses on the product of the kinetic mixing parameter and mass, χm < 1.8 × 10^-12 eV at 3σ. Both cases improve on previous solar constraints based on Standard Solar Models, showing the power of a global statistical approach.
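Schematically, a global fit of this kind builds a χ² from all observables, with combined theoretical and observational errors, and scans it over the new-physics coupling; the 3σ limit is read off where Δχ² = 9 for one parameter. The toy sketch below illustrates only that logic: the observable values, uncertainties and the quadratic response coefficients are invented placeholders, whereas the real analysis recomputes solar models for each coupling value.

```python
import numpy as np

# Placeholder observables (e.g. helioseismic and neutrino-flux ratios to their
# reference values) and their combined theoretical+observational uncertainties.
obs = np.array([1.00, 1.00, 1.00])
sigma = np.array([0.02, 0.03, 0.10])

def model(g10):
    """Toy prediction of each observable versus the coupling g
    (in units of 1e-10 GeV^-1); the response coefficients are invented."""
    resp = np.array([0.004, 0.006, 0.015])
    return 1.0 + resp * g10**2

g_grid = np.linspace(0.0, 10.0, 2001)
chi2 = np.array([np.sum(((obs - model(g)) / sigma) ** 2) for g in g_grid])
delta = chi2 - chi2.min()
limit = g_grid[np.argmax(delta > 9.0)]   # 3-sigma for one fitted parameter
print(f"toy 3-sigma upper limit: g < {limit:.2f} x 1e-10 GeV^-1")
```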
Xiong, Jianyin; Yao, Yuan; Zhang, Yinping
2011-04-15
The initial emittable concentration (C(m,0)), the diffusion coefficient (D(m)), and the material/air partition coefficient (K) are the three characteristic parameters influencing emissions of formaldehyde and volatile organic compounds (VOCs) from building materials or furniture. It is necessary to determine these parameters to understand emission characteristics and how to control them. In this paper we develop a new method, the C-history method for a closed chamber, to measure these three parameters. Compared to the available methods of determining the three parameters described in the literature, our approach has the following salient features: (1) the three parameters can be simultaneously obtained; (2) it is time-saving, generally taking less than 3 days for the cases studied (the available methods tend to need 7-28 days); (3) the maximum relative standard deviations of the measured C(m,0), D(m) and K are 8.5%, 7.7%, and 9.8%, respectively, which are acceptable for engineering applications. The new method was validated by using the characteristic parameters determined in the closed chamber experiment to predict the observed emissions in a ventilated full scale chamber experiment, proving that the approach is reliable and convincing. Our new C-history method should prove useful for rapidly determining the parameters required to predict formaldehyde and VOC emissions from building materials as well as for furniture labeling.
Ultrasound-guided thoracenthesis: the V-point as a site for optimal drainage positioning.
Zanforlin, A; Gavelli, G; Oboldi, D; Galletti, S
2013-01-01
In recent years the use of lung ultrasound has been increasing in the evaluation of pleural effusions, because it makes follow-up easier and drainage more efficient by providing guidance on the most appropriate sampling site. However, no standardized approach for ultrasound-guided thoracentesis is currently available. Our aim was to evaluate our usual ultrasonographic landmark as a possible standard site for thoracentesis by assessing its value in terms of safety and efficiency (success at the first attempt, drainage as complete as possible). Hospitalized patients with non-organized pleural effusion underwent thoracentesis after ultrasound evaluation. The point showing the maximum thickness of the effusion on ultrasound (the "V-point") was chosen for drainage. Forty-five ultrasound-guided thoracenteses were performed over 12 months. In 22 cases there were no complications; there were 16 cases of cough, 2 cases of mild dyspnea without desaturation, and 4 cases of mild pain; 2 complications requiring medical intervention occurred. No case of pneumothorax related to the procedure was detected. In all cases drainage was successful on the first attempt. The values of maximum thickness at the V-point (min 3.4 cm, max 15.3 cm) and drained fluid volume (min 70 ml, max 2000 ml) showed a significant correlation (p < 0.0001). When the thickness was greater than or equal to 9.9 cm, the drained volume was always more than 1000 ml. Measuring the maximum thickness at the V-point makes ultrasound-guided thoracentesis highly efficient and allows the amount of fluid in the pleural cavity to be estimated. It is also an easy parameter that makes the proposed method quick to learn and apply.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
Linear and non-linear regression methods for selecting the optimum isotherm were compared using experimental equilibrium data for basic red 9 sorption by activated carbon. The r(2) value was used to select the best-fit linear theoretical isotherm. For the non-linear regression method, six error functions, namely the coefficient of determination (r(2)), the hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), the sum of the errors squared (ERRSQ) and the sum of the absolute errors (EABS), were used to estimate the parameters of the two- and three-parameter isotherms and to identify the optimum isotherm. Non-linear regression was found to be a better way to obtain the isotherm parameters and the optimum isotherm. For the two-parameter isotherms, MPSD was the best error function for minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. For the three-parameter isotherms, r(2) was the best error function for minimizing the error distribution between the experimental equilibrium data and the theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified against the experimental data when selecting the optimum isotherm. A coefficient of non-determination, K(2), was explained and found to be very useful in identifying the best error function when selecting the optimum isotherm.
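As a rough illustration of the workflow described above (not the authors' code), the sketch below fits the two-parameter Langmuir isotherm by non-linear regression and evaluates two of the error functions mentioned, r(2) and MPSD; the equilibrium data are invented for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: liquid-phase concentration Ce (mg/L)
# and solid-phase uptake qe (mg/g); the values are illustrative only.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = np.array([12.0, 21.0, 33.0, 46.0, 55.0, 60.0])

def langmuir(Ce, qm, KL):
    """Two-parameter Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Non-linear regression, the approach favoured in the study above.
popt, _ = curve_fit(langmuir, Ce, qe, p0=[60.0, 0.05])
q_pred = langmuir(Ce, *popt)

# Two of the error functions listed in the abstract.
errsq = np.sum((qe - q_pred) ** 2)                                   # ERRSQ
r2 = 1.0 - errsq / np.sum((qe - qe.mean()) ** 2)                     # r(2)
n, p = len(qe), 2
mpsd = 100.0 * np.sqrt(np.sum(((qe - q_pred) / qe) ** 2) / (n - p))  # MPSD

print(f"qm = {popt[0]:.1f} mg/g, KL = {popt[1]:.4f} L/mg, r2 = {r2:.4f}, MPSD = {mpsd:.2f}")
```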
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
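The toy Monte Carlo below (not the authors' code; the coil counts and SNR values are arbitrary) illustrates why the noise model matters: the sum-of-squares magnitude of m complex coil channels follows a noncentral χ distribution with 2m degrees of freedom, and its mean and spread depart from the Gaussian expectation at low SNR, increasingly so as m grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def multicoil_magnitude(signal, sigma, m, n_samples):
    """Sum-of-squares magnitude of m complex coil channels.
    The true signal is placed in the real part of one channel; with i.i.d.
    Gaussian noise in all 2*m real channels the magnitude is noncentral-chi
    distributed with 2*m degrees of freedom."""
    noise = rng.normal(0.0, sigma, size=(n_samples, 2 * m))
    noise[:, 0] += signal
    return np.sqrt(np.sum(noise ** 2, axis=1))

sigma, n = 1.0, 200_000
for snr in (1.0, 3.0, 10.0):
    for m in (1, 4, 8, 32):
        mag = multicoil_magnitude(snr * sigma, sigma, m, n)
        # A Gaussian noise model would predict mean ~ snr*sigma and SD ~ sigma.
        print(f"SNR={snr:4.1f}  m={m:2d}  mean={mag.mean():6.3f}  sd={mag.std():5.3f}")
```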
NASA Astrophysics Data System (ADS)
Owens, A. R.; Kópházi, J.; Eaton, M. D.
2017-12-01
In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants result in better conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
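Per element, the computation described above boils down to a small generalised eigenvalue problem. A minimal sketch follows; the matrices used here are random placeholders, whereas in practice F and V would be the face and volumetric stiffness matrices assembled for the element and basis in question, and the regularising shift is only one possible way to handle the semi-definiteness of V.

```python
import numpy as np
from scipy.linalg import eigh

def trace_constant(F, V, shift=1e-12):
    """Largest generalised eigenvalue lambda of F x = lambda V x.
    F: face (trace-term) stiffness matrix, V: volumetric stiffness matrix.
    V typically has constants in its kernel, so a small diagonal shift is
    added for numerical stability."""
    Vs = V + shift * np.eye(V.shape[0])
    return eigh(F, Vs, eigvals_only=True)[-1]

# Placeholder symmetric positive (semi-)definite matrices standing in for one
# element's face and volume contributions.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)); V = A @ A.T
B = rng.normal(size=(3, 6)); F = B.T @ B

C_trace = trace_constant(F, V)
penalty = 2.0 * C_trace   # e.g. a safety factor times the trace constant
print(C_trace, penalty)
```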
NASA Astrophysics Data System (ADS)
Forooghi, Pourya; Stroh, Alexander; Schlatter, Philipp; Frohnapfel, Bettina
2018-04-01
Direct numerical simulations are used to investigate turbulent flow in rough channels, in which topographical parameters of the rough wall are systematically varied at a fixed friction Reynolds number of 500, based on a mean channel half-height h and friction velocity. The utilized roughness generation approach allows independent variation of moments of the surface height probability distribution function [thus root-mean-square (rms) surface height, skewness, and kurtosis], surface mean slope, and standard deviation of the roughness peak sizes. Particular attention is paid to the effect of the parameter Δ defined as the normalized height difference between the highest and lowest roughness peaks. This parameter is used to understand the trends of the investigated flow variables with departure from the idealized case where all roughness elements have the same height (Δ =0 ). All calculations are done in the fully rough regime and for surfaces with high slope (effective slope equal to 0.6-0.9). The rms roughness height is fixed for all cases at 0.045 h and the skewness and kurtosis of the surface height probability density function vary in the ranges -0.33 to 0.67 and 1.9 to 2.6, respectively. The goal of the paper is twofold: first, to investigate the possible effect of topographical parameters on the mean turbulent flow, Reynolds, and dispersive stresses particularly in the vicinity of the roughness crest, and second, to investigate the possibility of using the wall-normal turbulence intensity as a physical parameter for parametrization of the flow. Such a possibility, already suggested for regular roughness in the literature, is here extended to irregular roughness.
Yamashita, Shozo; Yokoyama, Kunihiko; Onoguchi, Masahisa; Yamamoto, Haruki; Hiko, Shigeaki; Horita, Akihiro; Nakajima, Kenichi
2014-01-01
Deep-inspiration breath-hold (DIBH) PET/CT with short-time acquisition and respiratory-gated (RG) PET/CT are performed for pulmonary lesions to reduce respiratory motion artifacts and to obtain a more accurate standardized uptake value (SUV). DIBH PET/CT demonstrates significant advantages in terms of rapid examination, good CT image quality and low radiation exposure. On the other hand, the image quality of DIBH PET is generally inferior to that of RG PET because the short acquisition time results in a poor signal-to-noise ratio. In this study, RG PET was regarded as the gold standard, and the lesion detectability of DIBH PET was compared with that of RG PET, each reconstructed with its own optimal parameters. In the phantom study, the optimal reconstruction parameters for DIBH and RG PET were determined. In the clinical study, 19 cases were examined using these optimal reconstruction parameters. In the phantom study, the optimal reconstruction parameters for DIBH and RG PET were different: the DIBH PET parameters could be obtained by reducing the number of subsets used for RG PET while keeping the number of iterations fixed. In the clinical study, a high correlation in maximum SUV was observed between the DIBH and RG PET studies. The clinical result was consistent with that of the air-surrounded phantom study, since most of the lesions were located in regions of low pulmonary radioactivity. DIBH PET/CT may be the most practical method, and the first choice, to reduce respiratory motion artifacts if the detectability of DIBH PET is equivalent to that of RG PET. Although DIBH PET may be limited by a suboptimal signal-to-noise ratio, most lesions surrounded by low background radioactivity provided nearly equivalent image quality between the DIBH and RG PET studies when the respective optimal reconstruction parameters were used.
The role of ultrasound guidance in pediatric caudal block
Erbüyün, Koray; Açıkgöz, Barış; Ok, Gülay; Yılmaz, Ömer; Temeltaş, Gökhan; Tekin, İdil; Tok, Demet
2016-01-01
Objectives: To compare procedure time, possible complications, post-operative pain levels, additional analgesic use, and nurse satisfaction in ultrasonography-guided and standard caudal block applications. Methods: This retrospective study was conducted in Celal Bayar University Hospital, Manisa, Turkey, between January and December 2014, and included 78 pediatric patients. Caudal block was applied to 2 different groups: one with ultrasound guidance, and the other using the standard method. Results: The procedure time was significantly shorter in the standard application group compared with the ultrasound-guided group (p=0.020). Wong-Baker FACES Pain Rating Scale values obtained at the 90th minute were statistically lower in the standard application group compared with the ultrasound-guided group (p=0.035). No statistically significant difference was found in the other parameters between the 2 groups. The shorter procedure time in the standard application group should not be considered a decisive advantage by pediatric anesthesiologists, because the difference amounted to only seconds. Conclusion: Ultrasound guidance for caudal block applications would neither increase nor decrease the success of the treatment. However, ultrasound guidance may be needed in cases where identification of the sacral anatomy, especially by palpation, is difficult. PMID:26837396
The need for LWR metrology standardization: the imec roughness protocol
NASA Astrophysics Data System (ADS)
Lorusso, Gian Francesco; Sutani, Takumichi; Rutigliani, Vito; van Roey, Frieda; Moussa, Alain; Charley, Anne-Laure; Mack, Chris; Naulleau, Patrick; Constantoudis, Vassilios; Ikota, Masami; Ishimoto, Toru; Koshihara, Shunsuke
2018-03-01
As semiconductor technology keeps moving forward, undeterred by the many challenges ahead, one specific deliverable is capturing the attention of many experts in the field: Line Width Roughness (LWR) specifications are expected to be less than 2nm in the near term, and to drop below 1nm in just a few years. This is a daunting challenge and engineers throughout the industry are trying to meet these targets using every means at their disposal. However, although current efforts are surely admirable, we believe they are not enough. The fact is that a specification has a meaning only if there is an agreed methodology to verify if the criterion is met or not. Such a standardization is critical in any field of science and technology and the question that we need to ask ourselves today is whether we have a standardized LWR metrology or not. In other words, if a single reference sample were provided, would everyone measuring it get reasonably comparable results? We came to realize that this is not the case and that the observed spread in the results throughout the industry is quite large. In our opinion, this makes the comparison of LWR data among institutions, or to a specification, very difficult. In this paper, we report the spread of measured LWR data across the semiconductor industry. We investigate the impact of image acquisition, measurement algorithm, and frequency analysis parameters on LWR metrology. We review critically some of the International Technology Roadmap for Semiconductors (ITRS) metrology guidelines (such as measurement box length larger than 2μm and the need to correct for SEM noise). We compare the SEM roughness results to AFM measurements. Finally, we propose a standardized LWR measurement protocol - the imec Roughness Protocol (iRP) - intended to ensure that every time LWR measurements are compared (from various sources or to specifications), the comparison is sensible and sound. We deeply believe that the industry is at a point where it is imperative to guarantee that when talking about a critical parameter such as LWR, everyone speaks the same language, which is not currently the case.
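To make the SEM-noise-correction point concrete, here is a generic sketch (explicitly not the iRP itself) of the widely used idea of estimating a white noise floor from the high-frequency end of the power spectral density and subtracting it before quoting a 3σ roughness; the noise-fraction cut-off and the synthetic data are arbitrary choices.

```python
import numpy as np

def unbiased_lwr(widths, noise_fraction=0.25):
    """3-sigma line-width roughness with a crude white-noise-floor subtraction.
    widths: line width sampled at successive scan lines along the feature (nm).
    The per-frequency power is normalised so it sums to the signal variance;
    the noise floor is the mean power over the highest `noise_fraction` of
    frequencies and is subtracted from every non-DC bin. A generic heuristic,
    not the imec Roughness Protocol."""
    w = np.asarray(widths, dtype=float)
    w = w - w.mean()
    n = len(w)
    power = np.abs(np.fft.rfft(w)) ** 2 / n**2
    power[1:] *= 2.0                      # fold in the negative frequencies
    if n % 2 == 0:
        power[-1] /= 2.0                  # the Nyquist bin is not duplicated
    biased_var = power.sum()              # equals w.var() by Parseval's theorem
    freqs = np.fft.rfftfreq(n)
    hi = freqs >= (1.0 - noise_fraction) * freqs.max()
    noise_var = power[hi].mean() * (len(power) - 1)
    return 3.0 * np.sqrt(max(biased_var - noise_var, 0.0))

# Synthetic check: smooth (correlated) roughness plus white measurement noise.
rng = np.random.default_rng(0)
rough = np.convolve(rng.normal(0.0, 1.0, 2048), np.ones(16) / 16.0, mode="same")
print(unbiased_lwr(rough + rng.normal(0.0, 0.5, 2048)))
```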
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-21
...- 795, and the American Petroleum Institute (API) 5L specifications and meeting the physical parameters... standard, line, or pressure pipe applications and meeting the physical parameters described below.... The scope of this review includes all seamless pipe meeting the physical parameters described above...
Design, Development and Analysis of Centrifugal Blower
NASA Astrophysics Data System (ADS)
Baloni, Beena Devendra; Channiwala, Salim Abbasbhai; Harsha, Sugnanam Naga Ramannath
2018-06-01
Centrifugal blowers are widely used turbomachines in all kinds of modern and domestic applications. Manufacturing of blowers seldom follows an optimum design solution for the individual blower. Although centrifugal blowers have been developed into highly efficient machines, design is still based on various empirical and semi-empirical rules proposed by fan designers. Different methodologies are used to design the impeller and other components of blowers. The objective of the present study is to examine explicit design methodologies and trace a unified design to obtain better design-point performance. This unified design methodology is based more on fundamental concepts and minimal assumptions. A parametric study is also carried out on the effect of the design parameters on pressure ratio and their interdependency in the design. A design code based on the unified approach is developed in C. Numerical analysis is carried out to check the flow parameters inside the blower. Two blowers, one based on the present design and the other on an industrial design, are built with a standard OEM blower manufacturing unit. Both designs are compared through experimental performance analysis as per the IS standard. The results suggest better efficiency and a higher flow rate for the same pressure head for the present design compared with the industrial one.
NASA Astrophysics Data System (ADS)
Giovinazzo, G.; Ribas, N.; Cinca, J.; Rosell-Ferrer, J.
2010-04-01
Previous studies have shown that it is possible to evaluate the level of heart graft rejection using a bioimpedance technique by means of an intracavitary catheter. However, this technique offers no relevant advantage over the gold standard for the detection of heart rejection, which is biopsy of the endomyocardial tissue. We propose to use a less invasive technique consisting of a transoesophageal catheter and two standard ECG electrodes on the thorax. The aim of this work is to evaluate different parameters affecting the impedance measurement, including sensitivity to the electrical conductivity and permittivity of different organs in the thorax, lung edema and pleural water. From these results, we deduce the best estimator for cardiac rejection detection and obtain the tools to identify possible false positives of heart rejection due to other factors. To achieve these objectives we created a thoracic model and simulated, with a FEM program, different situations at the frequencies of 13, 30, 100, 300 and 1000 kHz. Our simulations demonstrate that the phase, at 100 and 300 kHz, has the highest sensitivity to changes in the electrical parameters of the heart muscle.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
45 CFR 153.110 - Standards for the State notice of benefit and payment parameters.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Standards for the State notice of benefit and payment parameters. 153.110 Section 153.110 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS STANDARDS RELATED TO REINSURANCE, RISK CORRIDORS, AND RISK...
45 CFR 153.110 - Standards for the State notice of benefit and payment parameters.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Standards for the State notice of benefit and payment parameters. 153.110 Section 153.110 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS STANDARDS RELATED TO REINSURANCE, RISK CORRIDORS, AND RISK...
45 CFR 153.110 - Standards for the State notice of benefit and payment parameters.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Standards for the State notice of benefit and payment parameters. 153.110 Section 153.110 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS STANDARDS RELATED TO REINSURANCE, RISK CORRIDORS, AND RISK...
Richards-like two species population dynamics model.
Ribeiro, Fabiano; Cabella, Brenno Caetano Troca; Martinez, Alexandre Souto
2014-12-01
The two-species population dynamics model is the simplest paradigm of inter- and intra-species interaction. Here, we present a generalized Lotka-Volterra model with intraspecific competition, which retrieves some well-known models as particular cases. The generalization parameter is related to the species' habitat dimensionality and their interaction range. Contrary to standard models, the species coupling parameters are general and not restricted to non-negative values. Therefore, they may represent different ecological regimes, which are derived from stability analysis of the asymptotic solutions and are represented in a phase diagram. In this diagram, we identify a forbidden region in the mutualism regime, and a survival/extinction transition that depends on initial conditions in the competition regime. We also distinguish two types of predation and competition: weak, if the species coexist, or strong, if at least one species goes extinct.
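A minimal numerical sketch of a two-species Lotka-Volterra-type system with intraspecific competition is given below. The functional form and parameter values are generic illustrations; the paper's generalisation parameter (habitat dimensionality and interaction range) is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_species(t, y, r, K, a):
    """dx_i/dt = r_i x_i (1 - x_i/K_i + a_i x_j/K_i).
    The couplings a_i may take either sign, so the same system can describe
    competition (both negative), predation (mixed signs) or mutualism
    (both positive), mirroring the regimes discussed above."""
    x1, x2 = y
    dx1 = r[0] * x1 * (1.0 - x1 / K[0] + a[0] * x2 / K[0])
    dx2 = r[1] * x2 * (1.0 - x2 / K[1] + a[1] * x1 / K[1])
    return [dx1, dx2]

r, K = (1.0, 0.8), (100.0, 80.0)
a = (-0.5, -0.6)                          # an illustrative competition regime
sol = solve_ivp(two_species, (0.0, 50.0), [10.0, 10.0], args=(r, K, a))
print(sol.y[:, -1])                       # asymptotic populations
```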
NASA Astrophysics Data System (ADS)
Wijaya, I. M. W.; Soedjono, E. S.
2018-03-01
Municipal wastewater is the main contributor to diverse water pollution problems. In order to prevent pollution risks, wastewater has to be treated before being discharged to receiving waters. Selection of an appropriate treatment process requires information on the wastewater characteristics as a design consideration. This study aims to analyse the physicochemical characteristics of municipal wastewater at the inlet and outlet of ABR units around Surabaya City. Medokan Semampir and Genteng Candi Rejo were selected as wastewater sampling points. The samples were analysed in the laboratory for parameters such as pH, TSS, COD, BOD, NH4+, NO3-, NO2-, P, and detergent. The results showed that all parameters at both locations are below the national standard for discharged water quality; in other words, the treated water can be safely discharged to the river.
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
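A schematic simulation of the two-regime idea follows: a standard ARCH(1) recursion below a volatility threshold, and a variance fed by recalled threshold-exceeding values above it. The parameter values, the threshold and the memory rule are illustrative only and do not reproduce the paper's exact process.

```python
import numpy as np

rng = np.random.default_rng(42)

def two_regime_arch(n, a0=0.1, a1=0.6, threshold=1.0, memory=50):
    """ARCH(1)-like series with an extra 'impacting event' regime.
    Below `threshold` the variance follows the usual ARCH(1) recursion; above
    it, past values that exceeded the threshold are recalled and mixed into
    the variance (a crude stand-in for the memory mechanism described above)."""
    x = np.zeros(n)
    sig2 = np.full(n, a0)
    exceed = []                               # recalled impacting events
    for t in range(1, n):
        if np.sqrt(sig2[t - 1]) <= threshold or not exceed:
            sig2[t] = a0 + a1 * x[t - 1] ** 2         # regular regime
        else:
            recall = np.mean(np.square(exceed[-memory:]))
            sig2[t] = a0 + a1 * recall                # memory regime
        x[t] = np.sqrt(sig2[t]) * rng.standard_normal()
        if abs(x[t]) > threshold:
            exceed.append(x[t])
    return x, np.sqrt(sig2)

x, vol = two_regime_arch(100_000)
# Fraction beyond 3 standard deviations; a Gaussian would give about 0.0027.
print(np.mean(np.abs(x) > 3.0 * x.std()))
```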
Metal leaching in drinking water domestic distribution system: an Italian case study.
Sorlini, Sabrina; Gialdini, Francesca; Collivignarelli, Carlo
2014-01-01
The objective of this study was to evaluate metal contamination of tap water in seven public buildings in Brescia (Italy). Two monitoring periods were carried out using three different sampling methods (overnight stagnation, 30-min stagnation, and random daytime). The results show that the water parameters exceeding the international standards (Directive 98/83/EC) at the tap were lead (max = 363 μg/L), nickel (max = 184 μg/L), zinc (max = 4900 μg/L), and iron (max = 393 μg/L). Of the 122 tap water samples analyzed, the proportion exceeding the limits of Directive 98/83/EC was 17% for lead, 11% for nickel, 14% for zinc, and 7% for iron. Three buildings exceeded the iron standard, while five buildings exceeded the standards for nickel, lead, and zinc. Moreover, there is no evident correlation between the leaching of contaminants in the domestic distribution system and the age of the pipes, whereas the sampling method shows a significant influence.
Felicetti, G; Avanza, F; Fiori, M; Brignoli, E; Rovescala, R
1996-01-01
The knee is a common site of injuries to the cartilage, capsule and ligaments, which calls for the use of noninvasive techniques to assess injury severity properly and to plan adequate rehabilitation. Our study was aimed at comparing MR with isokinetic findings. For this purpose, 40 patients were examined; all were affected by chondromalacia patellae, grades I-III, previously diagnosed at arthroscopy. Specifically, 8 patients had grade I chondromalacia and 32 had grade II or III. After the MR and isokinetic examinations, all patients underwent a standardized rehabilitation program. Our results indicate a marked decrease in quadriceps strength, especially in the most severe cases; in less severe cases, recovery was complete at 6 months, while a deficit remained in grade II and III injuries. MR findings were not informative in 4 of 8 cases, while isokinetic findings were negative in one case. Both methods were positive in the most severe cases. At 6 months, both functional and MR findings were normal in grade I injuries, while some alterations remained in the others.
Symmetries and integrability of a fourth-order Euler-Bernoulli beam equation
NASA Astrophysics Data System (ADS)
Bokhari, Ashfaque H.; Mahomed, F. M.; Zaman, F. D.
2010-05-01
The complete symmetry group classification of the fourth-order Euler-Bernoulli ordinary differential equation, where the elastic modulus and the area moment of inertia are constants and the applied load is a function of the normal displacement, is obtained. We perform the Lie and Noether symmetry analysis of this problem. In the Lie analysis, the principal Lie algebra which is one dimensional extends in four cases, viz. the linear, exponential, general power law, and a negative fractional power law. It is further shown that two cases arise in the Noether classification with respect to the standard Lagrangian. That is, the linear case for which the Noether algebra dimension is one less than the Lie algebra dimension as well as the negative fractional power law. In the latter case the Noether algebra is three dimensional and is isomorphic to the Lie algebra which is sl(2,R). This exceptional case, although admitting the nonsolvable algebra sl(2,R), remarkably allows for a two-parameter family of exact solutions via the Noether integrals. The Lie reduction gives a second-order ordinary differential equation which has nonlocal symmetry.
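For concreteness, the equation being classified can be written in the standard static Euler-Bernoulli form below, with the load law f(y) left general; the four Lie cases listed above correspond to linear, exponential, power-law and negative-fractional-power-law choices of f.

```latex
% Static Euler-Bernoulli beam with constant flexural rigidity EI and a
% transverse load that depends on the normal displacement y(x):
E I \, \frac{d^{4} y}{d x^{4}} = f(y), \qquad E I = \text{const}.
```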
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independency under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
Hearing Aid–Related Standards and Test Systems
Ravn, Gert; Preves, David
2015-01-01
Many documents describe standardized methods and standard equipment requirements in the field of audiology and hearing aids. These standards ensure a uniform level and a high quality of both the methods and the equipment used in audiological work. The standards create the basis for measuring performance in a reproducible manner, independently of how, when, and by whom the parameters have been measured. This article explains, and focuses on, relevant acoustic and electromagnetic compatibility parameters and describes several available test systems. PMID:27516709
Application of the Hartmann-Tran profile to analysis of H2O spectra
NASA Astrophysics Data System (ADS)
Lisak, D.; Cygan, A.; Bermejo, D.; Domenech, J. L.; Hodges, J. T.; Tran, H.
2015-10-01
The Hartmann-Tran profile (HTP), which has been recently recommended as a new standard in spectroscopic databases, is used to analyze spectra of several lines of H2O diluted in N2, SF6, and in pure H2O. This profile accounts for various mechanisms affecting the line-shape and can be easily computed in terms of combinations of the complex Voigt profile. A multi-spectrum fitting procedure is implemented to simultaneously analyze spectra of H2O transitions acquired at different pressures. Multi-spectrum fitting of the HTP to a theoretical model confirms that this profile provides an accurate description of H2O line-shapes in terms of residuals and accuracy of fitted parameters. This profile and its limiting cases are also fit to measured spectra for three H2O lines in different vibrational bands. The results show that it is possible to obtain accurate HTP line-shape parameters when measured spectra have a sufficiently high signal-to-noise ratio and span a broad range of collisional-to-Doppler line widths. Systematic errors in the line area and differences in retrieved line-shape parameters caused by the overly simplistic line-shape models are quantified. Also limitations of the quadratic speed-dependence model used in the HTP are demonstrated in the case of an SF6 broadened H2O line, which leads to a strongly asymmetric line-shape.
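Since the HTP is expressed through combinations of the complex Voigt (probability) function, its basic building block can be sketched as below via the Faddeeva function; this is only the Voigt limiting case, without the speed-dependence and velocity-changing-collision parameters of the full HTP, and the line parameters are arbitrary illustrative values rather than fitted H2O values.

```python
import numpy as np
from scipy.special import wofz

def voigt_profile(nu, nu0, doppler_sigma, lorentz_gamma):
    """Area-normalised Voigt profile evaluated with the Faddeeva function w(z).
    doppler_sigma: Gaussian (Doppler) standard deviation; lorentz_gamma:
    Lorentzian (collisional) half-width at half-maximum."""
    z = ((nu - nu0) + 1j * lorentz_gamma) / (doppler_sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (doppler_sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(-5.0, 5.0, 1001)
profile = voigt_profile(nu, 0.0, doppler_sigma=0.5, lorentz_gamma=0.3)
print(np.trapz(profile, nu))   # close to 1, confirming the normalisation
```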
Coupled Boltzmann computation of mixed axion neutralino dark matter in the SUSY DFSZ axion model
NASA Astrophysics Data System (ADS)
Bae, Kyu Jung; Baer, Howard; Lessa, Andre; Serce, Hasan
2014-10-01
The supersymmetrized DFSZ axion model is highly motivated not only because it offers solutions to both the gauge hierarchy and strong CP problems, but also because it provides a solution to the SUSY μ-problem which naturally allows for a Little Hierarchy. We compute the expected mixed axion-neutralino dark matter abundance for the SUSY DFSZ axion model in two benchmark cases—a natural SUSY model with a standard neutralino underabundance (SUA) and an mSUGRA/CMSSM model with a standard overabundance (SOA). Our computation implements coupled Boltzmann equations which track the radiation density along with neutralino, axion, axion CO (produced via coherent oscillations), saxion, saxion CO, axino and gravitino densities. In the SUSY DFSZ model, axions, axinos and saxions go through the process of freeze-in—in contrast to freeze-out or out-of-equilibrium production as in the SUSY KSVZ model—resulting in thermal yields which are largely independent of the re-heat temperature. We find the SUA case with suppressed saxion-axion couplings (ξ = 0) only admits solutions for a PQ breaking scale f_a ≲ 6 × 10^12 GeV, where the bulk of parameter space tends to be axion-dominated. For SUA with allowed saxion-axion couplings (ξ = 1), f_a values up to ~10^14 GeV are allowed. For the SOA case, almost all of the SUSY DFSZ parameter space is disallowed by a combination of overproduction of dark matter, overproduction of dark radiation or violation of BBN constraints. An exception occurs at very large f_a ~ 10^15-10^16 GeV, where large entropy dilution from CO-produced saxions leads to allowed models.
NASA Astrophysics Data System (ADS)
Pérez Urquiza, M.; Acatzi Silva, A. I.
2014-02-01
Three certified reference materials, produced from powdered seeds according to ISO Guides 34 and 35, were prepared to measure the copy number ratio of the sequences p35S/hmgA in maize containing the MON 810 event, p35S/Le1 in soybeans containing the GTS 40-3-2 event, and DREB1A/acc1 in wheat. In this paper, we report digital polymerase chain reaction (dPCR) protocols, performance parameters and results for the copy number ratio of genetically modified organisms (GMOs) in these materials using two new dPCR systems to detect and quantify the target DNA: the BioMark® (Fluidigm) and the OpenArray® (Life Technologies) systems. These technologies were implemented at the National Institute of Metrology in Mexico (CENAM) and at the Reference Center for GMO Detection of the Ministry of Agriculture (CNRDOGM), respectively. The main advantage of this technique over the more widely used quantitative polymerase chain reaction (qPCR) is that it yields an absolute number of target molecules in the sample, without reference to standards or an endogenous control, which is very useful when little information is available for new developments, when there are no standard reference materials on the market (as in the wheat case presented), or when it is not possible to test the purity of the seeds (as in the maize case presented here). Both systems offered enhanced productivity, increased reliability and a reduced instrument footprint. The performance parameters and measurement uncertainty obtained with both systems are presented and compared.
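The absolute quantification underlying dPCR follows from Poisson statistics of the partitions: the mean number of copies per partition is recovered from the fraction of positive partitions, and the GM ratio is the ratio of the corrected concentrations of the event-specific and reference sequences. A minimal sketch follows; the partition counts are invented for illustration and the per-partition volume (needed for copies per microlitre) is omitted.

```python
import numpy as np

def copies_per_partition(positive, total):
    """Mean target copies per partition from the fraction of positive
    partitions, assuming Poisson loading: lambda = -ln(1 - k/n)."""
    return -np.log(1.0 - positive / total)

def copy_number_ratio(pos_event, pos_reference, total):
    """GM copy-number ratio as the ratio of the Poisson-corrected
    concentrations of the event-specific and endogenous reference sequences."""
    return copies_per_partition(pos_event, total) / copies_per_partition(pos_reference, total)

# Hypothetical chip with 20,000 partitions; the counts are illustrative only.
print(copy_number_ratio(pos_event=1500, pos_reference=15000, total=20000))
```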
Worst-case space radiation environments for geocentric missions
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Seltzer, S. M.
1976-01-01
Worst-case possible annual radiation fluences of energetic charged particles in the terrestrial space environment, and the resultant depth-dose distributions in aluminum, were calculated in order to establish absolute upper limits to the radiation exposure of spacecraft in geocentric orbits. The results are a concise set of data intended to aid in the determination of the feasibility of a particular mission. The data may further serve as guidelines in the evaluation of standard spacecraft components. Calculations were performed for each significant particle species populating or visiting the magnetosphere, on the basis of volume occupied by or accessible to the respective species. Thus, magnetospheric space was divided into five distinct regions using the magnetic shell parameter L, which gives the approximate geocentric distance (in earth radii) of a field line's equatorial intersect.
The fate of the littlest Higgs model with T -parity under 13 TeV LHC data
NASA Astrophysics Data System (ADS)
Dercks, Daniel; Moortgat-Pick, Gudrid; Reuter, Jürgen; Shim, So Young
2018-05-01
We exploit the available LHC data at center-of-mass energies of 8 and 13 TeV in searches for physics beyond the Standard Model. We scrutinize the allowed parameter space of Little Higgs models with the concrete symmetry of T-parity by providing comprehensive analyses of all relevant production channels of heavy vectors, top partners, heavy quarks and heavy leptons and of all phenomenologically relevant decay channels. Constraints on the model, particularly on the symmetry breaking scale f, are derived from the signatures of jets plus missing energy or leptons plus missing energy. Besides the symmetric case, we also study the case of T-parity violation. Furthermore, we give an extrapolation to the LHC high-luminosity phase at 14 TeV.
Lattice model for water-solute mixtures.
Furlan, A P; Almarza, N G; Barbosa, M C
2016-10-14
A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the interaction energies between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as functions of the solute fraction, have been studied for different interaction parameters of the model. Our model reproduces qualitatively well the excess volume and enthalpy of different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
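A bare-bones numerical sketch of the phase-two IPW idea is given below (toy data; the strata, sampling fractions and auxiliary variable are invented). It estimates a cohort total of influence-function-like contributions by Horvitz-Thompson weighting and then applies a simple ratio calibration of the weights to a known cohort total of an auxiliary variable, the kind of adjustment that can shrink the design-based variance component.

```python
import numpy as np

rng = np.random.default_rng(3)

# Phase one: a cohort with an influence-function-like contribution y and an
# auxiliary variable x that is known for everyone and correlated with y.
N = 10_000
x = rng.gamma(2.0, 1.0, N)
y = 0.8 * x + rng.normal(0.0, 0.5, N)

# Phase two: stratified subsampling with known sampling fractions.
stratum = (x > np.median(x)).astype(int)        # two strata
frac = np.array([0.05, 0.20])                   # oversample the high-x stratum
selected = rng.random(N) < frac[stratum]
w = 1.0 / frac[stratum]                         # inverse-probability weights

ht_total = np.sum(w[selected] * y[selected])    # Horvitz-Thompson estimate of sum(y)

# Simple ratio calibration of the weights to the known cohort total of x.
w_cal = w[selected] * x.sum() / np.sum(w[selected] * x[selected])
cal_total = np.sum(w_cal * y[selected])

print(y.sum(), ht_total, cal_total)             # true total vs the two estimates
```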
Tsuboi, Kazuya; Yamamoto, Hiroshi; Somura, Fuji; Goto, Hiromi
2015-01-01
Enzyme replacement therapy (ERT) is the only approved therapy for Fabry disease. In June 2009, there was a worldwide shortage of agalsidase beta, necessitating dose reductions or switching to agalsidase alfa in some patients. We present two cases of Fabry disease (a parent and a child) who received agalsidase beta for 27 months at the licensed dose and 10 months at a reduced dose, followed by a switch to agalsidase alfa for 28 months. Case 1, a 26-year-old male, had severe coughing and fatigue during ERT with agalsidase beta, requiring antitussive and anti-asthmatic drug therapy. After switching to agalsidase alfa, the coughing gradually resolved completely. Case 2, a 62-year-old female, had advanced cardiac manifestations at the time of diagnosis. Despite receiving ERT with the approved dose of agalsidase beta, she experienced aggravation of congestive heart failure and was hospitalized. After switching to agalsidase alfa combined with standard heart disease care, her BNP level, echocardiographic parameters, eGFR and lyso-Gb3 levels improved or stabilized. We report on two Fabry disease patients who experienced severe adverse events while on approved and/or reduced doses of agalsidase beta. Switching to agalsidase alfa, combined with standard heart disease care, led to resolution or improvement of the cardiorespiratory status, and dose reduction combined with standard respiratory care was useful in decreasing cough and fatigue. Plasma BNP level was useful for monitoring heart failure and the effects of ERT.
Nationwide community survey of tuberculosis epidemiology in Saudi Arabia.
al-Kassimi, F A; Abdullah, A K; al-Hajjaj, M S; al-Orainey, I O; Bamgboye, E A; Chowdhury, M N
1993-08-01
In the first nationwide community-based survey of the epidemiology of tuberculosis in Saudi Arabia, 7721 subjects were screened in the 5 provinces (using an equal proportional allocation formula) for 2 parameters: (1) prevalence of a positive Mantoux test in non-BCG-vaccinated subjects; (2) prevalence of bacillary cases on sputum culture. The prevalence of a positive Mantoux reaction in children aged 5-14 years was 6% +/- 1.8; higher in urban areas (10%), and lower in rural areas (2%), thus classifying Saudi Arabia among the middle prevalence countries. These relatively good results (by Third World standards) could reflect the rise in the standard of living and the wide availability of free treatment for active cases, with a lowered risk of infection in the community. This view is supported by the fact that in our survey, only one subject grew Mycobacterium tuberculosis in the sputum. However, there were foci of high prevalence of Mantoux reaction in the urban communities in the Western province (20% +/- 8.7 urban; 1% +/- 1.9 rural). The problem may be caused by the fact that the province receives over a million pilgrims every year, some of whom are known to settle illegally and escape the usual screening for tuberculosis imposed on foreign labourers. In conclusion, even in the absence of an enforceable national programme for the eradication of tuberculosis, the economic standard and wide availability of free treatment for active cases have resulted in relatively low rates of prevalence of tuberculin sensitivity in children. The foci of high prevalence in the Western Province require special screening arrangements.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Using the Modification Index and Standardized Expected Parameter Change for Model Modification
ERIC Educational Resources Information Center
Whittaker, Tiffany A.
2012-01-01
Model modification is oftentimes conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve the fit. The purpose of this…
Metal Standards for Waveguide Characterization of Materials
NASA Technical Reports Server (NTRS)
Lambert, Kevin M.; Kory, Carol L.
2009-01-01
Rectangular-waveguide inserts that are made of non-ferromagnetic metals and are sized and shaped to function as notch filters have been conceived as reference standards for use in the rectangular- waveguide method of characterizing materials with respect to such constitutive electromagnetic properties as permittivity and permeability. Such standards are needed for determining the accuracy of measurements used in the method, as described below. In this method, a specimen of a material to be characterized is cut to a prescribed size and shape and inserted in a rectangular- waveguide test fixture, wherein the specimen is irradiated with a known source signal and detectors are used to measure the signals reflected by, and transmitted through, the specimen. Scattering parameters [also known as "S" parameters (S11, S12, S21, and S22)] are computed from ratios between the transmitted and reflected signals and the source signal. Then the permeability and permittivity of the specimen material are derived from the scattering parameters. Theoretically, the technique for calculating the permeability and permittivity from the scattering parameters is exact, but the accuracy of the results depends on the accuracy of the measurements from which the scattering parameters are obtained. To determine whether the measurements are accurate, it is necessary to perform comparable measurements on reference standards, which are essentially specimens that have known scattering parameters. To be most useful, reference standards should provide the full range of scattering-parameter values that can be obtained from material specimens. Specifically, measurements of the backscattering parameter (S11) from no reflection to total reflection and of the forward-transmission parameter (S21) from no transmission to total transmission are needed. A reference standard that functions as a notch (band-stop) filter can satisfy this need because as the signal frequency is varied across the frequency range for which the filter is designed, the scattering parameters vary over the ranges of values between the extremes of total reflection and total transmission. A notch-filter reference standard in the form of a rectangular-waveguide insert that has a size and shape similar to that of a material specimen is advantageous because the measurement configuration used for the reference standard can be the same as that for a material specimen. Typically a specimen is a block of material that fills a waveguide cross-section but occupies only a small fraction of the length of the waveguide. A reference standard of the present type (see figure) is a metal block that fills part of a waveguide cross section and contains a slot, the long dimension of which can be chosen to tailor the notch frequency to a desired value. The scattering parameters and notch frequency can be estimated with high accuracy by use of commercially available electromagnetic-field-simulating software. The block can be fabricated to the requisite precision by wire electrical-discharge machining. In use, the accuracy of measurements is determined by comparison of (1) the scattering parameters calculated from the measurements with (2) the scattering parameters calculated by the aforementioned software.
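For context, one commonly used way to go from measured waveguide S-parameters to permittivity and permeability is the Nicolson-Ross-Weir inversion; the sketch below is a generic version of that procedure (not necessarily the one used with these reference standards) for a single frequency point in a TE10 rectangular waveguide, and it ignores the branch ambiguity of the complex logarithm that arises for electrically thick specimens.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def nrw_extract(s11, s21, freq_hz, length_m, a_m):
    """Nicolson-Ross-Weir extraction of (eps_r, mu_r) from scalar complex
    S11 and S21 at one frequency. length_m: specimen thickness along the
    guide; a_m: broad wall dimension (TE10 cutoff wavelength = 2*a).
    Uses the principal branch of the complex log, so it is only valid for
    specimens thinner than one guided wavelength."""
    lam0 = C0 / freq_hz                        # free-space wavelength
    lam_c = 2.0 * a_m                          # TE10 cutoff wavelength
    X = (s11**2 - s21**2 + 1.0) / (2.0 * s11)
    gamma = X + np.sqrt(X**2 - 1.0 + 0j)
    if abs(gamma) > 1.0:                       # pick the physical root |Gamma| <= 1
        gamma = X - np.sqrt(X**2 - 1.0 + 0j)
    T = (s11 + s21 - gamma) / (1.0 - (s11 + s21) * gamma)
    inv_lambda_sq = -(np.log(1.0 / T) / (2.0 * np.pi * length_m)) ** 2
    inv_lambda = np.sqrt(inv_lambda_sq)
    root = np.sqrt(1.0 / lam0**2 - 1.0 / lam_c**2 + 0j)
    mu_r = (1.0 + gamma) * inv_lambda / ((1.0 - gamma) * root)
    eps_r = lam0**2 * (1.0 / lam_c**2 + inv_lambda_sq) / mu_r
    return eps_r, mu_r
```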
Thoracic Inlet Parameters for Degenerative Cervical Spondylolisthesis Imaging Measurement.
Wang, Quanbing; Wang, Xiao-Tao; Zhu, Lei; Wei, Yu-Xi
2018-04-05
BACKGROUND The aim of this study was to explore the diagnostic value of sagittal measurement of thoracic inlet parameters for degenerative cervical spondylolisthesis (DCS). MATERIAL AND METHODS We initially included 65 patients with DCS and the same number of healthy people as the control group, using cervical radiograph evaluations. We analyzed the x-ray and computed tomography (CT) data in prone and standing positions at the same time. Measurement of cervical sagittal parameters was carried out in a standardized supine position. Multivariate logistic regression analysis was performed to evaluate these parameters as a diagnostic index for DCS. RESULTS There were 60 cases enrolled in the DCS group, and 62 cases included in the control group. The T1 slope and thoracic inlet angle (TIA) were significantly greater in the DCS group compared to the control group (24.33±2.85º versus 19.59±2.04º, p=0.00; 76.11±9.82º versus 72.86±7.31º, p=0.03, respectively). We observed no significant difference in neck tilt (NT) or C2-C7 angle between the control and DCS groups (p>0.05). Logistic regression analysis and the receiver operating characteristic (ROC) curve revealed that a preoperative T1 slope of more than 22.0º had significant diagnostic value for the DCS group (p<0.05). CONCLUSIONS Patients with preoperative sagittal imbalance of the thoracic inlet have a statistically significantly increased risk of DCS. A T1 slope of more than 22.0º showed significant diagnostic value for the incidence of DCS.
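The kind of analysis described (a logistic model for DCS status plus an ROC-derived cut-off) can be sketched as follows. The data are simulated to mimic the reported group sizes, means and standard deviations of the T1 slope, and the Youden-index rule used here is one common, but not the only, way to choose a threshold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)

# Simulated T1 slope (degrees): 62 controls (label 0) and 60 DCS patients (label 1).
t1_slope = np.concatenate([rng.normal(19.6, 2.0, 62), rng.normal(24.3, 2.9, 60)])
dcs = np.concatenate([np.zeros(62, dtype=int), np.ones(60, dtype=int)])

model = LogisticRegression().fit(t1_slope.reshape(-1, 1), dcs)
prob = model.predict_proba(t1_slope.reshape(-1, 1))[:, 1]

fpr, tpr, thresholds = roc_curve(dcs, prob)
best = np.argmax(tpr - fpr)                 # Youden's J statistic
# Map the probability threshold back to a T1-slope cut-off via the logit.
p_cut = thresholds[best]
slope_cut = (np.log(p_cut / (1.0 - p_cut)) - model.intercept_[0]) / model.coef_[0, 0]
print(f"AUC = {roc_auc_score(dcs, prob):.3f}, T1-slope cut-off ~ {slope_cut:.1f} degrees")
```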
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. A typical application are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
QCD-Electroweak First-Order Phase Transition in a Supercooled Universe.
Iso, Satoshi; Serpico, Pasquale D; Shimada, Kengo
2017-10-06
If the electroweak sector of the standard model is described by classically conformal dynamics, the early Universe evolution can be substantially altered. It is already known that, contrary to the standard model case, a first-order electroweak phase transition may occur. Here we show that, depending on the model parameters, a dramatically different scenario may happen: a first-order, six-massless-quark QCD phase transition occurs first, which then triggers the electroweak symmetry breaking. We derive the necessary conditions for this dynamics to occur, using the specific example of the classically conformal B-L model. In particular, relatively light weakly coupled particles are predicted, with implications for collider searches. This scenario is also potentially rich in cosmological consequences, such as renewed possibilities for electroweak baryogenesis, altered dark matter production, and gravitational wave production, as we briefly comment upon.
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass) to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint of "Beyond the Standard Model" physics.
Null but not void: considerations for hypothesis testing.
Shaw, Pamela A; Proschan, Michael A
2013-01-30
Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.
14 CFR § 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2014 CFR
2014-01-01
... flights: (1) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission...
An extension of the standard model with a single coupling parameter
NASA Astrophysics Data System (ADS)
Atance, Mario; Cortés, José Luis; Irastorza, Igor G.
1997-02-01
We show that it is possible to find an extension of the matter content of the standard model with a unification of gauge and Yukawa couplings reproducing their known values. The perturbative renormalizability of the model with a single coupling and the requirement to accommodate the known properties of the standard model fix the masses and couplings of the additional particles. The implications on the parameters of the standard model are discussed.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess how much these fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a subset of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
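As an illustration of the Sobol' analysis workflow referred to above, the following sketch computes first-order and total-order indices with the SALib package on a made-up three-parameter stand-in for a Noah-MP flux; the parameter names, bounds, and toy model are assumptions, not the actual Noah-MP configuration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical problem definition: three parameters with illustrative bounds,
# standing in for Noah-MP standard and hard-coded parameters.
problem = {
    "num_vars": 3,
    "names": ["soil_resistance", "snow_albedo", "root_depth"],
    "bounds": [[1.0, 200.0], [0.4, 0.9], [0.2, 3.0]],
}

def toy_latent_heat(x):
    """Stand-in for a Noah-MP run returning a latent-heat-like output."""
    r, a, d = x
    return 100.0 / (1.0 + 0.01 * r) + 50.0 * (1.0 - a) + 5.0 * np.sqrt(d)

X = saltelli.sample(problem, 1024)           # Saltelli sampling scheme
Y = np.apply_along_axis(toy_latent_heat, 1, X)
Si = sobol.analyze(problem, Y)               # first-order and total-order indices

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s}  S1 = {s1:5.2f}   ST = {st:5.2f}")
```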
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
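A minimal sketch of the empirical (bootstrap) route to standard errors for a positivity-constrained regression is given below; the design matrix, sample size, and true coefficients are invented for illustration, and the theoretical standard errors derived in the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical feature (design) matrix X and observed dissimilarities y;
# in a feature network model the columns would code known features.
n, p = 60, 4
X = rng.integers(0, 2, size=(n, p)).astype(float)
beta_true = np.array([1.5, 0.0, 0.8, 2.0])
y = X @ beta_true + rng.normal(0.0, 0.3, size=n)

beta_hat, _ = nnls(X, y)                 # positivity-constrained least squares

# Empirical standard errors via nonparametric bootstrap over observations.
B = 2000
boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b], _ = nnls(X[idx], y[idx])

se_boot = boot.std(axis=0, ddof=1)
print("estimates:", beta_hat.round(3))
print("bootstrap SEs:", se_boot.round(3))
```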
NASA Astrophysics Data System (ADS)
Wang, Zhi-Wei; Steele, T. G.; Hanif, T.; Mann, R. B.
2016-08-01
We consider a conformal complex singlet extension of the Standard Model with a Higgs portal interaction. The global U(1) symmetry of the complex singlet can be either broken or unbroken and we study each scenario. In the unbroken case, the global U(1) symmetry protects the complex singlet from decaying, leading to an ideal cold dark matter candidate with approximately 100 GeV mass along with a significant proportion of thermal relic dark matter abundance. In the broken case, we have developed a renormalization-scale optimization technique to significantly narrow the parameter space and in some situations, provide unique predictions for all the model's couplings and masses. We have found there exists a second Higgs boson with a mass of approximately 550 GeV that mixes with the known 125 GeV Higgs with a large mixing angle sin θ ≈ 0.47 consistent with current experimental limits. The imaginary part of the complex singlet in the broken case could provide axion dark matter for a wide range of models. Upon including interactions of the complex scalar with an additional vector-like fermion, we explore the possibility of a diphoton excess in both the unbroken and the broken cases. In the unbroken case, the model can provide a natural explanation for diphoton excess if extra terms are introduced providing extra contributions to the singlet mass. In the broken case, we find a set of coupling solutions that yield a second Higgs boson of mass 720 GeV and an 830 GeV extra vector-like fermion F , which is able to address the 750 GeV LHC diphoton excess. We also provide criteria to determine the symmetry breaking pattern in both the Higgs and hidden sectors.
Hoogendam, Jacob P; Zweemer, Ronald P; Hobbelink, Monique G G; van den Bosch, Maurice A A J; Verheijen, René H M; Veldhuis, Wouter B
2016-04-01
We aimed to explore the accuracy of (99m)Tc SPECT/MRI fusion for the selective assessment of nonenlarged sentinel lymph nodes (SLNs) for diagnosing metastases in early-stage cervical cancer patients. We consecutively included stage IA1-IIB1 cervical cancer patients who presented to our tertiary referral center between March 2011 and February 2015. Patients with enlarged lymph nodes (short axis ≥ 10 mm) on MRI were excluded. Patients underwent an SLN procedure with preoperative (99m)Tc-nanocolloid SPECT/CT-based SLN mapping. When fused datasets of the SPECT and MR images were created, SLNs could be identified on the MR image with accurate correlation to the histologic result of each individual SLN. An experienced radiologist, masked to histology, retrospectively reviewed all fused SPECT/MR images and scored morphologic SLN parameters on a standardized case report form. Logistic regression and receiver-operating curves were used to model the parameters against the SLN status. In 75 cases, 136 SLNs were eligible for analysis, of which 13 (9.6%) contained metastases (8 cases). Three parameters-short-axis diameter, long-axis diameter, and absence of sharp demarcation-significantly predicted metastatic invasion of nonenlarged SLNs, with quality-adjusted odds ratios of 1.42 (95% confidence interval [CI], 1.01-1.99), 1.28 (95% CI, 1.03-1.57), and 7.55 (95% CI, 1.09-52.28), respectively. The area under the curve of the receiver-operating curves combining these parameters was 0.749 (95% CI, 0.569-0.930). Heterogeneous gadolinium enhancement, cortical thickness, round shape, or SLN size, compared with the nearest non-SLN, showed no association with metastases (P= 0.055-0.795). In cervical cancer patients without enlarged lymph nodes, selective evaluation of only the SLNs-for size and absence of sharp demarcation-can be used to noninvasively assess the presence of metastases. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Dolejs, Josef; Marešová, Petra
2017-01-01
The answer to the question "At what age does aging begin?" is tightly related to the question "Where is the onset of mortality increase with age?" Age affects mortality rates from all diseases differently than it affects mortality rates from nonbiological causes. Mortality increase with age in adult populations has been modeled by many authors, and little attention has been given to mortality decrease with age after birth. Nonbiological causes are excluded, and the category "all diseases" is studied. It is analyzed in Denmark, Finland, Norway, and Sweden during the period 1994-2011, and all possible models are screened. Age trajectories of mortality are analyzed separately: before the age category where mortality reaches its minimal value and after the age category. Resulting age trajectories from all diseases showed a strong minimum, which was hidden in total mortality. The inverse proportion between mortality and age fitted in 54 of 58 cases before mortality minimum. The Gompertz model with two parameters fitted as mortality increased with age in 17 of 58 cases after mortality minimum, and the Gompertz model with a small positive quadratic term fitted data in the remaining 41 cases. The mean age where mortality reached minimal value was 8 (95% confidence interval 7.05-8.95) years. The figures depict an age where the human population has a minimal risk of death from biological causes. Inverse proportion and the Gompertz model fitted data on both sides of the mortality minimum, and three parameters determined the shape of the age-mortality trajectory. Life expectancy should be determined by the two standard Gompertz parameters and also by the single parameter in the model c/x. All-disease mortality represents an alternative tool to study the impact of age. All results are based on published data.
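The two functional forms named in the abstract can be written out as follows (notation ours; placing the small quadratic term in the exponent is one common convention and is an assumption here):

```latex
% Mortality rate m(x) at age x: inverse proportion below the minimum,
% Gompertz (optionally with a small quadratic correction) above it.
\[
  m(x) \;=\; \frac{c}{x}, \qquad x < x_{\min},
\]
\[
  m(x) \;=\; a\, e^{b x}
  \quad\text{or}\quad
  m(x) \;=\; a\, e^{b x + q x^{2}}, \qquad x > x_{\min},
\]
% where x_min ~ 8 years is the age of minimal mortality and q is small and positive.
```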
A model of objective weighting for EIA.
Ying, L G; Liu, Y C
1995-06-01
In spite of progress achieved in the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not as yet, been properly solved. This paper presents an approach of objective weighting by using a procedure of P_ij principal component-factor analysis (P_ij PCFA), which suits specifically those parameters measured directly by physical scales. The P_ij PCFA weighting procedure reforms the conventional weighting practice in two aspects: first, the expert subjective judgment is replaced by the standardized measure P_ij as the original input of weight processing and, secondly, the principal component-factor analysis is introduced to approach the environmental parameters for their respective contributions to the totality of the regional ecosystem. Not only is the P_ij PCFA weighting logical in theoretical reasoning, it also suits practically all levels of professional routines in natural environmental assessment and impact analysis. Having been assured of objectivity and accuracy in the EIA case study of the Chuansha County in Shanghai, China, the P_ij PCFA weighting procedure has the potential to be applied in other geographical fields that need assigning weights to parameters that are measured by physical scales.
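The sketch below illustrates one generic principal-component weighting recipe in Python; it is not necessarily the exact P_ij PCFA procedure of the paper, and the impact matrix, its dimensions, and the loading-based weight formula are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix of standardized impact measures P_ij:
# rows = assessment units, columns = environmental parameters.
rng = np.random.default_rng(1)
P = rng.normal(size=(30, 6))

Z = StandardScaler().fit_transform(P)
pca = PCA()
pca.fit(Z)

# One common objective-weighting recipe: weight each parameter by the sum of
# its squared loadings across components, scaled by explained variance.
loadings = pca.components_                     # shape (n_components, n_params)
evr = pca.explained_variance_ratio_
scores = (loadings ** 2 * evr[:, None]).sum(axis=0)
weights = scores / scores.sum()
print("parameter weights:", weights.round(3))
```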
Muon g - 2 in the aligned two Higgs doublet model
Han, Tao; Kang, Sin Kyu; Sayre, Joshua
2016-02-16
In this paper, we study the Two-Higgs-Doublet Model with the aligned Yukawa sector (A2HDM) in light of the observed excess measured in the muon anomalous magnetic moment. We take into account the existing theoretical and experimental constraints with up-to-date values and demonstrate that a phenomenologically interesting region of parameter space exists. With a detailed parameter scan, we show a much larger region of viable parameter space in this model beyond the limiting case Type X 2HDM as obtained before. It features the existence of light scalar states with masses 3 GeV ≲ m_H ≲ 50 GeV, or 10 GeV ≲ m_A ≲ 130 GeV, with enhanced couplings to tau leptons. The charged Higgs boson is typically heavier, with 200 GeV ≲ m_H+ ≲ 630 GeV. The surviving parameter space is forced into the CP-conserving limit by EDM constraints. Some Standard Model observables may be significantly modified, including a possible new decay mode of the SM-like Higgs boson to four taus. Lastly, we comment on future measurements and direct searches for those effects at the LHC as tests of the model.
Spike shape analysis of electromyography for parkinsonian tremor evaluation.
Marusiak, Jarosław; Andrzejewska, Renata; Świercz, Dominika; Kisiel-Sajewicz, Katarzyna; Jaskólska, Anna; Jaskólski, Artur
2015-12-01
Standard electromyography (EMG) parameters have limited utility for evaluation of Parkinson disease (PD) tremor. Spike shape analysis (SSA) EMG parameters are more sensitive than standard EMG parameters for studying motor control mechanisms in healthy subjects. SSA of EMG has not been used to assess parkinsonian tremor. This study assessed the utility of SSA and standard time and frequency analysis for electromyographic evaluation of PD-related resting tremor. We analyzed 1-s periods of EMG recordings to detect nontremor and tremor signals in relaxed biceps brachii muscle of seven mild to moderate PD patients. SSA revealed higher mean spike amplitude, duration, and slope and lower mean spike frequency in tremor signals than in nontremor signals. Standard EMG parameters (root mean square, median, and mean frequency) did not show differences between the tremor and nontremor signals. SSA of EMG data is a sensitive method for parkinsonian tremor evaluation. © 2015 Wiley Periodicals, Inc.
The Impact of Setting the Standards of Health Promoting Hospitals on Hospital Indicators in Iran
Amiri, Mohammad; Khosravi, Ahmad; Riyahi, Leila
2016-01-01
Hospitals play a critical role in the health promotion of the society. This study aimed to determine the impact of establishing standards of health promoting hospitals on hospital indicators in Shahroud. This applied study was a quasi-experimental one which was conducted in 2013. Standards of health promoting hospitals were established as an intervention procedure in the Fatemiyeh hospital. Parameters of health promoting hospitals were compared in intervention and control hospitals before and after the intervention (6 months). The data were analyzed using chi-square and t-test. With the establishment of standards for health promoting hospitals, standard scores in the intervention and control hospitals were found to be 72.26 ± 4.1 and 16.26 ± 7.5, respectively. The t-test showed a significant difference between the mean scores of the hospitals under study (P = 0.001). The chi-square test also showed a significant relationship between patient satisfaction before and after the intervention, with patients' satisfaction higher after the intervention (P = 0.001). It is difficult to comment on the short-term or long-term positive impacts of establishing standards of health promoting hospitals on all hospital indicators, but preliminary results show a positive impact of implementing the standards in the case hospital, which has led to the improvement of many indicators in the hospital. PMID:27959930
Stieltjes, Bram; Weikert, Thomas; Gatidis, Sergios; Wiese, Mark; Wild, Damian; Lardinois, Didier
2017-01-01
The minimum apparent diffusion coefficient (ADCmin) derived from diffusion-weighted MRI (DW-MRI) and the maximum standardized uptake value (SUVmax) of FDG-PET are markers of aggressiveness in lung cancer. The numeric correlation of the two parameters has been extensively studied, but their spatial interplay is not well understood. After FDG-PET and DW-MRI coregistration, values and location of ADCmin- and SUVmax-voxels were analyzed. The upper limit of the 95% confidence interval for registration accuracy of sequential PET/MRI was 12 mm, and the mean distance (D) between ADCmin- and SUVmax-voxels was 14.0 mm (average of two readers). Spatial mismatch (D > 12 mm) between ADCmin and SUVmax was found in 9/25 patients. A considerable number of mismatch cases (65%) was also seen in a control group that underwent simultaneous PET/MRI. In the entire patient cohort, no statistically significant correlation between SUVmax and ADCmin was seen, while a moderate negative linear relationship (r = −0.5) between SUVmax and ADCmin was observed in tumors with a spatial match (D ≤ 12 mm). In conclusion, spatial mismatch between ADCmin and SUVmax is found in a considerable percentage of patients. The spatial connection of the two parameters SUVmax and ADCmin has a crucial influence on their numeric correlation. PMID:29391862
Information content in reflected signals during GPS Radio Occultation observations
NASA Astrophysics Data System (ADS)
Aparicio, Josep M.; Cardellach, Estel; Rodríguez, Hilda
2018-04-01
The possibility of extracting useful information about the state of the lower troposphere from the surface reflections that are often detected during GPS radio occultations (GPSRO) is explored. The clarity of the reflection is quantified, and can be related to properties of the surface and the low troposphere. The reflected signal is often clear enough to show good phase coherence, and can be tracked and processed as an extension of direct non-reflected GPSRO atmospheric profiles. A profile of bending angle vs. impact parameter can be obtained for these reflected signals, characterized by impact parameters that are below the apparent horizon, and that is a continuation at low altitude of the standard non-reflected bending angle profile. If there were no reflection, these would correspond to tangent altitudes below the local surface, and in particular below the local mean sea level. A forward operator is presented, for the evaluation of the bending angle of reflected GPSRO signals, given atmospheric properties as described by a numerical weather prediction system. The operator is an extension, at lower impact parameters, of standard bending angle operators, and reproduces both the direct and reflected sections of the measured profile. It can be applied to the assimilation of the reflected section of the profile as supplementary data to the direct section. Although the principle is also applicable over land, this paper is focused on ocean cases, where the topographic height of the reflecting surface, the sea level, is better known a priori.
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually a weighted chi^2 or a Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain a set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method has been implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases. We will focus on the application to Chandra data showing both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
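Sherpa's own implementations are not reproduced here, but the same three families of optimizers are available in SciPy; the sketch below applies them to a toy absorbed-power-law-like model with invented data, as a rough analogue of the fitting problem described.

```python
import numpy as np
from scipy.optimize import least_squares, minimize, differential_evolution

rng = np.random.default_rng(42)
x = np.linspace(0.5, 8.0, 200)                       # energy grid (arbitrary units)
true = (1.2, 2.0, 0.3)                               # norm, photon index, absorption

def model(p, x):
    norm, gamma, nh = p
    return norm * x ** (-gamma) * np.exp(-nh / x)    # toy "absorbed power law"

y = model(true, x) + rng.normal(0.0, 0.02, x.size)
resid = lambda p: model(p, x) - y
chi2 = lambda p: np.sum(resid(p) ** 2)

# Local methods: Levenberg-Marquardt (unbounded) and Nelder-Mead simplex.
fit_lm = least_squares(resid, x0=[1.0, 1.5, 0.1], method="lm")
fit_nm = minimize(chi2, x0=[1.0, 1.5, 0.1], method="Nelder-Mead")

# Global method: differential evolution (Storn & Price style) over box bounds.
fit_de = differential_evolution(chi2, bounds=[(0.1, 5), (0.5, 4), (0.0, 2)])

print(fit_lm.x.round(3), fit_nm.x.round(3), fit_de.x.round(3))
```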
Particle Dark Matter constraints: the effect of Galactic uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benito, Maria; Bernal, Nicolás; Iocco, Fabio
2017-02-01
Collider, space, and Earth based experiments are now able to probe several extensions of the Standard Model of particle physics which provide viable dark matter candidates. Direct and indirect dark matter searches rely on inputs of astrophysical nature, such as the local dark matter density or the shape of the dark matter density profile in the target object. The determination of these quantities is highly affected by astrophysical uncertainties. The latter, especially those for our own Galaxy, are ill-known, and often not fully accounted for when analyzing the phenomenology of particle physics models. In this paper we present a systematic, quantitative estimate of how astrophysical uncertainties on Galactic quantities (such as the local galactocentric distance, circular velocity, or the morphology of the stellar disk and bulge) propagate to the determination of the phenomenology of particle physics models, thus eventually affecting the determination of new physics parameters. We present results in the context of two specific extensions of the Standard Model (the Singlet Scalar and the Inert Doublet) that we adopt as case studies for their simplicity in illustrating the magnitude and impact of such uncertainties on the parameter space of the particle physics model itself. Our findings point toward very relevant effects of current Galactic uncertainties on the determination of particle physics parameters, and urge a systematic estimate of such uncertainties in more complex scenarios, in order to achieve constraints on the determination of new physics that realistically include all known uncertainties.
Bernard, P-L; Amato, M; Degache, F; Edouard, P; Ramdani, S; Blain, H; Calmels, P; Codine, P
2012-05-01
Although peak torque has shown acceptable reproducibility, this may not be the case with two other often used parameters: time to peak torque (TPT) and the angle of peak torque (APT). These two parameters could be of use for the characterization of muscular adaptations in athletes. The isokinetic performance of the knee extensors and flexors in both limbs was measured in 29 male athletes. The experimental protocol consisted of three consecutive identical paradigms separated by 45 min breaks. Each test consisted of four maximal concentric efforts performed at 60 and 180°/s. Reproducibility was quantified by the standard error of measurement (SEM), the coefficient of variation (CV) and by means of intra-class correlation coefficients (ICCs), with the calculation of six forms of ICCs. Using the ICC as the indicator of reproducibility, the correlations for TPT of both limbs showed a range of 0.51-0.65 in extension and 0.50-0.63 in flexion. For APT, the values were 0.46-0.60 and 0.51-0.81, respectively. In addition, the calculated SEM and CV scores confirmed the low level of absolute reproducibility. Due to their low reproducibility, neither TPT nor APT can serve as independent isokinetic parameters of knee flexor and extensor performance, and they should not be used for the characterization of muscular adaptations in athletes. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
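For readers unfamiliar with the reproducibility indices used, the sketch below computes one consistency form of the ICC together with the SEM and a within-subject CV from an n-subjects-by-k-trials matrix; the study itself computed six ICC forms, and the synthetic data here are purely illustrative.

```python
import numpy as np

def reproducibility(Y):
    """Relative and absolute reproducibility for an (n subjects x k trials) matrix.

    Returns ICC(3,1) (two-way, consistency), the SEM, and a within-subject CV (%).
    """
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)     # between subjects
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)     # between trials
    ss_tot = np.sum((Y - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    sem = np.sqrt(ms_err)                                   # absolute reliability
    cv = 100.0 * np.mean(Y.std(axis=1, ddof=1) / Y.mean(axis=1))
    return icc, sem, cv

# Synthetic example: 29 athletes, 3 repeated tests of a TPT-like quantity.
rng = np.random.default_rng(3)
subject_level = rng.normal(100.0, 15.0, size=(29, 1))
Y = subject_level + rng.normal(0.0, 8.0, size=(29, 3))
print(reproducibility(Y))
```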
Dirscherl, Thomas; Rickhey, Mark; Bogner, Ludwig
2012-02-01
A biologically adaptive radiation treatment method to maximize the TCP is shown. Functional imaging is used to acquire a heterogeneous dose prescription in terms of Dose Painting by Numbers and to create a patient-specific IMRT plan. Adapted from a method for selective dose escalation under the guidance of spatial biology distribution, a model was developed which translates heterogeneously distributed radiobiological parameters into voxelwise dose prescriptions. Using the example of a prostate case with (18)F-choline PET imaging, different sets of reported parameter values were examined with respect to their resulting range of dose values. Furthermore, the influence of each parameter of the linear-quadratic model was investigated. A correlation between the PET signal and proliferation as well as cell density was assumed. Using our in-house treatment planning software Direct Monte Carlo Optimization (DMCO), a treatment plan based on the obtained dose prescription was generated. Gafchromic EBT films were irradiated for evaluation. When a TCP of 95% was aimed at, the maximal dose in a voxel of the prescription exceeded 100 Gy for most considered parameter sets. One of the parameter sets resulted in a dose range of 87.1 Gy to 99.3 Gy, yielding a TCP of 94.7%, and was investigated more closely. The TCP of the plan decreased to 73.5% after optimization based on that prescription. The dose difference histogram of optimized and prescribed dose revealed a mean of -1.64 Gy and a standard deviation of 4.02 Gy. Film verification showed a reasonable agreement of planned and delivered dose. If the distribution of radiobiological parameters within a tumor is known, this model can be used to create a dose-painting-by-numbers plan which maximizes the TCP. It could be shown that such a heterogeneous dose distribution is technically feasible. Copyright © 2012. Published by Elsevier GmbH.
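A common Poisson/linear-quadratic form of the tumour control probability used in dose painting by numbers is shown below for reference; the paper's exact parameterization may differ, and the symbols are ours.

```latex
% Poisson/LQ tumour control probability over voxels i with total dose D_i
% delivered in fractions of size d_i, clonogen density rho_i and volume v_i:
\[
  \mathrm{TCP} \;=\; \prod_i \exp\!\Big[-\rho_i\, v_i\,
      e^{-\alpha_i D_i \,-\, \beta_i d_i D_i}\Big].
\]
% Inverting this relation voxel by voxel for a prescribed TCP turns a map of
% radiobiological parameters (alpha_i, beta_i, rho_i) into a dose prescription D_i.
```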
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques governed under the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. The user of the T-Method is required to clearly understand the trend of the population data, since the method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and the classical methods then break down. There exist robust parameter estimates that provide satisfactory results when the data contain outliers as well as when the data are free of them. Among these are the robust location and scale estimators called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method can feasibly help to enhance the accuracy of the T-Method as well as to analyse the robustness of the T-Method itself. However, the result of the higher-sample-size case study shows that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, whereas HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared to the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, the T-Method performs better, while for a higher sample size with extreme outliers the T-Method also shows better prediction compared to the others. For the case studies conducted in this research, normalization using the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB, or the normal mean and standard deviation, into it, since they provide only a minimal change in the error percentages. Normalization using the T-Method is still considered to carry a lower risk towards the effect of outliers.
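Minimal implementations of the two robust estimators named above are sketched below; the normal-consistency factor of about 1.048 for the Shamos scale estimator is a standard value, the O(n²) pairwise construction is acceptable for the small samples the T-Method targets, and the example data are invented.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Robust location: median of all pairwise Walsh averages (x_i + x_j)/2, i <= j."""
    x = np.asarray(x, dtype=float)
    pairs = [(xi + xj) / 2.0 for xi, xj in combinations(x, 2)]
    return float(np.median(np.concatenate([x, pairs])))

def shamos_bickel(x):
    """Robust scale: median of pairwise absolute differences, rescaled so that it
    estimates the standard deviation for normal data (factor ~1.0483)."""
    x = np.asarray(x, dtype=float)
    diffs = [abs(xi - xj) for xi, xj in combinations(x, 2)]
    return 1.0483 * float(np.median(diffs))

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])    # one extreme outlier
print(np.mean(data), np.std(data, ddof=1))              # classical: pulled by outlier
print(hodges_lehmann(data), shamos_bickel(data))        # robust: barely affected
```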
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
SPheno 3.1: extensions including flavour, CP-phases and models beyond the MSSM
NASA Astrophysics Data System (ADS)
Porod, W.; Staub, F.
2012-11-01
We describe recent extensions of the program SPheno including flavour aspects, CP-phases, R-parity violation and low energy observables. In case of flavour mixing all masses of supersymmetric particles are calculated including the complete flavour structure and all possible CP-phases at the 1-loop level. We give details on implemented seesaw models, low energy observables and the corresponding extension of the SUSY Les Houches Accord. Moreover, we comment on the possibilities to include MSSM extensions in SPheno. Catalogue identifier: ADRV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRV_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154062 No. of bytes in distributed program, including test data, etc.: 1336037 Distribution format: tar.gz Programming language: Fortran95. Computer: PC running under Linux, should run in every Unix environment. Operating system: Linux, Unix. Classification: 11.6. Catalogue identifier of previous version: ADRV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 153 (2003) 275 Does the new version supersede the previous version?: Yes Nature of problem: The first issue is the determination of the masses and couplings of supersymmetric particles in various supersymmetric models, the R-parity conserved MSSM with generation mixing and including CP-violating phases, various seesaw extensions of the MSSM and the MSSM with bilinear R-parity breaking. Low energy data on Standard Model fermion masses, gauge couplings and electroweak gauge boson masses serve as constraints. Radiative corrections from supersymmetric particles to these inputs must be calculated. Theoretical constraints on the soft SUSY breaking parameters from a high scale theory are imposed and the parameters at the electroweak scale are obtained from the high scale parameters by evaluating the corresponding renormalisation group equations. These parameters must be consistent with the requirement of correct electroweak symmetry breaking. The second issue is to use the obtained masses and couplings for calculating decay widths and branching ratios of supersymmetric particles as well as the cross sections for these particles in electron-positron annihilation. The third issue is to calculate low energy constraints in the B-meson sector such as BR(b → sγ) and ΔM_{B_s}, rare lepton decays such as BR(μ → eγ), the SUSY contributions to anomalous magnetic moments and electric dipole moments of leptons, the SUSY contributions to the ρ parameter as well as lepton flavour violating Z decays. Solution method: The renormalisation connecting a high scale and the electroweak scale is calculated by the Runge-Kutta method. Iteration provides a solution consistent with the multi-boundary conditions. In case of three-body decays and for the calculation of initial state radiation Gaussian quadrature is used for the numerical solution of the integrals. Reasons for new version: Inclusion of new models as well as additional observables. Moreover, a new standard for data transfer had been established, which is now supported. Summary of revisions: The already existing models have been extended to include also CP-violation and flavour mixing. The data transfer is done using the so-called SLHA2 standard. In addition new models have been included: all three types of seesaw models as well as bilinear R-parity violation. 
Moreover, additional observables are calculated: branching ratios for flavour violating lepton decays, EDMs of leptons and of the neutron, CP-violating mass difference in the B-meson sector and branching ratios for flavour violating b-quark decays. Restrictions: In case of R-parity violation the cross sections are not calculated. Running time: 0.2 seconds on an Intel(R) Core(TM)2 Duo CPU T9900 with 3.06 GHz
NASA Astrophysics Data System (ADS)
Zhang, Ying; Bi, Peng; Hiller, Janet
2008-01-01
This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggests that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
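Two of the four model families compared in the study can be sketched as follows with statsmodels on synthetic weekly counts; the two-week temperature lag, the SARIMA orders, and all data are illustrative assumptions rather than the specifications selected in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
weeks = pd.date_range("1990-01-07", periods=520, freq="W")
temp = 17 + 8 * np.sin(2 * np.pi * np.arange(520) / 52) + rng.normal(0, 2, 520)
rain = rng.gamma(2.0, 5.0, 520)

# Synthetic counts: cases respond to temperature two weeks earlier.
temp_lag2 = pd.Series(temp).shift(2).bfill().to_numpy()
lam = np.exp(1.0 + 0.05 * temp_lag2 - 0.005 * rain)
cases = rng.poisson(lam)

X = sm.add_constant(np.column_stack([temp_lag2, rain]))

# (1) Standard Poisson regression.
poisson_fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()

# (2) Seasonal ARIMA with the same climatic covariates as exogenous regressors
#     (Gaussian-error approximation of the count series, kept simple here).
sarima_fit = SARIMAX(cases, exog=X[:, 1:], order=(1, 0, 1),
                     seasonal_order=(1, 0, 1, 52)).fit(disp=False)

print(poisson_fit.params.round(3))
print(sarima_fit.params.round(3))
```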
Toor, Gurpal S; Han, Lu; Stanley, Craig D
2013-05-01
Our objective was to evaluate changes in water quality parameters during 1983-2007 in a subtropical drinking water reservoir (area: 7 km(2)) located in Lake Manatee Watershed (area: 338 km(2)) in Florida, USA. Most water quality parameters (color, turbidity, Secchi depth, pH, EC, dissolved oxygen, total alkalinity, cations, anions, and lead) were below the Florida potable water standards. Concentrations of copper exceeded the potable water standard of <30 μg l(-1) in about half of the samples. About 75 % of total N in lake was organic N (0.93 mg l(-1)) with the remainder (25 %) as inorganic N (NH3-N: 0.19, NO3-N: 0.17 mg l(-1)), while 86 % of total P was orthophosphate. Mean total N/P was <6:1 indicating N limitation in the lake. Mean monthly concentration of chlorophyll-a was much lower than the EPA water quality threshold of 20 μg l(-1). Concentrations of total N showed significant increase from 1983 to 1994 and a decrease from 1997 to 2007. Total P showed significant increase during 1983-2007. Mean concentrations of total N (n = 215; 1.24 mg l(-1)) were lower, and total P (n = 286; 0.26 mg l(-1)) was much higher than the EPA numeric criteria of 1.27 mg total N l(-1) and 0.05 mg total P l(-1) for Florida's colored lakes, respectively. Seasonal trends were observed for many water quality parameters where concentrations were typically elevated during wet months (June-September). Results suggest that reducing transport of organic N may be one potential option to protect water quality in this drinking water reservoir.
Characteristics of nocturnal coastal boundary layer in Ahtopol based on averaged SODAR profiles
NASA Astrophysics Data System (ADS)
Barantiev, Damyan; Batchvarova, Ekaterina; Novitzky, Mikhail
2014-05-01
Ground-based remote sensing instruments allow studying the wind regime and the turbulent characteristics of the atmosphere with height, achieving new knowledge and solving practical problems such as air quality assessments, mesoscale model evaluation with high-resolution data, characterization of the exchange processes between the surface and the atmosphere, the climate comfort conditions and the risk of extreme events, etc. A very important parameter in such studies is the height of the atmospheric boundary layer. Acoustic remote sensing data of the coastal atmospheric boundary layer were explored based on over 4 years of continuous measurements at the meteorological observatory of Ahtopol (Bulgarian Southern Black Sea Coast) under a Bulgarian-Russian scientific agreement. Profiles of 12 parameters from a mid-range acoustic sounding instrument of type SCINTEC MFAS were derived and averaged up to about 600 m after filtering based on wind direction (land or sea type of night flows). From the whole investigated period of 1454 days of 10-minute resolution SODAR data, 2296 profiles represented night marine air masses and 1975 profiles represented the night flow from land during the months May to September. Graphics of averaged profiles of the 12 SODAR output parameters with different availability of data in height are analyzed for both cases. A marine boundary-layer height of about 300 m is identified in the profiles of the standard deviation of vertical wind speed (σw), Turbulent Kinetic Energy (TKE) and eddy dissipation rate (EDR). A nocturnal boundary-layer height of about 420 m was identified from the profiles of the same parameters under flow-from-land conditions. In addition, the Buoyancy Production (BP = σw³/z) profiles were calculated from the standard deviation of the vertical wind speed and the height z above ground.
NASA Astrophysics Data System (ADS)
Wright, David; Thyer, Mark; Westra, Seth
2015-04-01
Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions on the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision on whether to (1) remove/retain the calibration data; (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
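The analytical diagnostic is easiest to see in its original linear-regression home; the sketch below computes Cook's distance for a synthetic stage-discharge style data set with statsmodels, and applying the idea to a rating curve or GR4J calibration requires the adaptations discussed above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Synthetic stage-discharge style data in log space: log(Q) ~ a + b*log(h).
log_h = np.sort(rng.uniform(0.1, 2.0, 40))
log_q = 1.0 + 2.5 * log_h + rng.normal(0, 0.1, 40)
log_q[-1] += 1.0                       # one deliberately corrupted high-stage point

X = sm.add_constant(log_h)
fit = sm.OLS(log_q, X).fit()

# Cook's distance for every observation: a large D_i means point i strongly
# influences the fitted parameters (rule of thumb: compare against 4/n).
cooks_d = fit.get_influence().cooks_distance[0]
threshold = 4.0 / len(log_q)
flagged = np.where(cooks_d > threshold)[0]
print("flagged observations:", flagged, cooks_d[flagged].round(3))
```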
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S; Gardner, S; Doemer, A
Purpose: Investigate the use of standardized non-coplanar arcs to improve plan quality in lung Stereotactic Body Radiation Therapy (SBRT) VMAT planning. Methods: VMAT planning was performed for 9 patients previously treated with SBRT for peripheral lung tumors (tumor size: 12.7 cc to 32.5 cc). For each patient, 7 VMAT plans (couch rotation values: 0, 5, 10, 15, 20, 25, and 30 deg) were generated; the coplanar plans were pushed to meet the RTOG 0915 constraints and each non-coplanar plan utilized the same optimization constraints. The following plan dose metrics were used (taken from RTOG 0915): D-2cm, the maximum dose at 2 cm from the PTV; conformality index (CI); gradient index (GI); lung volume receiving 5 Gy (V5); and lung volume receiving 20 Gy (V20). The couch collision clearance was checked for each plan through a dry run using the couch position from the patient's treatment. Results: Of the 9 cases, one coplanar plan failed to meet two protocol guidelines (both the gradient index and the D-2cm parameter), and an additional plan failed the D-2cm parameter. When introducing at least a 5 degree couch rotation, all plans met the protocol guidelines. The largest feasible couch angle available was 15 to 20 degrees due to gantry collision issues. Non-coplanar plans resulted in the following average (standard deviation) reductions: GI by 7.3% (3.7%); lung V20 by 11.1% (3.2%); D-2cm by 12.7% (3.9%). The CI was unchanged (−0.3%±0.6%), and lung V5 increased (3.8%±8.2%). Conclusion: The use of couch rotations as little as 5 degrees allows for plan quality that will meet RTOG 0915 constraints while reducing D-2cm, GI, and lung V20. Using default couch rotations while planning SBRT cases will allow for more efficient planning with the stated goal of meeting RTOG 0915 criteria for all clinical cases. Gantry clearance checks in the treatment room may be necessary to ensure safe treatments for larger couch rotation values.
NASA Astrophysics Data System (ADS)
Djemil, Wafa; Hannouche, Mani; Belksier, Mohamed Salah
2018-05-01
The region of our study has two treatment plants, the Beni Messous and Baraki polluted water treatment plants (PWTP), located to the west and south respectively, which provide comprehensive treatment of the waste water in the region. The aim of our work is to highlight the possibility of reusing the treated waste water from the two waste water treatment plants (WWTPs) in agriculture. This has been achieved by a comparative study of physicochemical parameters against the WHO and FAO standards recommended for irrigation. Apart from the Baraki WWTP's values of N-NH4, BOD5, COD and total chromium for long-term irrigation, which exceed the standards, all other parameters fall within the recommended standards. It was therefore concluded that the treated waste water from the Beni Messous WWTP is more suitable for irrigation than Baraki's. The contents of the heavy metals Cr, Pb and Cd recorded in the two treatment plants do not constitute a danger for the environment. The waste water undergoes different stages of treatment to become purified water receivable by the natural environment without environmental impact and to satisfy the strictest ecological constraint. Given the needs and the deficit of water resources in Algeria, the climatic context, increasing urbanization and water stress, some recommendations have been formulated to improve the environmental impact.
77 FR 54384 - Nonconformance Penalties for On-Highway Heavy-Duty Diesel Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
...EPA is taking final action to establish nonconformance penalties (NCPs) for manufacturers of heavy heavy-duty diesel engines (HHDDE) in model years 2012 and later for emissions of oxides of nitrogen (NOX) because we have found the criteria for NCPs and the Clean Air Act have been met. The NOX standards to which these NCPs apply were established by a rule published on January 18, 2001. In general, NCPs allow a manufacturer of heavy-duty engines (HDEs) whose engines do not conform to applicable emission standards, but do not exceed a designated upper limit, to be issued a certificate of conformity upon payment of a monetary penalty to the United States Government. The upper limit associated with these NCPs is 0.50 grams of NOX per brake horsepower-hour (g/bhp-hr). This Final Rule specifies certain parameters that are entered into the preexisting penalty formulas along with the emissions of the engine and the incorporation of other factors to determine the amount a manufacturer must pay. Key parameters that determine the NCP a manufacturer must pay are EPA's estimated cost of compliance for a near worst-case engine and the degree to which the engine exceeds the emission standard (as measured from production engines). EPA proposed NCPs for medium heavy duty diesel engines. However, EPA is not taking final action with regard to NCPs for these engines at this time because EPA has not completed its review of the data and comments regarding these engines.
2013-01-01
Background Melanoides tuberculatus (Müller, 1774) (Thiaridae), an introduced gastropod mollusc with a wide geographical distribution in the Neotropics, is the intermediate host of the trematode Centrocestus formosanus (Nishigori, 1924) (Heterophyidae). This parasite is considered to be pathogenic to humans. The aim of the present work was to evaluate the locomotory activity of uninfected M. tuberculatus compared with those naturally infected with C. formosanus. Findings The locomotory activity of each mollusc was recorded using an image analysis biomonitoring system, Videomex-V ®, to evaluate and quantify the parameters of ‘Stereotypic’ and ‘Resting time’. The Generalized Estimating Equation analysis of locomotory activity of M. tuberculatus infected with C. formosanus revealed significant differences compared with uninfected molluscs for the parameters ‘Stereotypic time’ and ‘Resting time’ with a reduction of movement. The variations in the values of the monitoring intervals recorded showed a significant difference for the infected molluscs in the case of Stereotypic time, with an irregular locomotory activity pattern, as compared to that of uninfected molluscs. The analysis of the standard length of all molluscs did not exhibit any correlation with locomotory activity, showing that C. formosanus is able to alter the locomotory activity of its snail host regardless of the standard length. Conclusions The trematode C. formosanus affects the locomotory activity of the mollusc M. tuberculatus by reducing its movement and causing it to exhibit an irregular pattern of activity, both of which are independent of the snail's standard length. PMID:23574763
Study of Material Used in Nanotechnology for the Recycling of Industrial Waste Water
NASA Astrophysics Data System (ADS)
Larbi, L.; Fertikh, N.; Toubal, A.
The objective of our study is to recycle the industrial waste water of an industrial complex after treatment by the MBR (membrane bioreactor) bioprocess. In order to apply this bioprocess, the quality of the water in question was first studied. To characterize this industrial waste water, a series of physicochemical analyses was carried out according to standardized directives and methods. To follow up the water quality against the regulatory requirements for discharge of this industrial waste water, a study was done through permanent monitoring of the following relevant parameters (P): the flow, the potential of hydrogen (pH), the total suspended solids (TSS), the turbidity (Turb), the chemical oxygen demand (COD), the biochemical oxygen demand (BOD), the Kjeldahl total nitrogen (KTN) and ammonia (NH4+), the total phosphorus (Ptot), the fluorine (F), the oils (O), the fats (F) and the phenols (Ph). Based on the collected information, the sampling rates at which the quality control was done were established, the selected analytical methods were validated by control charts, and the number of analysis tests was determined by the Cochran test. The results of the quality control show that some discharge water contents are not within the Algerian standards; in our case, however, the objective is to bring these industrial water parameters to standard so as to recycle the water. The MBR waste water treatment process is being studied, starting with the experimental characterization of the reactor and the selected membrane.
Cytological Evaluation of Thyroid Lesions by Nuclear Morphology and Nuclear Morphometry.
Yashaswini, R; Suresh, T N; Sagayaraj, A
2017-01-01
Fine needle aspiration (FNA) of the thyroid gland is an effective diagnostic method. The Bethesda system for reporting thyroid cytopathology classifies lesions into six categories and gives an implied risk of malignancy and a management protocol for each category. Though the system gives specific criteria, a diagnostic dilemma still exists. Using nuclear morphometry, we can quantify a number of parameters, such as those related to nuclear size and shape. The evaluation of nuclear morphometry is not well established in thyroid cytology. The aims were to classify thyroid lesions on fine needle aspiration cytology (FNAC) using the Bethesda system and to evaluate the significance of nuclear parameters in improving the prediction of thyroid malignancy. In the present study, 120 FNAC cases of thyroid lesions with histological diagnosis were included. Computerized nuclear morphometry was done on 81 cases which had confirmed cytohistological correlation, using Aperio computer software. One hundred nuclei from each case were outlined and eight nuclear parameters were analyzed. Thyroid lesions were more common in females, with an M:F ratio of 1:5, and occurred most commonly at 40-60 years. Under the Bethesda system, 73 (60.83%) were category II, 14 (11.6%) were category III, 3 (2.5%) were category IV, 8 (6.6%) were category V, and 22 (18.3%) were category VI, which were malignant on histopathological correlation. The sensitivity, specificity, and diagnostic accuracy of the Bethesda reporting system were 62.5, 84.38, and 74.16%, respectively. Minimal nuclear diameter, maximal nuclear diameter, nuclear perimeter, and nuclear area were higher in the malignant group compared to the nonneoplastic and benign groups. The Bethesda system is a useful standardized system of reporting thyroid cytopathology. It gives an implied risk of malignancy. Nuclear morphometry by computerized image analysis can be utilized as an additional diagnostic tool.
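As a rough illustration of how size and shape parameters of the kind listed can be extracted from a segmented nucleus, the sketch below measures area, perimeter, and the two ellipse diameters of a synthetic binary mask with OpenCV; the actual morphometry pipeline (Aperio software here) is different, and the mask is invented.

```python
import cv2
import numpy as np

# Synthetic binary mask standing in for one segmented nucleus.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(mask, (100, 100), (40, 25), 30, 0, 360, 255, thickness=-1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
nucleus = max(contours, key=cv2.contourArea)

area = cv2.contourArea(nucleus)                    # nuclear area (px^2)
perimeter = cv2.arcLength(nucleus, True)           # nuclear perimeter (px)
(_, _), (ax1, ax2), _ = cv2.fitEllipse(nucleus)    # fitted-ellipse axis lengths (px)
d_min, d_max = sorted((ax1, ax2))                  # minimal / maximal nuclear diameter
circularity = 4 * np.pi * area / perimeter ** 2    # shape factor in (0, 1]

print(area, perimeter, round(d_min, 1), round(d_max, 1), round(circularity, 3))
```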
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
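The original work used the Fortran routine VA02A and treated the standard masses themselves as fitted parameters; a modern analogue for fitting when the abscissa values carry known errors is errors-in-variables regression, sketched below with scipy.odr on made-up calibration numbers.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(5)

# Made-up calibration data: detector response vs. standard mass (mg),
# with 0.2% mass uncertainty and a few percent response uncertainty.
mass = np.linspace(0.1, 1.0, 10)
response = 250.0 * mass ** 0.97 + rng.normal(0, 2.0, mass.size)
sx = 0.002 * mass
sy = 0.02 * response

def calib(beta, m):
    """Simple power-law calibration curve: response = a * m**b."""
    a, b = beta
    return a * m ** b

data = odr.RealData(mass, response, sx=sx, sy=sy)
fit = odr.ODR(data, odr.Model(calib), beta0=[200.0, 1.0]).run()
print(fit.beta, fit.sd_beta)          # best-fit parameters and their errors
```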
Micro- and macromechanics of fracture of structural elements
NASA Astrophysics Data System (ADS)
Zavoychinskaya, E. B.
2012-05-01
A mathematical model for the description of bulk microfracture processes in metals, which are understood as nucleation and coalescence of submicrocracks, microcracks, and short nonpropagating microcracks, and of brittle macrofracture processes in metals is presented. This model takes into account the laws of formation and propagation of short propagating, medium, and significant microcracks. The basic notions of this model are the reduced length of cracks and the probability of micro- and macrofracture. The model is based on the mechanical parameters of metal strength and fracture, which are studied experimentally. The expressions for determining the probability in the case of one-dimensional symmetric loading are given. The formulas for determining the threshold number of cycles at the beginning of crack formation are obtained for cracks of each type. For the first time, the data on standard parameters of fatigue strength were used to construct the fatigue curve of metals and alloys for macrocracks.
X-Ray diffraction on large single crystals using a powder diffractometer
Jesche, A.; Fix, M.; Kreyssig, A.; ...
2016-06-16
Information on the lattice parameter of single crystals with known crystallographic structure allows for estimations of sample quality and composition. In many cases it is sufficient to determine one lattice parameter or the lattice spacing along a certain, high-symmetry direction, e.g. in order to determine the composition in a substitution series by taking advantage of Vegard's rule. Here we present a guide to accurate measurements of single crystals with dimensions ranging from 200 μm up to several millimeters using a standard powder diffractometer in Bragg-Brentano geometry. The correction of the error introduced by the sample height and the optimization of the alignment are discussed in detail. Finally, in particular for single crystals with a plate-like habit, the described procedure allows for measurement of the lattice spacings normal to the plates with high accuracy on a timescale of minutes.
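A minimal sketch of the kind of sample-height correction discussed above, assuming the commonly quoted Bragg-Brentano displacement relation Δ(2θ) ≈ −2·s·cos(θ)/R; the wavelength, reflection, offset and goniometer radius in the example are illustrative, not values from the paper.

```python
import numpy as np

def corrected_d_spacing(two_theta_deg, wavelength, displacement, radius):
    """Correct a measured Bragg angle for specimen-height displacement in
    Bragg-Brentano geometry and return the lattice spacing.

    Assumes the standard displacement relation delta(2theta) = -2*s*cos(theta)/R
    (radians), where s is the height offset and R the goniometer radius."""
    theta = np.radians(two_theta_deg) / 2.0
    delta_two_theta = -2.0 * displacement * np.cos(theta) / radius   # radians
    two_theta_corr = np.radians(two_theta_deg) - delta_two_theta
    return wavelength / (2.0 * np.sin(two_theta_corr / 2.0))         # Bragg's law

# Example: Cu K-alpha1, a hypothetical reflection at 2theta = 47.3 deg,
# 0.1 mm sample-height error on a 240 mm radius diffractometer
print(corrected_d_spacing(47.3, 1.5406, 0.1, 240.0))
```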
The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design
NASA Astrophysics Data System (ADS)
Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas
2011-03-01
The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets. For 3D it has received little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detect planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel approach to design the accumulator focusing on achieving the same size for each cell and compare it to existing designs.
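A brute-force sketch of a 3D Hough transform for planes may help make the accumulator issue concrete: with a regular (θ, φ, ρ) grid, directions near the poles are over-represented, which is exactly the uneven sampling the authors address. The grid sizes and voting scheme below are illustrative choices, not the authors' design.

```python
import numpy as np

def hough_planes(points, n_theta=45, n_phi=90, n_rho=100):
    """Brute-force 3D Hough transform for planes: every point votes for all
    (theta, phi, rho) cells consistent with rho = p . n(theta, phi).
    The regular (theta, phi) grid over-samples directions near the poles,
    which is the accumulator-design problem discussed in the abstract."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    rho_max = np.linalg.norm(points, axis=1).max()
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)

    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
            rho = points @ n                                 # signed plane distances
            k = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int), 0, n_rho - 1)
            np.add.at(acc, (i, j, k), 1)

    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], phis[j], (k + 0.5) / n_rho * 2 * rho_max - rho_max

# Example: noisy points on the plane z = 0.5
pts = np.random.rand(2000, 3)
pts[:, 2] = 0.5 + 0.01 * np.random.randn(2000)
print(hough_planes(pts))
```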
Multiple regression for physiological data analysis: the problem of multicollinearity.
Slinker, B K; Glantz, S A
1985-07-01
Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
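One common way to quantify the redundancy described above is the variance inflation factor (VIF); the sketch below computes it with ordinary least squares and is offered as a generic illustration, not one of the ad hoc mitigation procedures discussed in the paper.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on
    all other predictors. As a rough rule of thumb, VIFs much larger than ~10
    signal harmful multicollinearity."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Example: x2 changes nearly in parallel with x1, the situation described above
x1 = np.random.randn(200)
x2 = x1 + 0.05 * np.random.randn(200)
x3 = np.random.randn(200)
print(variance_inflation_factors(np.column_stack([x1, x2, x3])))
```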
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
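For readers unfamiliar with the quasi-optimality principle, the following sketch applies the plain (unmodified) rule to Tikhonov regularization: the parameter is chosen where consecutive regularized solutions on a geometric grid change the least. It is a toy illustration under assumed problem sizes, not the functional-strategy variants developed in the paper.

```python
import numpy as np

def quasi_optimality_tikhonov(A, y, alphas=None):
    """Heuristic quasi-optimality rule for Tikhonov regularization: pick the
    alpha_k minimizing ||x_{alpha_{k+1}} - x_{alpha_k}|| over a geometric grid."""
    if alphas is None:
        alphas = np.logspace(-8, 0, 40)
    AtA, Aty = A.T @ A, A.T @ y
    xs = [np.linalg.solve(AtA + a * np.eye(A.shape[1]), Aty) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k_star = int(np.argmin(diffs))
    return alphas[k_star], xs[k_star]

# Example: mildly ill-posed random problem with Gaussian noise
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40)) / np.sqrt(80)
x_true = rng.standard_normal(40)
y = A @ x_true + 0.01 * rng.standard_normal(80)
alpha, x_hat = quasi_optimality_tikhonov(A, y)
print(alpha, np.linalg.norm(x_hat - x_true))
```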
Search for neutral MSSM Higgs bosons at LEP
NASA Astrophysics Data System (ADS)
Schael, S.; Barate, R.; Brunelière, R.; de Bonis, I.; Decamp, D.; Goy, C.; Jézéquel, S.; Lees, J.-P.; Martin, F.; Merle, E.; Minard, M.-N.; Pietrzyk, B.; Trocmé, B.; Bravo, S.; Casado, M. P.; Chmeissani, M.; Crespo, J. M.; Fernandez, E.; Fernandez-Bosman, M.; Garrido, L.; Martinez, M.; Pacheco, A.; Ruiz, H.; Colaleo, A.; Creanza, D.; de Filippis, N.; de Palma, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Nuzzo, S.; Ranieri, A.; Raso, G.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Tricomi, A.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Abbaneo, D.; Barklow, T.; Buchmüller, O.; Cattaneo, M.; Clerbaux, B.; Drevermann, H.; Forty, R. W.; Frank, M.; Gianotti, F.; Hansen, J. B.; Harvey, J.; Hutchcroft, D. E.; Janot, P.; Jost, B.; Kado, M.; Mato, P.; Moutoussi, A.; Ranjard, F.; Rolandi, L.; Schlatter, D.; Teubert, F.; Valassi, A.; Videau, I.; Badaud, F.; Dessagne, S.; Falvard, A.; Fayolle, D.; Gay, P.; Jousset, J.; Michel, B.; Monteil, S.; Pallin, D.; Pascolo, J. M.; Perret, P.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Kraan, A. C.; Nilsson, B. S.; Kyriakis, A.; Markou, C.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Brient, J.-C.; Machefert, F.; Rougé, A.; Videau, H.; Ciulli, V.; Focardi, E.; Parrini, G.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bossi, F.; Capon, G.; Cerutti, F.; Chiarella, V.; Mannocchi, G.; Laurelli, P.; Mannocchi, G.; Murtas, G. P.; Passalacqua, L.; Kennedy, J.; Lynch, J. G.; Negus, P.; O'Shea, V.; Thompson, A. S.; Wasserbaech, S.; Cavanaugh, R.; Dhamotharan, S.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Putzer, A.; Stenzel, H.; Tittel, K.; Wunsch, M.; Beuselinck, R.; Cameron, W.; Davies, G.; Dornan, P. J.; Girone, M.; Marinelli, N.; Nowell, J.; Rutherford, S. A.; Sedgbeer, J. K.; Thompson, J. C.; White, R.; Ghete, V. M.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bouhova-Thacker, E.; Bowdery, C. K.; Clarke, D. P.; Ellis, G.; Finch, A. J.; Foster, F.; Hughes, G.; Jones, R. W. L.; Pearson, M. R.; Robertson, N. A.; Smizanska, M.; van der Aa, O.; Delaere, C.; Leibenguth, G.; Lemaitre, V.; Blumenschein, U.; Hölldorfer, F.; Jakobs, K.; Kayser, F.; Müller, A.-S.; Renk, B.; Sander, H.-G.; Schmeling, S.; Wachsmuth, H.; Zeitnitz, C.; Ziegler, T.; Bonissent, A.; Coyle, P.; Curtil, C.; Ealet, A.; Fouchez, D.; Payre, P.; Tilquin, A.; Ragusa, F.; David, A.; Dietl, H.; Ganis, G.; Hüttmann, K.; Lütjens, G.; Männer, W.; Moser, H.-G.; Settles, R.; Villegas, M.; Wolf, G.; Boucrot, J.; Callot, O.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, P.; Jacholkowska, A.; Serin, L.; Veillet, J.-J.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Foà, L.; Giammanco, A.; Giassi, A.; Ligabue, F.; Messineo, A.; Palla, F.; Sanguinetti, G.; Sciabà, A.; Sguazzoni, G.; Spagnolo, P.; Tenchini, R.; Venturi, A.; Verdini, P. G.; Awunor, O.; Blair, G. A.; Cowan, G.; Garcia-Bellido, A.; Green, M. G.; Medcalf, T.; Misiejuk, A.; Strong, J. A.; Teixeira-Dias, P.; Clifft, R. W.; Edgecock, T. R.; Norton, P. R.; Tomalin, I. R.; Ward, J. J.; Bloch-Devaux, B.; Boumediene, D.; Colas, P.; Fabbro, B.; Lançon, E.; Lemaire, M.-C.; Locci, E.; Perez, P.; Rander, J.; Tuchming, B.; Vallage, B.; Litke, A. M.; Taylor, G.; Booth, C. N.; Cartwright, S.; Combley, F.; Hodgson, P. N.; Lehto, M.; Thompson, L. F.; Böhrer, A.; Brandt, S.; Grupen, C.; Hess, J.; Ngac, A.; Prange, G.; Borean, C.; Giannini, G.; He, H.; Putz, J.; Rothberg, J.; Armstrong, S. R.; Berkelman, K.; Cranmer, K.; Ferguson, D. P. S.; Gao, Y.; González, S.; Hayes, O. 
J.; Hu, H.; Jin, S.; Kile, J.; McNamara, P. A., III; Nielsen, J.; Pan, Y. B.; von Wimmersperg-Toeller, J. H.; Wiedenmann, W.; Wu, J.; Wu, S. L.; Wu, X.; Zobernig, G.; Dissertori, G.; Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alderweireld, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Berntzon, L.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, P.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; Dalmau, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, P.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Herr, H.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Hultqvist, K.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Johansson, P. D.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; Mc Nulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, T. 
D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Rames, J.; Read, A.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Segar, A.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Taffard, A. C.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.; Achard, P.; Adriani, O.; Aguilar-Benitez, M.; Alcaraz, J.; Alemanni, G.; Allaby, J.; Aloisio, A.; Alviggi, M. G.; Anderhub, H.; Andreev, V. P.; Anselmo, F.; Arefiev, A.; Azemoon, T.; Aziz, T.; Bagnaia, P.; Bajo, A.; Baksay, G.; Baksay, L.; Baldew, S. V.; Banerjee, S.; Banerjee, Sw.; Barczyk, A.; Barillère, R.; Bartalini, P.; Basile, M.; Batalova, N.; Battiston, R.; Bay, A.; Becattini, F.; Becker, U.; Behner, F.; Bellucci, L.; Berbeco, R.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biglietti, M.; Biland, A.; Blaising, J. J.; Blyth, S. C.; Bobbink, G. J.; Böhm, A.; Boldizsar, L.; Borgia, B.; Bottai, S.; Bourilkov, D.; Bourquin, M.; Braccini, S.; Branson, J. G.; Brochu, F.; Burger, J. D.; Burger, W. J.; Cai, X. D.; Capell, M.; Cara Romeo, G.; Carlino, G.; Cartacci, A.; Casaus, J.; Cavallari, F.; Cavallo, N.; Cecchi, C.; Cerrada, M.; Chamizo, M.; Chang, Y. H.; Chemarin, M.; Chen, A.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chiefari, G.; Cifarelli, L.; Cindolo, F.; Clare, I.; Clare, R.; Coignet, G.; Colino, N.; Costantini, S.; de La Cruz, B.; Cucciarelli, S.; de Asmundis, R.; Déglon, P.; Debreczeni, J.; Degré, A.; Dehmelt, K.; Deiters, K.; Della Volpe, D.; Delmeire, E.; Denes, P.; Denotaristefani, F.; de Salvo, A.; Diemoz, M.; Dierckxsens, M.; Dionisi, C.; Dittmar, M.; Doria, A.; Dova, M. T.; Duchesneau, D.; Duda, M.; Echenard, B.; Eline, A.; El Hage, A.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Extermann, P.; Falagan, M. A.; Falciano, S.; Favara, A.; Fay, J.; Fedin, O.; Felcini, M.; Ferguson, T.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. H.; Fisher, W.; Forconi, G.; Freudenreich, K.; Furetta, C.; Galaktionov, Yu.; Ganguli, S. N.; Garcia-Abia, P.; Gataullin, M.; Gentile, S.; Giagu, S.; Gong, Z. F.; Grenier, G.; Grimm, O.; Gruenewald, M. W.; Guida, M.; Gupta, V. K.; Gurtu, A.; Gutay, L. J.; Haas, D.; Hatzifotiadou, D.; Hebbeker, T.; Hervé, A.; Hirschfelder, J.; Hofer, H.; Hohlmann, M.; Holzner, G.; Hou, S. R.; Hu, J.; Jin, B. N.; Jindal, P.; Jones, L. 
W.; de Jong, P.; Josa-Mutuberría, I.; Kaur, M.; Kienzle-Focacci, M. N.; Kim, J. K.; Kirkby, J.; Kittel, W.; Klimentov, A.; König, A. C.; Kopal, M.; Koutsenko, V.; Kräber, M.; Kraemer, R. W.; Krüger, A.; Kunin, A.; Ladron de Guevara, P.; Laktineh, I.; Landi, G.; Lebeau, M.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Le Goff, J. M.; Leiste, R.; Levtchenko, M.; Levtchenko, P.; Li, C.; Likhoded, S.; Lin, C. H.; Lin, W. T.; Linde, F. L.; Lista, L.; Liu, Z. A.; Lohmann, W.; Longo, E.; Lu, Y. S.; Luci, C.; Luminari, L.; Lustermann, W.; Ma, W. G.; Malgeri, L.; Malinin, A.; Ma Na, C.; Mans, J.; Martin, J. P.; Marzano, F.; Mazumdar, K.; McNeil, R. R.; Mele, S.; Merola, L.; Meschini, M.; Metzger, W. J.; Mihul, A.; Milcent, H.; Mirabelli, G.; Mnich, J.; Mohanty, G. B.; Muanza, G. S.; Muijs, A. J. M.; Musicar, B.; Musy, M.; Nagy, S.; Natale, S.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Nisati, A.; Novak, T.; Nowak, H.; Ofierzynski, R.; Organtini, G.; Pal, I.; Palomares, C.; Paolucci, P.; Paramatti, R.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pedace, M.; Pensotti, S.; Perret-Gallix, D.; Piccolo, D.; Pierella, F.; Pieri, M.; Pioppi, M.; Piroué, P. A.; Pistolesi, E.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Pothier, J.; Prokofiev, D.; Rahal-Callot, G.; Rahaman, M. A.; Raics, P.; Raja, N.; Ramelli, R.; Rancoita, P. G.; Ranieri, R.; Raspereza, A.; Razis, P.; Rembeczki, S.; Ren, D.; Rescigno, M.; Reucroft, S.; Riemann, S.; Riles, K.; Roe, B. P.; Romero, L.; Rosca, A.; Rosemann, C.; Rosenbleck, C.; Rosier-Lees, S.; Roth, S.; Rubio, J. A.; Ruggiero, G.; Rykaczewski, H.; Sakharov, A.; Saremi, S.; Sarkar, S.; Salicio, J.; Sanchez, E.; Schäfer, C.; Schegelsky, V.; Schopper, H.; Schotanus, D. J.; Sciacca, C.; Servoli, L.; Shevchenko, S.; Shivarov, N.; Shoutko, V.; Shumilov, E.; Shvorob, A.; Son, D.; Souga, C.; Spillantini, P.; Steuer, M.; Stickland, D. P.; Stoyanov, B.; Straessner, A.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Sushkov, S.; Suter, H.; Swain, J. D.; Szillasi, Z.; Tang, X. W.; Tarjan, P.; Tauscher, L.; Taylor, L.; Tellili, B.; Teyssier, D.; Timmermans, C.; Ting, S. C. C.; Ting, S. M.; Tonwar, S. C.; Tóth, J.; Tully, C.; Tung, K. L.; Ulbricht, J.; Valente, E.; van de Walle, R. T.; Vasquez, R.; Vesztergombi, G.; Vetlitsky, I.; Viertel, G.; Vivargent, M.; Vlachos, S.; Vodopianov, I.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Wadhwa, M.; Wang, Q.; Wang, X. L.; Wang, Z. M.; Weber, M.; Wynhoff, S.; Xia, L.; Xu, Z. Z.; Yamamoto, J.; Yang, B. Z.; Yang, C. G.; Yang, H. J.; Yang, M.; Yeh, S. C.; Zalite, An.; Zalite, Yu.; Zhang, Z. P.; Zhao, J.; Zhu, G. Y.; Zhu, R. Y.; Zhuang, H. L.; Zichichi, A.; Zimmermann, B.; Zöller, M.; Abbiendi, G.; Ainsley, C.; Åkesson, P. F.; Alexander, G.; Allison, J.; Amaral, P.; Anagnostou, G.; Anderson, K. J.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barillari, T.; Barlow, R. J.; Batley, R. J.; Bechtle, P.; Behnke, T.; Bell, K. W.; Bell, P. J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, R. M.; Buesser, K.; Burckhart, H. J.; Campana, S.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Ciocca, C.; Csilling, A.; Cuffiani, M.; Dado, S.; de Jong, S.; de Roeck, A.; de Wolf, E. A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I. 
P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Gagnon, P.; Gary, J. W.; Gascon-Shotkin, S. M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, M.; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwé, M.; Günther, P. O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G. G.; Harel, A.; Hauschild, M.; Hawkes, C. M.; Hawkings, R.; Hemingway, R. J.; Herten, G.; Heuer, R. D.; Hill, J. C.; Hoffman, K.; Horváth, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jost, U.; Jovanovic, P.; Junk, T. R.; Kanaya, N.; Kanzaki, J.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R. K.; Kellogg, R. G.; Kennedy, B. W.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Krämer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G. D.; Landsman, H.; Lanske, D.; Layter, J. G.; Lellouch, D.; Letts, J.; Levinson, L.; Lillich, J.; Lloyd, S. L.; Loebinger, F. K.; Lu, J.; Ludwig, A.; Ludwig, J.; Mader, W.; Marcellini, S.; Martin, A. J.; Masetti, G.; Mashimo, T.; Mättig, P.; McKenna, J.; McPherson, R. A.; Meijers, F.; Menges, W.; Merritt, F. S.; Mes, H.; Meyer, N.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D. J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H. A.; Nisius, R.; O'Neale, S. W.; Oh, A.; Oreglia, M. J.; Orito, S.; Pahl, C.; Pásztor, G.; Pater, J. R.; Pilcher, J. E.; Pinfold, J.; Plane, D. E.; Poli, B.; Pooth, O.; Przybycień, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Roney, J. M.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E. K. G.; Schaile, A. D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schörner-Sadenius, T.; Schröder, M.; Schumacher, M.; Scott, W. G.; Seuster, R.; Shears, T. G.; Shen, B. C.; Sherwood, P.; Skuja, A.; Smith, A. M.; Sobie, R.; Söldner-Rembold, S.; Spano, F.; Stahl, A.; Strom, D.; Ströhmer, R.; Tarem, S.; Tasevsky, M.; Teuscher, R.; Thomson, M. A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trócsányi, Z.; Tsur, E.; Turner-Watson, M. F.; Ueda, I.; Ujvári, B.; Vollmer, C. F.; Vannerem, P.; Vértesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Ward, C. P.; Ward, D. R.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Wells, P. S.; Wengler, T.; Wermes, N.; Wilson, G. W.; Wilson, J. A.; Wolf, G.; Wyatt, T. R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, L.; Heinemeyer, S.; Pilaftsis, A.; Weiglein, G.
2006-09-01
The four LEP collaborations, ALEPH, DELPHI, L3 and OPAL, have searched for the neutral Higgs bosons which are predicted by the Minimal Supersymmetric standard model (MSSM). The data of the four collaborations are statistically combined and examined for their consistency with the background hypothesis and with a possible Higgs boson signal. The combined LEP data show no significant excess of events which would indicate the production of Higgs bosons. The search results are used to set upper bounds on the cross-sections of various Higgs-like event topologies. The results are interpreted within the MSSM in a number of “benchmark” models, including CP-conserving and CP-violating scenarios. These interpretations lead in all cases to large exclusions in the MSSM parameter space. Absolute limits are set on the parameter cosβ and, in some scenarios, on the masses of neutral Higgs bosons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
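A hedged sketch of how a goodness-of-fit statistic can be maximized to estimate the threshold of a three-parameter lognormal: the Shapiro-Wilk W of the log-shifted data is maximized over the threshold, and the remaining parameters follow from the log-transformed sample. The function names, search bounds, and simulated data are illustrative assumptions, not the exact estimators studied in the paper.

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal3_shapiro(data):
    """Choose the threshold tau that maximizes the Shapiro-Wilk W statistic of
    log(data - tau); mu and sigma then follow from the log-transformed sample."""
    data = np.asarray(data, dtype=float)
    upper = data.min() - 1e-6 * data.std()
    lower = data.min() - 10.0 * data.std()

    def neg_w(tau):
        w, _ = stats.shapiro(np.log(data - tau))
        return -w

    res = optimize.minimize_scalar(neg_w, bounds=(lower, upper), method="bounded")
    tau = res.x
    logged = np.log(data - tau)
    return tau, logged.mean(), logged.std(ddof=1)   # threshold, mu, sigma

# Example: simulated three-parameter lognormal sample with threshold 5
rng = np.random.default_rng(1)
sample = 5.0 + rng.lognormal(mean=1.0, sigma=0.5, size=500)
print(fit_lognormal3_shapiro(sample))
```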
Quantitative photoacoustic imaging in the acoustic regime using SPIM
NASA Astrophysics Data System (ADS)
Beigl, Alexander; Elbau, Peter; Sadiq, Kamran; Scherzer, Otmar
2018-05-01
While in standard photoacoustic imaging the propagation of sound waves is modeled by the standard wave equation, our approach is based on a generalized wave equation with variable sound speed and variable material density. In this paper we present an approach for photoacoustic imaging which, in addition to the recovery of the absorption density parameter, the imaging parameter of standard photoacoustics, also allows us to reconstruct the spatially varying sound speed and density of the medium. We provide analytical reconstruction formulas for all three parameters based on a linearized model and on single plane illumination microscopy (SPIM) techniques.
Farsalinos, Konstantinos E; Daraban, Ana M; Ünlü, Serkan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe
2015-10-01
This study was planned by the EACVI/ASE/Industry Task Force to Standardize Deformation Imaging to (1) test the variability of speckle-tracking global longitudinal strain (GLS) measurements among different vendors and (2) compare GLS measurement variability with conventional echocardiographic parameters. Sixty-two volunteers were studied using ultrasound systems from seven manufacturers. Each volunteer was examined by the same sonographer on all machines. Inter- and intraobserver variability was determined in a true test-retest setting. Conventional echocardiographic parameters were acquired for comparison. Using the software packages of the respective manufacturer and of two software-only vendors, endocardial GLS was measured because it was the only GLS parameter that could be provided by all manufacturers. We compared GLSAV (the average from the three apical views) and GLS4CH (measured in the four-chamber view) measurements among vendors and with the conventional echocardiographic parameters. Absolute values of GLSAV ranged from 18.0% to 21.5%, while GLS4CH ranged from 17.9% to 21.4%. The absolute difference between vendors for GLSAV was up to 3.7% strain units (P < .001). The interobserver relative mean errors were 5.4% to 8.6% for GLSAV and 6.2% to 11.0% for GLS4CH, while the intraobserver relative mean errors were 4.9% to 7.3% and 7.2% to 11.3%, respectively. These errors were lower than for left ventricular ejection fraction and most other conventional echocardiographic parameters. Reproducibility of GLS measurements was good and in many cases superior to conventional echocardiographic measurements. The small but statistically significant variation among vendors should be considered in performing serial studies and reflects a reference point for ongoing standardization efforts. Copyright © 2015 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Arduino, Paolo G; D'Aiuto, Francesco; Cavallito, Claudio; Carcieri, Paola; Carbone, Mario; Conrotto, Davide; Defabianis, Patrizia; Broccoletti, Roberto
2011-12-01
Plasma cell gingivitis (PCG) is a rare, benign inflammatory condition of unclear etiology with no definitive standard of care ever reported to our knowledge. The aim of this case series is to ascertain the clinical efficacy of professional oral hygiene and periodontal therapy in younger individuals with a histologically confirmed diagnosis of PCG. All patients received non-surgical periodontal therapy, including oral hygiene instructions, and thorough supragingival scaling and polishing with the removal of all deposits and staining combined with the use of antimicrobials in a 9-week cohort study. Clinical outcome variables were recorded at baseline and 4 weeks after the intervention and included, as periodontal parameters, full-mouth plaque scores (FMPS), full-mouth bleeding scores (FMBS), the clinical extension of gingival involvement, and patient-related outcomes (visual analog score of pain). A total of 11 patients (six males and five females; mean age: 11 ± 0.86 years) were recruited. Four weeks after finishing the oral hygiene and periodontal therapy protocol, a statistically significant reduction was observed for FMPS (P = 0.000), FMBS (P = 0.000), reported pain (P = 0.003) and clinical gingival involvement (P = 0.003). Standard, professional oral hygiene procedures and non-surgical periodontal therapy including antimicrobials were associated with a marked improvement of clinical and patient-related outcomes in pediatric cases of PCG.
Adaptive Offset Correction for Intracortical Brain Computer Interfaces
Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.
2014-01-01
Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ±10.1%; p<0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868
Adaptive offset correction for intracortical brain-computer interfaces.
Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R
2014-03-01
Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors ( 10.6 ± 10.1% ; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
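The offset-adaptation idea can be sketched with a toy Kalman decoder in which an observation bias term is tracked from the filter innovations; the exponentially weighted, shrunken update below is a crude stand-in for MOCA's penalized maximum-likelihood step, and all dimensions, noise levels, and parameter names are made up for illustration.

```python
import numpy as np

def kalman_with_adaptive_offset(zs, A, H, Q, R, lam=0.98, penalty=1e-2):
    """Toy offset-adaptive Kalman decoder: alongside the usual state estimate,
    track an observation offset b by shrinking an exponentially weighted mean of
    the innovations (a simplified surrogate for a penalized ML offset update)."""
    n, m = A.shape[0], H.shape[0]
    x, P, b = np.zeros(n), np.eye(n), np.zeros(m)
    states = []
    for z in zs:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation using the current offset estimate
        innov = z - (H @ x + b)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innov
        P = (np.eye(n) - K @ H) @ P
        # Penalized (shrunken) exponentially weighted offset update
        b = lam * b + (1 - lam) * innov / (1.0 + penalty)
        states.append(x.copy())
    return np.array(states), b

# Example: 1-D state, 2-channel observations whose baseline jumps halfway through
rng = np.random.default_rng(0)
A = np.array([[1.0]]); H = np.array([[1.0], [0.5]])
Q = np.array([[0.01]]); R = 0.1 * np.eye(2)
x_true = np.cumsum(0.1 * rng.standard_normal(400))
zs = x_true[:, None] * H.ravel() + 0.3 * rng.standard_normal((400, 2))
zs[200:] += np.array([1.0, -0.5])                    # abrupt offset shift
states, b_hat = kalman_with_adaptive_offset(zs, A, H, Q, R)
print(b_hat)
```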
Cost effectiveness of nutrition support in the prevention of pressure ulcer in hospitals.
Banks, M D; Graves, N; Bauer, J D; Ash, S
2013-01-01
This study estimates the economic outcomes of a nutrition intervention to at-risk patients compared with standard care in the prevention of pressure ulcer. Statistical models were developed to predict 'cases of pressure ulcer avoided', 'number of bed days gained' and 'change to economic costs' in public hospitals in 2002-2003 in Queensland, Australia. Input parameters were specified and appropriate probability distributions fitted for: number of discharges per annum; incidence rate for pressure ulcer; independent effect of pressure ulcer on length of stay; cost of a bed day; change in risk in developing a pressure ulcer associated with nutrition support; annual cost of the provision of a nutrition support intervention for at-risk patients. A total of 1000 random re-samples were made and the results expressed as output probability distributions. The model predicts a mean 2896 (s.d. 632) cases of pressure ulcer avoided; 12,397 (s.d. 4491) bed days released and a corresponding mean economic cost saving of €2,869,526 (s.d. €2,078,715) with a nutrition support intervention, compared with standard care. Nutrition intervention is predicted to be a cost-effective approach in the prevention of pressure ulcer in at-risk patients.
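The resampling scheme described above can be mimicked with a few lines of Monte Carlo code; every distribution and number below is a placeholder chosen for illustration only and does not reproduce the inputs or results of the Queensland model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000                                            # number of random re-samples

# Illustrative input distributions (placeholders, not the paper's fitted values)
discharges   = rng.normal(340_000, 10_000, N)       # at-risk discharges per year
incidence    = rng.beta(20, 480, N)                 # pressure-ulcer incidence rate
extra_los    = rng.gamma(4.0, 1.1, N)               # extra bed days per ulcer
bed_day_cost = rng.normal(450.0, 50.0, N)           # cost of one bed day (euros)
risk_ratio   = rng.normal(0.75, 0.10, N)            # RR of ulcer with nutrition support
support_cost = rng.normal(900_000.0, 100_000.0, N)  # annual cost of the intervention

cases_avoided = discharges * incidence * (1.0 - risk_ratio)
bed_days_gained = cases_avoided * extra_los
cost_change = support_cost - bed_days_gained * bed_day_cost   # negative = net saving

for name, v in [("cases avoided", cases_avoided),
                ("bed days gained", bed_days_gained),
                ("change in cost", cost_change)]:
    print(f"{name}: mean {v.mean():,.0f}, s.d. {v.std():,.0f}")
```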
Ethical dilemmas in psychiatric evaluations in patients with fulminant liver failure.
Appel, Jacob; Vaidya, Swapna
2014-04-01
Fulminant hepatic failure (FHF) is one of the more dramatic and challenging syndromes in clinical medicine. Time constraints and the scarcity of organs complicate the evaluation process in the case of patients presenting with FHF, raising ethical questions related to fairness and justice. The challenges are compounded by an absence of standardized guidelines. Acetaminophen overdose, often occurring in patients with histories of psychiatric illness and substance dependence, has emerged as the most common cause of FHF. The weak correlations between psychosocial factors and nonadherence, as per some studies, suggest that adherence may be influenced by systematic factors. Most research suggests that applying rigid ethical parameters in these patients, rather than allowing for case-dependent flexibility, can be problematic. The decision to transplant in patients with FHF has to be made in a very narrow window of time. The time-constrained process is fraught with uncertainties and limitations, given the absence of patient interview, fluctuating medical eligibility, and limited data. Although standardized scales exist, their benefit in such settings appears limited. Predicting compliance with posttransplant medical regimens is difficult to assess and raises the question of prospective studies to monitor compliance.
NASA Technical Reports Server (NTRS)
Hovis, Jeffrey S.; Brundidge, Kenneth C.
1987-01-01
A method of interpolating atmospheric soundings while reducing the errors associated with simple time interpolation was developed. The purpose of this was to provide a means to determine atmospheric stability at times between standard soundings and to relate changes in stability to intensity changes in an MCC. Four MCC cases were chosen for study with this method with four stability indices being included. The discussion centers on three aspects for each stability parameter examined: the stability field in the vicinity of the storm and its changes in structure and magnitude during the lifetime of the storm, the average stability within the storm boundary as a function of time and its relation to storm intensity, and the apparent flux of stability parameter into the storm as a consequence of low-level storm relative flow. It was found that the results differed among the four stability parameters, sometimes in a conflicting fashion. Thus, an interpolation of how the storm intensity is related to the changing environmental stability depends upon the particular index utilized. Some explanation for this problem is offered.
NASA Astrophysics Data System (ADS)
Patel, Nimit R.; Chhaniwal, Vani K.; Javidi, Bahram; Anand, Arun
2015-07-01
Development of devices for automatic identification of diseases is desired, especially in developing countries. In the case of malaria, even today the gold standard is the inspection of chemically treated blood smears through a microscope. This requires a trained technician/microscopist to identify the cells in the field of view to which the labeling chemicals have attached. Bright field microscopes provide only low-contrast 2D images of red blood cells, and the cell thickness distribution cannot be obtained. Quantitative phase contrast microscopes can provide both intensity and phase profiles of the cells under study. The phase information can be used to determine the thickness profile of the cell. Since the cell morphology is available, many parameters pertaining to the 3D shape of the cell can be computed. These parameters in turn could be used to decide on the state of health of the cell, leading to disease diagnosis. Here we describe investigations done on a digital holographic microscope, which provides quantitative phase images, for comparison of parameters obtained from the 3D shape profiles of objects, leading to identification of diseased samples.
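The phase-to-thickness conversion mentioned above is commonly written as t = λ·Δφ / (2π·(n_cell − n_medium)) for a cell of homogeneous refractive index; the sketch below applies this relation to a synthetic phase map with assumed wavelength and refractive indices, purely as an illustration.

```python
import numpy as np

def thickness_from_phase(delta_phi, wavelength, n_cell, n_medium):
    """Convert an unwrapped quantitative-phase map (radians) to a thickness map,
    assuming a homogeneous refractive index inside the cell:
        t = lambda * delta_phi / (2*pi*(n_cell - n_medium))."""
    return wavelength * delta_phi / (2.0 * np.pi * (n_cell - n_medium))

# Example: a synthetic 2-rad phase bump imaged at 632.8 nm, assumed indices 1.40/1.34
xx = np.linspace(-3, 3, 128)[:, None]
yy = np.linspace(-3, 3, 128)[None, :]
phase = 2.0 * np.exp(-(xx ** 2 + yy ** 2))
t = thickness_from_phase(phase, 632.8e-9, 1.40, 1.34)
print(t.max() * 1e6, "um peak thickness")
```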
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
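A compact sketch of the linear prediction/SVD idea applied to a noiseless two-exponential decay: a rank-truncated linear-prediction solve yields a polynomial whose roots encode the decay rates, and a least-squares amplitude refit selects the physically meaningful roots. The prediction order, truncation rank and simulated data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lpsvd_decay_rates(y, dt, n_exp, order=None):
    """Estimate the time constants of a sum of decaying exponentials (e.g. a
    binned dwell-time distribution) by forward linear prediction with SVD rank
    truncation, followed by an amplitude refit to discard extraneous roots."""
    y = np.asarray(y, dtype=float)
    L = order or len(y) // 3
    # Forward linear prediction: y[n] = sum_k c_k * y[n-k]
    A = np.column_stack([y[L - k: len(y) - k] for k in range(1, L + 1)])
    b = y[L:]
    # Rank-truncated pseudo-inverse keeps only the n_exp strongest components
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < n_exp, 1.0 / s, 0.0)
    c = Vt.T @ (s_inv * (U.T @ b))
    # Roots of the prediction polynomial give z_i = exp(-dt / tau_i)
    roots = np.roots(np.concatenate(([1.0], -c)))
    roots = roots[(np.abs(roots) < 1.0) & (np.abs(roots.imag) < 1e-8) & (roots.real > 0)]
    # Keep the roots that actually carry amplitude in a least-squares refit
    n = np.arange(len(y))
    V = roots.real[None, :] ** n[:, None]
    amps, *_ = np.linalg.lstsq(V, y, rcond=None)
    keep = np.argsort(np.abs(amps))[-n_exp:]
    return np.sort(-dt / np.log(roots.real[keep]))

# Example: two-exponential dwell-time histogram with tau = 2 ms and 20 ms
t = np.arange(0, 100, 0.5)                     # ms
y = 5.0 * np.exp(-t / 2.0) + 1.0 * np.exp(-t / 20.0)
print(lpsvd_decay_rates(y, dt=0.5, n_exp=2))
```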
Constraints on supersymmetric dark matter for heavy scalar superpartners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Peisi; Roglans, Roger A.; Spiegel, Daniel D.
2017-05-01
We study the constraints on neutralino dark matter in minimal low energy supersymmetry models and the case of heavy lepton and quark scalar superpartners. For values of the Higgsino and gaugino mass parameters of the order of the weak scale, direct detection experiments are already putting strong bounds on models in which the dominant interactions between the dark matter candidates and nuclei are governed by Higgs boson exchange processes, particularly for positive values of the Higgsino mass parameter mu. For negative values of mu, there can be destructive interference between the amplitudes associated with the exchange of the standard CP-even Higgs boson and the exchange of the nonstandard one. This leads to specific regions of parameter space which are consistent with the current experimental constraints and a thermal origin of the observed relic density. In this article, we study the current experimental constraints on these scenarios, as well as the future experimental probes, using a combination of direct and indirect dark matter detection and heavy Higgs and electroweakino searches at hadron colliders.
Windowed multitaper correlation analysis of multimodal brain monitoring parameters.
Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander
2015-01-01
Although multimodal monitoring sets the standard in daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates the cerebral perfusion and oxygen supply in case of a severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows tuning the parameters of the windowing method in such a way that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally, it could be shown that the occurrence of a distinct correlation parameter, called scp, represents a predictive value of high quality for the patient's outcome.
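A rough sketch of the windowing-plus-coherence-plus-Hilbert-phase machinery described above, using scipy.signal; the window length, step, frequency band and synthetic signals are assumptions for illustration and are not the tuned settings or the statistical test of the paper.

```python
import numpy as np
from scipy.signal import coherence, hilbert

def windowed_correlation(abp, icp, fs, win_s=300.0, step_s=60.0, band=(0.003, 0.05)):
    """Slide a window over ABP and ICP recordings; in each window compute the
    magnitude-squared coherence in a slow-wave band and the mean Hilbert phase
    difference. Returns rows of (window start time, band coherence, phase diff)."""
    win, step = int(win_s * fs), int(step_s * fs)
    results = []
    for start in range(0, len(abp) - win + 1, step):
        a = abp[start:start + win]
        b = icp[start:start + win]
        f, coh = coherence(a, b, fs=fs, nperseg=win // 4)
        sel = (f >= band[0]) & (f <= band[1])
        phase = np.angle(hilbert(a - a.mean())) - np.angle(hilbert(b - b.mean()))
        results.append((start / fs, coh[sel].mean(), np.angle(np.exp(1j * phase)).mean()))
    return np.array(results)

# Example: 1 Hz data with a shared slow oscillation appearing halfway through
fs = 1.0
t = np.arange(7200.0)
abp = np.random.randn(7200) + np.where(t > 3600, np.sin(2 * np.pi * 0.01 * t), 0.0)
icp = np.random.randn(7200) + np.where(t > 3600, np.sin(2 * np.pi * 0.01 * t - 0.5), 0.0)
print(windowed_correlation(abp, icp, fs)[:3])
```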
Model with two periods of inflation
NASA Astrophysics Data System (ADS)
Schettler, Simon; Schaffner-Bielich, Jürgen
2016-01-01
A scenario with two subsequent periods of inflationary expansion in the very early Universe is examined. The model is based on a potential motivated by symmetries being found in field theory at high energy. For various parameter sets of the potential, the spectra of scalar and tensor perturbations that are expected to originate from this scenario are calculated. Also the beginning of the reheating epoch connecting the second inflation with thermal equilibrium is studied. Perturbations with wavelengths leaving the horizon around the transition between the two inflations are special: It is demonstrated that the power spectrum at such scales deviates significantly from expectations based on measurements of the cosmic microwave background. This supports the conclusion that parameters for which this part of the spectrum leaves observable traces in the cosmic microwave background must be excluded. Parameters entailing a very efficient second inflation correspond to standard small-field inflation and can meet observational constraints. Particular attention is paid to the case where the second inflation leads solely to a shift of the observable spectrum from the first inflation. A viable scenario requires this shift to be small.
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakke, K., E-mail: kbakke@fisica.ufpb.br; Belich, H., E-mail: belichjr@gmail.com
2016-10-15
Based on the Standard Model Extension, we investigate relativistic quantum effects on a scalar particle in backgrounds of the Lorentz symmetry violation defined by a tensor field. We show that harmonic-type and linear-type confining potentials can stem from Lorentz symmetry breaking effects, and thus, relativistic bound state solutions can be achieved. We first analyse a possible scenario of the violation of the Lorentz symmetry that gives rise to a harmonic-type potential. In the following, we analyse another possible scenario of the breaking of the Lorentz symmetry that induces both harmonic-type and linear-type confining potentials. In this second case, we also show that not all values of the parameter associated with the intensity of the electric field are permitted in the search for polynomial solutions to the radial equation, where the possible values of this parameter are determined by the quantum numbers of the system and the parameters associated with the violation of the Lorentz symmetry.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for the definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
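The contrast between interval bounds and probabilistic propagation can be illustrated with a plain Monte Carlo sweep over an uncertain second-order plant; the plant, the parameter distributions and the overshoot metric below are assumptions for illustration, not the hybrid reliability method of the report.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
N = 2000

# Illustrative second-order plant G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
# with probabilistic rather than interval-bounded uncertain parameters
wn = rng.normal(4.0, 0.4, N)          # natural frequency
zeta = rng.normal(0.5, 0.08, N)       # damping ratio

overshoot = np.empty(N)
for k in range(N):
    sys = signal.TransferFunction([wn[k] ** 2], [1.0, 2.0 * zeta[k] * wn[k], wn[k] ** 2])
    t, y = signal.step(sys)
    overshoot[k] = max(y.max() - 1.0, 0.0)

# Empirical distribution of the performance metric instead of a single worst-case bound
print("mean overshoot %.3f, std %.3f, P(overshoot > 0.25) = %.3f"
      % (overshoot.mean(), overshoot.std(), (overshoot > 0.25).mean()))
```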
The selection of Lorenz laser parameters for transmission in the SMF 3rd transmission window
NASA Astrophysics Data System (ADS)
Gajda, Jerzy K.; Niesterowicz, Andrzej; Zeglinski, Grzegorz
2003-10-01
The work presents simulation results for a transmission line with the standard ITU-T G.652 fiber. The parameters of the Lorenz laser determine electrical signal parameters such as the eye pattern, jitter, BER, S/N, Q-factor, and scattering diagram. For a short line, lasers with a linewidth larger than 100 MHz can be used. In the paper, cases for 10 Gbit/s and 40 Gbit/s transmission and fiber lengths of 30 km, 50 km, and 70 km are calculated. The average open eye patterns were 1×10⁻⁵ to 120×10⁻⁵. The Q factor was 10-23 dB. In the calculations the bit error rate (BER) was 10⁻⁴⁰ to 10⁻⁴. If the bandwidth of the Lorenz laser increases from 10 MHz to 500 MHz, the transmission distance decreases from 70 km to 30 km. Also very important for the transmission distance is the bit rate of the transmitter. If the bit rate increases from 10 Gbit/s to 40 Gbit/s, the transmission distance for the single-mode fiber G.652 will decrease from 70 km to 5 km.
The fingerprints of black holes—shadows and their degeneracies
NASA Astrophysics Data System (ADS)
Mars, Marc; Paganini, Claudio F.; Oancea, Marius A.
2018-01-01
We show that, away from the axis of symmetry, no continuous degeneration exists between the shadows of observers at any point in the exterior region of any Kerr–Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, his/her relative velocity compared to a standard observer can also be measured. However, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr–Newman–Taub–NUT black hole, as a continuous degeneration relating specific angular momentum, electric charge, Taub–NUT charge and elevation angle exists in this case.
Donkor, Eric S; Akumwena, Amos; Amoo, Philip K; Owolabi, Mayowa O; Aspelund, Thor; Gudnason, Vilmundur
2016-01-01
Background Infections are known to be a major complication of stroke patients. In this study, we evaluated the risk of community-acquired bacteriuria among stroke patients, the associated factors, and the causative organisms. Methods This was a cross-sectional study involving 70 stroke patients and 83 age- and sex-matched, apparently healthy controls. Urine specimens were collected from all the study subjects and were analyzed by standard microbiological methods. Demographic and clinical information was also collected from the study subjects. For stroke patients, the information collected also included stroke parameters, such as stroke duration, frequency, and subtype. Results Bacteriuria was significantly higher among stroke patients (24.3%, n=17) than among the control group (7.2%, n=6), with a relative risk of 3.36 (confidence interval [CI], 1.40–8.01, P=0.006). Among the control group, all six bacteriuria cases were asymptomatic, whereas the 17 stroke bacteriuria cases comprised 15 cases of asymptomatic bacteriuria and two cases of symptomatic bacteriuria. Female sex (OR, 3.40; CI, 1.12–10.30; P=0.03) and presence of stroke (OR, 0.24; CI, 0.08–0.70; P=0.009) were significantly associated with bacteriuria. The etiology of bacteriuria was similar in both study groups, and coagulase-negative Staphylococcus spp. were the most predominant organisms isolated from both stroke patients (12.9%) and the control group (2.4%). Conclusion Stroke patients in the study region have a significantly higher risk of community-acquired bacteriuria, which in most cases is asymptomatic. Community-acquired bacteriuria in stroke patients appears to have little or no relationship with clinical parameters of stroke such as stroke subtype, duration and frequency. PMID:27051289
Heterogeneity in acute undifferentiated leukemia.
LeMaistre, A; Childs, C C; Hirsch-Ginsberg, C; Reuben, J; Cork, A; Trujillo, J M; Andersson, B; McCredie, K B; Freireich, E; Stass, S A
1988-01-01
From January 1985 to May 1987, we studied 256 adults with newly diagnosed acute leukemia. Acute undifferentiated leukemia (AUL) was diagnosed in 12 of the 256 (4.6%) cases when lineage could not be delineated by light microscopy and light cytochemistry. To further characterize the blasts, immunophenotyping, ultrastructural myeloperoxidase (UMPO), and ultrastructural platelet peroxidase parameters were examined in 10, 11, and 6 of the 12 cases, respectively. Five cases demonstrated UMPO and were reclassified as acute myeloblastic leukemia (AML). Of the six UMPO-negative cases, three had a myeloid and one had a mixed immunophenotype. One UMPO-negative patient with a myeloid immunophenotype was probed for the immunoglobulin heavy chain gene (JH) and the beta chain of the T-cell receptor gene (Tcr beta) with no evidence of rearrangement. Six cases were treated with standard acute lymphoblastic leukemia (ALL) chemotherapy and failed to achieve complete remission (CR). Various AML chemotherapeutic regimens produced CR in only 3 of the 12 cases. One case was treated with gamma interferon and the other 2 with high-dose Ara-C. Our findings indicate a myeloid lineage can be detected by UMPO (5/12) in some cases of AUL. A germline configuration with JH and Tcr beta in one case as well as a myeloid immunophenotype in 3 UMPO-negative cases raises the possibility that myeloid lineage commitment may occur in the absence of myeloid peroxidase (MPO) cytochemical positivity.
El Niño Southern Oscillation (ENSO) and dysentery in Shandong province, China.
Zhang, Ying; Bi, Peng; Wang, Guoyong; Hiller, Janet E
2007-01-01
To investigate the impact of the El Niño Southern Oscillation (ENSO) on dysentery transmission, the relationship between monthly dysentery cases in Shandong Province of China and the monthly Southern Oscillation Index (SOI), a broad index of ENSO, was examined over the period 1991-2003. Spearman correlations and generalized linear models were calculated to detect the association between the SOI and dysentery cases. Data from 1991 to 2001 were used to estimate the parameters, while data from 2002 to 2003 were used to test the forecasting ability of the model. After controlling for seasonality, autocorrelation, and a time-lagged effect, the results indicate that there was a significant negative association between the number of dysentery cases and the SOI, with a lagged effect of 2 months. A one-standard-deviation decrease in the SOI might cause up to 207 more dysentery cases per month in Shandong Province. This is the first report of the impact of the Southern Oscillation on dysentery risk in China, indicating that the SOI may be a useful early indicator of potential dysentery risk in Shandong Province.
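A minimal sketch of a seasonally adjusted count regression with a 2-month lagged SOI term, in the spirit of the generalized linear models mentioned above; the simulated data, Poisson family and month dummies are illustrative assumptions, and the paper's handling of autocorrelation is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly series of dysentery counts and SOI; the real Shandong
# surveillance data are of course not reproduced here.
rng = np.random.default_rng(0)
months = pd.period_range("1991-01", "2003-12", freq="M")
soi = rng.normal(0, 1, len(months))
cases = rng.poisson(200 + 30 * np.sin(2 * np.pi * (months.month / 12.0)), len(months))
df = pd.DataFrame({"cases": cases, "soi": soi}, index=months)

df["soi_lag2"] = df["soi"].shift(2)               # 2-month lagged SOI
df["month"] = df.index.month
df = df.dropna()

# Poisson GLM of monthly counts on lagged SOI with month-of-year dummies for seasonality
X = pd.get_dummies(df["month"], prefix="m", drop_first=True).astype(float)
X["soi_lag2"] = df["soi_lag2"]
X = sm.add_constant(X)
model = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
print(model.params["soi_lag2"], model.bse["soi_lag2"])
```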
Genetic algorithms and their use in Geophysical Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
Genetic algorithms and their use in geophysical problems
NASA Astrophysics Data System (ADS)
Parker, Paul Bradley
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
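A standard GA with the ingredients highlighted above (tournament selection, a low mutation rate of roughly half the inverse population size, and a modest population) can be sketched as follows; this is an illustrative real-coded implementation for maximizing a fitness function, not the code used for the geophysical inversions.

```python
import numpy as np

def genetic_algorithm(fitness, bounds, pop_size=40, generations=200,
                      crossover_rate=0.9, tournament_k=2, seed=0):
    """Real-coded GA with tournament selection, uniform crossover and a low
    mutation rate of roughly 0.5/pop_size per gene."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n_genes = len(lo)
    mutation_rate = 0.5 / pop_size
    pop = rng.uniform(lo, hi, size=(pop_size, n_genes))
    fit = np.array([fitness(ind) for ind in pop])

    for _ in range(generations):
        # Tournament selection: best of k random individuals wins a parent slot
        idx = rng.integers(pop_size, size=(pop_size, tournament_k))
        parents = pop[idx[np.arange(pop_size), fit[idx].argmax(axis=1)]]
        # Uniform crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                mask = rng.random(n_genes) < 0.5
                children[i, mask], children[i + 1, mask] = parents[i + 1, mask], parents[i, mask]
        # Low-rate mutation: redraw a gene uniformly within its bounds
        mut = rng.random(children.shape) < mutation_rate
        children[mut] = rng.uniform(np.broadcast_to(lo, children.shape)[mut],
                                    np.broadcast_to(hi, children.shape)[mut])
        pop, fit = children, np.array([fitness(ind) for ind in children])

    best = fit.argmax()
    return pop[best], fit[best]

# Example: maximize a smooth 2-D test function
bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
best_x, best_f = genetic_algorithm(lambda x: -np.sum((x - 1.2) ** 2), bounds)
print(best_x, best_f)
```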
Run-up Variability due to Source Effects
NASA Astrophysics Data System (ADS)
Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast has been set up and simulated, varying the geometrical parameters of the source; simulation results have been analyzed deriving relationships between run-up height and source parameters; using the derived relationships a Monte Carlo simulation has been performed in order to create the necessary dataset to investigate the inter-event variability of the run-up height along the coast; the inter-event variability of the run-up height along the coast has been investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore of the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters have been varied and effects on run-up have been analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters (evaluated results) leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of spectral acceleration attenuation laws used for probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as follows: μ + σ ≈ 2μ.
The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans and the other related steps.
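The Monte Carlo step described above can be sketched as follows for a single coastal point. The response coefficients and source-parameter distributions are hypothetical placeholders, not the values fitted in the study; only the structure (linear superposition of dislocation and rupture-length effects, followed by a coefficient-of-variation estimate) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 1600                      # scenarios per coastal point, as in the study

# Hypothetical linear response coefficients for a single coastal point; the paper
# derives point-specific relationships between run-up, fault dislocation and rupture length.
a_slip, a_length, r0 = 0.8, 0.004, 0.5  # m of run-up per m of slip, per km of length, baseline (m)

# Hypothetical source-parameter distributions (the study fits these to fault-plane solutions).
slip = rng.lognormal(mean=np.log(4.0), sigma=0.4, size=n_scenarios)     # m
length = rng.normal(loc=120.0, scale=25.0, size=n_scenarios)            # km

runup = r0 + a_slip * slip + a_length * length   # linear superposition approximation
cov = runup.std(ddof=1) / runup.mean()           # coefficient of variation at this point
print(f"mean run-up {runup.mean():.2f} m, coefficient of variation {cov:.2f}")
```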
NASA Astrophysics Data System (ADS)
Jungman, Gerard
1992-11-01
Yukawa-coupling-constant unification together with the known fermion masses is used to constrain SO(10) models. We consider the case of one (heavy) generation, with the tree-level relation mb=mτ, calculating the limits on the intermediate scales due to the known limits on fermion masses. This analysis extends previous analyses which addressed only the simplest symmetry-breaking schemes. In the case where the low-energy model is the standard model with one Higgs doublet, there are very strong constraints due to the known limits on the top-quark mass and the τ-neutrino mass. The two-Higgs-doublet case is less constrained, though we can make progress in constraining this model also. We identify those parameters to which the viability of the model is most sensitive. We also discuss the ``triviality'' bounds on mt obtained from the analysis of the Yukawa renormalization-group equations. Finally we address the role of a speculative constraint on the τ-neutrino mass, arising from the cosmological implications of anomalous B+L violation in the early Universe.
Evaluation of “Autotune” calibration against manual calibration of building energy models
Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...
2016-08-26
Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts’ manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
NASA Astrophysics Data System (ADS)
Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.
2015-07-01
We introduce a highly parallelizable architecture for estimating parameters of compact binary coalescence using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses) with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data are then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5%, and even smaller in many cases. With a bounded runtime independent of the waveform model starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 Advanced LIGO era. Our algorithm is usable with any noise curve and existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
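As an illustration of the marginalization strategy, the toy sketch below Monte Carlo integrates a stand-in likelihood over two extrinsic parameters at fixed intrinsic parameters and reports the Monte Carlo error on the evidence. The likelihood, priors and parameter names are assumptions for illustration only, not the prefiltered mode inner products used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the prefiltered likelihood at fixed intrinsic parameters: the data are
# assumed to constrain a single effective amplitude a = F / D (antenna factor over distance).
a_obs, sigma = 0.6 / 400.0, 3e-4       # hypothetical measured amplitude and its uncertainty

def log_likelihood(distance, antenna):
    a = antenna / distance
    return -0.5 * ((a - a_obs) / sigma) ** 2

# Monte Carlo integration over the extrinsic parameters, drawn from simple toy priors.
n = 200_000
distance = rng.uniform(100.0, 2000.0, n)   # Mpc
antenna = rng.uniform(0.0, 1.0, n)         # toy sky/orientation antenna factor

w = np.exp(log_likelihood(distance, antenna))
evidence = w.mean()                         # marginalized likelihood (up to prior normalization)
stderr = w.std(ddof=1) / np.sqrt(n)         # Monte Carlo statistical error
print(f"evidence ~ {evidence:.3e}, MC error {100 * stderr / evidence:.1f}%")
```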
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject, and the average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
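A minimal sketch of the three computational approaches, using a deliberately simple example (a nonlinear function of a sample mean) rather than the regression setting and the Stata/LIMDEP code discussed in the paper; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=1.0, scale=0.5, size=200)     # hypothetical sample

# Estimated parameter and its standard error (here, simply a sample mean).
theta_hat = y.mean()
se_theta = y.std(ddof=1) / np.sqrt(len(y))

g = np.exp                                        # nonlinear function of interest, g(theta)

# 1) Delta method: SE[g(theta_hat)] ~= |g'(theta_hat)| * SE[theta_hat]; here g'(x) = exp(x).
se_delta = np.exp(theta_hat) * se_theta

# 2) Krinsky-Robb: draw parameters from their estimated sampling distribution, evaluate g.
draws = rng.normal(theta_hat, se_theta, 10_000)
se_kr = g(draws).std(ddof=1)

# 3) Nonparametric bootstrap: resample the data and recompute g(theta_hat*) each time.
boot = [g(rng.choice(y, size=len(y), replace=True).mean()) for _ in range(2_000)]
se_boot = np.std(boot, ddof=1)

print(f"g(theta_hat) = {g(theta_hat):.3f}")
print(f"delta {se_delta:.4f}  krinsky-robb {se_kr:.4f}  bootstrap {se_boot:.4f}")
```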
Parametric study of transport aircraft systems cost and weight
NASA Technical Reports Server (NTRS)
Beltramo, M. N.; Trapp, D. L.; Kimoto, B. W.; Marsh, D. P.
1977-01-01
The results of a NASA study to develop production cost estimating relationships (CERs) and weight estimating relationships (WERs) for commercial and military transport aircraft at the system level are presented. The systems considered correspond to the standard weight groups defined in Military Standard 1374 and are listed. These systems make up a complete aircraft exclusive of engines. The CER for each system (or CERs in several cases) utilize weight as the key parameter. Weights may be determined from detailed weight statements, if available, or by using the WERs developed, which are based on technical and performance characteristics generally available during preliminary design. The CERs that were developed provide a very useful tool for making preliminary estimates of the production cost of an aircraft. Likewise, the WERs provide a very useful tool for making preliminary estimates of the weight of aircraft based on conceptual design information.
Entropy Inequalities for Stable Densities and Strengthened Central Limit Theorems
NASA Astrophysics Data System (ADS)
Toscani, Giuseppe
2016-10-01
We consider the central limit theorem for stable laws in the case of the standardized sum of independent and identically distributed random variables with a regular probability density function. By showing decay of different entropy functionals along the sequence we prove convergence with explicit rate in various norms to a centered Lévy density of parameter λ > 1. This introduces a new information-theoretic approach to the central limit theorem for stable laws, in which the main argument is shown to be the relative fractional Fisher information, recently introduced in Toscani (Ricerche Mat 65(1):71-91, 2016). In particular, it is proven that, with respect to the relative fractional Fisher information, the Lévy density satisfies an analogue of the logarithmic Sobolev inequality, which allows one to pass from the monotonicity and decay to zero of the relative fractional Fisher information in the standardized sum to the decay to zero in relative entropy with an explicit decay rate.
Excise tax avoidance: the case of state cigarette taxes.
DeCicca, Philip; Kenkel, Donald; Liu, Feng
2013-12-01
We conduct an applied welfare economics analysis of cigarette tax avoidance. We develop an extension of the standard formula for the optimal Pigouvian corrective tax to incorporate the possibility that consumers avoid the tax by making purchases in nearby lower tax jurisdictions. To provide a key parameter for our formula, we estimate a structural endogenous switching regression model of border-crossing and cigarette prices. In illustrative calculations, we find that for many states, after taking into account tax avoidance the optimal tax is at least 20% smaller than the standard Pigouvian tax that simply internalizes external costs. Our empirical estimate that tax avoidance strongly responds to the price differential is the main reason for this result. We also use our results to examine the benefits of replacing avoidable state excise taxes with a harder-to-avoid federal excise tax on cigarettes. Copyright © 2013 Elsevier B.V. All rights reserved.
Excise Tax Avoidance: The Case of State Cigarette Taxes
DeCicca, Philip; Kenkel, Donald; Liu, Feng
2013-01-01
We conduct an applied welfare economics analysis of cigarette tax avoidance. We develop an extension of the standard formula for the optimal Pigouvian corrective tax to incorporate the possibility that consumers avoid the tax by making purchases in nearby lower-tax jurisdictions. To provide a key parameter for our formula, we estimate a structural endogenous switching regression model of border-crossing and cigarette prices. In illustrative calculations, we find that for many states, after taking into account tax avoidance the optimal tax is at least 20 percent smaller than the standard Pigouvian tax that simply internalizes external costs. Our empirical estimate that tax avoidance strongly responds to the price differential is the main reason for this result. We also use our results to examine the benefits of replacing avoidable state excise taxes with a harder-to-avoid federal excise tax on cigarettes. PMID:24140760
Model identification of new heavy Z′ bosons at ILC with polarized beams
NASA Astrophysics Data System (ADS)
Pankov, A. A.; Tsytrinov, A. V.
2017-12-01
Extra neutral gauge bosons, Z′s, are predicted by many theoretical scenarios of physics beyond the Standard Model, and intensive searches for their signatures will be performed at present and future high-energy colliders. It is quite possible that Z′s are heavy enough to lie beyond the discovery reach expected at the CERN Large Hadron Collider (LHC), in which case only indirect signatures of Z′ exchanges may occur at future colliders, through deviations of the measured cross sections from the Standard Model predictions. In this context we discuss the expected sensitivity to Z′ parameters of fermion-pair production cross sections at the planned International Linear Collider (ILC), especially as regards the potential of distinguishing different Z′ models once such deviations are observed. Specifically, we evaluate the discovery and identification reaches on Z′ gauge bosons pertinent to the E6, LR, ALR, and SSM classes of models at the ILC.
Updated Bs-mixing constraints on new physics models for b →s ℓ+ℓ- anomalies
NASA Astrophysics Data System (ADS)
Di Luzio, Luca; Kirk, Matthew; Lenz, Alexander
2018-05-01
Many new physics models that explain the intriguing anomalies in the b-quark flavor sector are severely constrained by Bs mixing, for which the Standard Model prediction and experiment agreed well until recently. The most recent Flavour Lattice Averaging Group (FLAG) average of lattice results for the nonperturbative matrix elements points, however, in the direction of a small discrepancy in this observable. Using up-to-date inputs from standard sources such as the PDG, FLAG and one of the two leading Cabibbo-Kobayashi-Maskawa (CKM) fitting groups to determine ΔM_s^SM, we find a severe reduction of the allowed parameter space of Z′ and leptoquark models explaining the B anomalies. Remarkably, in the former case the upper bound on the Z′ mass approaches dangerously close to the energy scales already probed by the LHC. We finally identify some model-building directions in order to alleviate the tension with Bs mixing.
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the shrinkage in normalized deviation for worst-case delay and input-dependent leakage can reach 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit greater reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
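The Monte Carlo reliability evaluation can be sketched as follows. The delay and leakage expressions, parameter names and variation magnitudes are hypothetical placeholders rather than the SPICE-level cell characterization used in the paper; only the structure (Gaussian parameter variation, 10,000 sweeps, worst-case and normalized-deviation statistics) mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sweeps = 10_000                            # number of Monte Carlo sweeps, as in the paper

# Hypothetical nominal device parameters with Gaussian process variation
vth = rng.normal(0.30, 0.02, n_sweeps)       # threshold voltage (V)
lgate = rng.normal(14e-9, 0.7e-9, n_sweeps)  # gate length (m)
vdd = 0.8                                    # supply voltage (V)

# Toy alpha-power delay and subthreshold-leakage models (placeholders, not the cell library models)
delay = lgate * vdd / (vdd - vth) ** 1.3
leak = np.exp(-vth / 0.026) / lgate

for name, x in [("delay", delay), ("leakage", leak)]:
    mu, sd = x.mean(), x.std(ddof=1)
    print(f"{name}: mean {mu:.3e}, std {sd:.3e}, "
          f"normalized deviation {sd / mu:.3f}, worst case {x.max():.3e}")
```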
Gómez, Luisa F; Torres, Isaura P; Jiménez-A, María Del Pilar; McEwen, Juan Gmo; de Bedout, Catalina; Peláez, Carlos A; Acevedo, José M; Taylor, María L; Arango, Myrtha
2018-05-01
Histoplasma capsulatum is the causative agent of histoplasmosis and this fungus inhabits soils rich in phosphorus and nitrogen that are enriched with bird and bat manure. The replacement of organic matter in agroecosystems is necessary in the tropics, and the use of organic fertilizers has increased. Cases and outbreaks due to the presence of the fungus in these components have been reported. The Instituto Colombiano Agropecuario resolution 150 of 2003 contains the parameters set by the Colombian Technical Standard (NTC 5167) on the physicochemical and microbiological features of fertilizers, but it does not regulate the search for H. capsulatum. The aim of this study was to demonstrate the presence of H. capsulatum in organic fertilizers by nested polymerase chain reaction (PCR). A total of 239 samples were collected: 201 (84.1%) corresponded to organic fertilizers, 30 (12.5%) to bird excrement, and 8 (3.4%) to cave soils. The Hc100 nested PCR had a detection limit of 0.1 pg/µL and a specificity of 100%. A total of 25 (10.5%) samples were positive and validated by sequencing. Seven of the positive samples represented locations where H. capsulatum was previously detected, suggesting the persistence of the fungus. No significant correlations were detected between the physicochemical and microbiological parameters and the presence of H. capsulatum by nested PCR, indicating that the fungus exists in organic fertilizers that complied with the NTC 5167. The Hc100 nested PCR targeting H. capsulatum standardized in this work will improve the evaluation of organic fertilizers and help prevent outbreaks and cases due to the manufacturing, marketing, and use of fertilizers contaminated with H. capsulatum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bae, Kyu Jung; Baer, Howard; Serce, Hasan
The supersymmetrized DFSZ axion model is highly motivated not only because it offers solutions to both the gauge hierarchy and strong CP problems, but also because it provides a solution to the SUSY μ-problem which naturally allows for a Little Hierarchy. We compute the expected mixed axion-neutralino dark matter abundance for the SUSY DFSZ axion model in two benchmark cases—a natural SUSY model with a standard neutralino underabundance (SUA) and an mSUGRA/CMSSM model with a standard overabundance (SOA). Our computation implements coupled Boltzmann equations which track the radiation density along with neutralino, axion, axion CO (produced via coherent oscillations), saxion, saxion CO, axino and gravitino densities. In the SUSY DFSZ model, axions, axinos and saxions go through the process of freeze-in—in contrast to freeze-out or out-of-equilibrium production as in the SUSY KSVZ model—resulting in thermal yields which are largely independent of the re-heat temperature. We find the SUA case with suppressed saxion-axion couplings (ξ=0) only admits solutions for PQ breaking scale f_a ≲ 6×10^12 GeV, where the bulk of parameter space tends to be axion-dominated. For SUA with allowed saxion-axion couplings (ξ=1), f_a values up to ∼10^14 GeV are allowed. For the SOA case, almost all of SUSY DFSZ parameter space is disallowed by a combination of overproduction of dark matter, overproduction of dark radiation or violation of BBN constraints. An exception occurs at very large f_a ∼ 10^15–10^16 GeV, where large entropy dilution from CO-produced saxions leads to allowed models.
O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability. PMID:25775182
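A minimal sketch of the observation model at a single site may clarify the confounding the authors address: counts from repeated surveys are binomial in the latent abundance with an effective detection probability equal to availability times conditional detection, so the two components cannot be separated without additional design information. Parameter values and data below are hypothetical, and this is a simplified stand-in for the authors' extended hierarchical model, not their implementation.

```python
import numpy as np
from scipy import stats

def nmix_loglik(counts, lam, phi, p, n_max=80):
    """Log-likelihood of repeated counts at one site under a binomial mixture:
    N ~ Poisson(lam); each survey observes y ~ Binomial(N, phi * p),
    where phi is availability (temporary emigration) and p is conditional detection.
    Without extra design information, phi and p enter only through their product."""
    p_eff = phi * p
    n_vals = np.arange(counts.max(), n_max + 1)
    prior_n = stats.poisson.pmf(n_vals, lam)                             # P(N = n)
    lik_y = np.prod([stats.binom.pmf(y, n_vals, p_eff) for y in counts], axis=0)
    return np.log(np.sum(prior_n * lik_y))

# Hypothetical data: five repeat surveys at one site
counts = np.array([6, 2, 4, 0, 3])
print(nmix_loglik(counts, lam=25.0, phi=0.4, p=0.5))
```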
DOE Office of Scientific and Technical Information (OSTI.GOV)
de la Puente, Alejandro
In this work, I present a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), with an explicit μ-term and a supersymmetric mass for the singlet superfield, as a route to alleviating the little hierarchy problem of the Minimal Supersymmetric Standard Model (MSSM). I analyze two limiting cases of the model, characterized by the size of the supersymmetric mass for the singlet superfield. The small and large limits of this mass parameter are studied, and I find that I can generate masses for the lightest neutral Higgs boson up to 140 GeV with top squarks below the TeV scale, all couplings perturbative up to the gauge unification scale, and with no need to fine tune parameters in the scalar potential. This model, which I call the S-MSSM, is also embedded in a gauge-mediated supersymmetry breaking scheme. I find that even with a minimal embedding of the S-MSSM into a gauge mediated scheme, the mass for the lightest Higgs boson can easily be above 114 GeV, while keeping the top squarks below the TeV scale. Furthermore, I also study the forward-backward asymmetry in the tt̄ system within the framework of the S-MSSM. For this purpose, non-renormalizable couplings of the first and third generations of quarks to scalars are introduced. The two limiting cases of the S-MSSM, characterized by the size of the supersymmetric mass for the singlet superfield, are analyzed, and I find that in the region of small singlet supersymmetric mass a large asymmetry can be obtained while being consistent with constraints arising from flavor physics, quark masses and top quark decays.
Standardizing Type Ia supernovae optical brightness using near-infrared rebrightening time
NASA Astrophysics Data System (ADS)
Shariff, H.; Dhawan, S.; Jiao, X.; Leibundgut, B.; Trotta, R.; van Dyk, D. A.
2016-12-01
Accurate standardization of Type Ia supernovae (SNIa) is instrumental to the usage of SNIa as distance indicators. We analyse a homogeneous sample of 22 low-z SNIa, observed by the Carnegie Supernova Project in the optical and near-infrared (NIR). We study the time of the second peak in the J band, t2, as an alternative standardization parameter of SNIa peak optical brightness, as measured by the standard SALT2 parameter mB. We use BAHAMAS, a Bayesian hierarchical model for SNIa cosmology, to estimate the residual scatter in the Hubble diagram. We find that in the absence of a colour correction, t2 is a better standardization parameter compared to stretch: t2 has a 1σ posterior interval for the Hubble residual scatter of σΔμ = {0.250, 0.257} mag, compared to σΔμ = {0.280, 0.287} mag when stretch (x1) alone is used. We demonstrate that when employed together with a colour correction, t2 and stretch lead to similar residual scatter. Using colour, stretch and t2 jointly as standardization parameters does not result in any further reduction in scatter, suggesting that t2 carries redundant information with respect to stretch and colour. With a much larger SNIa NIR sample at higher redshift in the future, t2 could be a useful quantity to perform robustness checks of the standardization procedure.
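For orientation, a hedged sketch of the two standardization relations being compared; the notation is assumed rather than quoted from the paper.

```latex
% Hedged sketch (notation assumed): the SALT2-style stretch/colour correction and the variant
% in which the NIR second-peak time t_2 replaces the stretch x_1.
\mu = m_B - M_B + \alpha\,x_1 - \beta\,c
\qquad\text{vs.}\qquad
\mu = m_B - M_B + \alpha_t\,(t_2 - \bar t_2) - \beta\,c
```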
Sleep in patients with remitted bipolar disorders: a meta-analysis of actigraphy studies.
Geoffroy, P A; Scott, J; Boudebesse, C; Lajnef, M; Henry, C; Leboyer, M; Bellivier, F; Etain, B
2015-02-01
Sleep dysregulation is highly prevalent in bipolar disorders (BDs), with previous actigraphic studies demonstrating sleep abnormalities during depressive, manic, and interepisode periods. We undertook a meta-analysis of published actigraphy studies to identify whether any abnormalities in the reported sleep profiles of remitted BD cases differ from controls. A systematic review identified independent studies that were eligible for inclusion in a random effects meta-analysis. Effect sizes for actigraphy parameters were expressed as standardized mean differences (SMD) with 95% confidence intervals (95% CI). Nine of 248 identified studies met eligibility criteria. Compared with controls (N=210), remitted BD cases (N=202) showed significant differences in SMD for sleep latency (0.51 [0.28-0.73]), sleep duration (0.57 [0.30-0.84]), wake after sleep onset (WASO) (0.28 [0.06-0.50]) and sleep efficiency (-0.38 [-0.70-0.07]). Moderate heterogeneity was identified for sleep duration (I²=44%) and sleep efficiency (I²=44%). Post hoc meta-regression analyses demonstrated that larger SMD for sleep duration were identified for studies with a greater age difference between BD cases and controls (β=0.22; P=0.03) and non-significantly lower levels of residual depressive symptoms in BD cases (β=-0.13; P=0.07). This meta-analysis of sleep in remitted bipolar disorder highlights disturbances in several sleep parameters. Future actigraphy studies should pay attention to age matching and levels of residual depressive symptoms. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
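The effect-size computation and pooling can be sketched as follows; the study summaries are invented placeholders, and the DerSimonian-Laird random-effects pooling shown here is one common choice of estimator, not necessarily the exact procedure used by the authors.

```python
import numpy as np

# Hypothetical per-study summaries for one actigraphy parameter (e.g. sleep latency):
# (mean_BD, sd_BD, n_BD, mean_control, sd_control, n_control)
studies = [(22.0, 15.0, 25, 14.0, 10.0, 30),
           (30.0, 20.0, 18, 20.0, 18.0, 20),
           (18.0, 12.0, 40, 13.0, 11.0, 45)]

d, v = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))  # pooled SD
    di = (m1 - m2) / sp                                                  # Cohen's d (SMD)
    vi = (n1 + n2) / (n1 * n2) + di**2 / (2 * (n1 + n2))                 # approximate variance of d
    d.append(di); v.append(vi)
d, v = np.array(d), np.array(v)

# DerSimonian-Laird random-effects pooling
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)
tau2 = max(0.0, (Q - (len(d) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
smd = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
i2 = max(0.0, (Q - (len(d) - 1)) / Q) * 100 if Q > 0 else 0.0
print(f"pooled SMD {smd:.2f} [{smd - 1.96*se:.2f}, {smd + 1.96*se:.2f}], I^2 = {i2:.0f}%")
```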
Benchmarking in Thoracic Surgery. Third Edition.
Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás
2016-04-01
Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.
Comparison of vocal outcomes after angiolytic laser surgery and microflap surgery for vocal polyps.
Mizuta, Masanobu; Hiwatashi, Nao; Kobayashi, Toshiki; Kaneko, Mami; Tateya, Ichiro; Hirano, Shigeru
2015-12-01
The microflap technique is a standard procedure for the treatment of vocal fold polyps. Angiolytic laser surgery carried out under topical anesthesia is an alternative method for vocal polyp removal. However, it is not clear whether angiolytic laser surgery has the same effects on vocal outcomes as the microflap technique because of a lack of studies comparing both procedures. In the current study, vocal outcomes after both procedures were compared to clarify the effects of angiolytic laser surgery for vocal polyp removal. Vocal outcomes were reviewed for patients who underwent angiolytic laser surgery (n=20, laser group) or microflap surgery (n=34, microflap group) for vocal polyp removal. The data analyzed included patient and lesion characteristics, number of surgeries required for complete resolution, and aerodynamic and acoustic examinations before and after surgery. In the laser surgery group, complete resolution of the lesion was achieved with a single procedure in 17 cases (85%) and with two procedures in 3 cases (15%). Postoperative aerodynamic and acoustic parameters demonstrated significant improvement compared to preoperative parameters in both the laser surgery group and the microflap surgery group. There were no significant differences in any postoperative aerodynamic and acoustic parameters between the two groups. The current retrospective study demonstrated that angiolytic laser surgery achieved complete resolution of vocal polyps within two procedures. Postoperative effects on aerodynamic and acoustic functions were similar to those after microflap surgery. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
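The probabilistic multivariate approach can be illustrated on a deliberately tiny, hypothetical two-state MDP: parameters are sampled jointly from assumed distributions, the MDP is re-solved for each draw, and the proportion of draws in which the base-case policy remains optimal summarizes confidence (the quantity a policy acceptability curve would plot against stakeholders' willingness to accept). The disease model, distributions and rewards below are illustrative placeholders, not the authors' case study.

```python
import numpy as np

rng = np.random.default_rng(4)
GAMMA, STATES, ACTIONS = 0.97, 2, 2   # toy model: states {healthy, sick}, actions {watch, treat}

def solve(p_progress, treat_effect, cost_treat):
    """Value iteration for a hypothetical 2-state chronic-disease MDP; returns the optimal action in state 0."""
    # P[a, s, s'] transition probabilities; R[a, s] per-period rewards (e.g. QALYs net of cost)
    P = np.zeros((ACTIONS, STATES, STATES))
    P[0, 0] = [1 - p_progress, p_progress]                                # watchful waiting
    P[1, 0] = [1 - p_progress * treat_effect, p_progress * treat_effect]  # treatment slows progression
    P[:, 1] = [0.0, 1.0]                                                  # sick state is absorbing
    R = np.array([[1.0, 0.3], [1.0 - cost_treat, 0.3]])
    V = np.zeros(STATES)
    for _ in range(500):
        Q = R + GAMMA * P @ V        # action values Q[a, s]
        V = Q.max(axis=0)
    return int(Q[:, 0].argmax())     # optimal action in the healthy state

base_policy = solve(p_progress=0.10, treat_effect=0.5, cost_treat=0.05)

# Probabilistic (multivariate) sensitivity analysis: sample parameters jointly from assumed
# distributions and record how often the base-case policy remains optimal.
agree = [solve(rng.beta(10, 90), rng.beta(5, 5), rng.gamma(2, 0.025)) == base_policy
         for _ in range(2000)]
print(f"base-case optimal action in healthy state: {base_policy}")
print(f"confidence that it stays optimal: {np.mean(agree):.2%}")
```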
Dynamic large eddy simulation: Stability via realizability
NASA Astrophysics Data System (ADS)
Mokhtarpoor, Reza; Heinz, Stefan
2017-10-01
The concept of dynamic large eddy simulation (LES) is highly attractive: such methods can dynamically adjust to changing flow conditions, which is known to be highly beneficial. For example, this avoids the use of empirical, case dependent approximations (like damping functions). Ideally, dynamic LES should be local in physical space (without involving artificial clipping parameters), and it should be stable for a wide range of simulation time steps, Reynolds numbers, and numerical schemes. These properties are not trivial, but dynamic LES suffers from such problems over decades. We address these questions by performing dynamic LES of periodic hill flow including separation at a high Reynolds number Re = 37 000. For the case considered, the main result of our studies is that it is possible to design LES that has the desired properties. It requires physical consistency: a PDF-realizable and stress-realizable LES model, which requires the inclusion of the turbulent kinetic energy in the LES calculation. LES models that do not honor such physical consistency can become unstable. We do not find support for the previous assumption that long-term correlations of negative dynamic model parameters are responsible for instability. Instead, we concluded that instability is caused by the stable spatial organization of significant unphysical states, which are represented by wall-type gradient streaks of the standard deviation of the dynamic model parameter. The applicability of our realizability stabilization to other dynamic models (including the dynamic Smagorinsky model) is discussed.
Rao, Harsha L; Yadav, Ravi K; Addepalli, Uday K; Begum, Viquar U; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S
2015-08-01
To evaluate the relationship between the reference standard used to diagnose glaucoma and the diagnostic ability of spectral domain optical coherence tomograph (SDOCT). In a cross-sectional study, 280 eyes of 175 consecutive subjects, referred to a tertiary eye care center for glaucoma evaluation, underwent optic disc photography, visual field (VF) examination, and SDOCT examination. The cohort was divided into glaucoma and control groups based on 3 reference standards for glaucoma diagnosis: first based on the optic disc classification (179 glaucoma and 101 control eyes), second on VF classification (glaucoma hemifield test outside normal limits and pattern SD with P-value of <5%, 130 glaucoma and 150 control eyes), and third on the presence of both glaucomatous optic disc and glaucomatous VF (125 glaucoma and 155 control eyes). Relationship between the reference standards and the diagnostic parameters of SDOCT were evaluated using areas under the receiver operating characteristic curve, sensitivity, and specificity. Areas under the receiver operating characteristic curve and sensitivities of most of the SDOCT parameters obtained with the 3 reference standards (ranging from 0.74 to 0.88 and 72% to 88%, respectively) were comparable (P>0.05). However, specificities of SDOCT parameters were significantly greater (P<0.05) with optic disc classification as reference standard (74% to 88%) compared with VF classification as reference standard (57% to 74%). Diagnostic parameters of SDOCT that was significantly affected by reference standard was the specificity, which was greater with optic disc classification as the reference standard. This has to be considered when comparing the diagnostic ability of SDOCT across studies.
Measuring systems of hard to get objects: problems with analysis of measurement results
NASA Astrophysics Data System (ADS)
Gilewska, Grazyna
2005-02-01
The problem of limited access to the metrological parameters of objects arises in many measurements, especially for biological objects, whose parameters are very often determined through indirect research. When access to the measurement object is very limited, random components dominate the measurement results. Every measuring process is subject to conditions that limit how it can be modified (e.g. increasing the number of measurement repetitions to decrease the random limiting error). These may be time or financial constraints or, in the case of biological objects, small sample volumes, the influence of the measuring tool and observers on the object, or fatigue effects, e.g. in a patient. Taking these difficulties into consideration, the author developed and verified the practical application of methods for outlying-observation reduction and, subsequently, innovative methods for eliminating measured data with excess variance, in order to decrease the mean standard deviation of the measured data with a limited amount of data and an accepted level of confidence. The elaborated methods were verified on measurements of knee-joint space width obtained from radiographs. Measurements were carried out indirectly on digital images of the radiographs. The results confirmed the validity of the elaborated methodology and measurement procedures. Such a methodology is especially important when standard approaches do not bring the expected results.
A general diagnostic model applied to language testing data.
von Davier, Matthias
2008-11-01
Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
Thermal barrier coatings on gas turbine blades: Chemical vapor deposition (Review)
NASA Astrophysics Data System (ADS)
Igumenov, I. K.; Aksenov, A. N.
2017-12-01
Schemes are presented for experimental setups (reactors) developed at leading scientific centers connected with the development of technologies for the deposition of coatings using the CVD method: at the Technical University of Braunschweig (Germany), the French Aerospace Research Center, the Materials Research Institute (Tohoku University, Japan) and Oak Ridge National Laboratory (USA). Conditions and modes for obtaining the coatings with high operational parameters are considered. It is established that the formed thermal barrier coatings do not fundamentally differ in their properties (columnar microstructure, thermocyclic resistance, thermal conductivity coefficient) from standard electron-beam condensates, but the highest growth rates and the perfection of the crystal structure are achieved in the case of plasma-chemical processes and in reactors with additional laser or induction heating of the workpiece. It is shown that CVD reactors can serve as a basis for the development of rational and more advanced technologies for coating gas turbine blades that are not inferior to standard electron-beam plants in terms of the quality of produced coatings and have a much simpler and cheaper structure. The possibility of developing a new technology based on CVD processes for the formation of thermal barrier coatings with high operational parameters is discussed, including a set of requirements for industrial reactors, high-performance sources of vapor precursors, and promising new materials.
Model-independent reconstruction of f( T) teleparallel cosmology
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2017-11-01
We propose a model-independent formalism to numerically solve the modified Friedmann equations in the framework of f(T) teleparallel cosmology. Our strategy is to expand the Hubble parameter around the redshift z=0 up to a given order and to adopt cosmographic bounds as initial settings to determine the corresponding f(z) ≡ f(T(H(z))) function. In this perspective, we distinguish two cases: the first expansion is up to the jerk parameter, the second expansion is up to the snap parameter. We show that inside the observed redshift domain z ≤ 1, only the net strength of f(z) is modified passing from jerk to snap, whereas its functional behavior and shape turn out to be identical. As a first step, we set the cosmographic parameters by means of the most recent observations. Afterwards, we calibrate our numerical solutions with the concordance ΛCDM model. In both cases, there is a good agreement with the cosmological standard model around z ≤ 1, with severe discrepancies outside this limit. We demonstrate that the effective dark energy term evolves following the test function f(z) = A + Bz²e^{Cz}. Bounds on the set A, B, C are also fixed by statistical considerations, comparing discrepancies between f(z) and the data. The approach opens the possibility to get a wide class of test functions able to frame the dynamics of f(T) without postulating any model a priori. We thus re-obtain the f(T) function through a back-scattering procedure once f(z) is known. We figure out the properties of our f(T) function at the level of background cosmology, to check the goodness of our numerical results. Finally, a comparison with previous cosmographic approaches is carried out giving results compatible with theoretical expectations.
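For reference, a hedged sketch of the cosmographic expansion that supplies the initial settings, together with the quoted test function; sign conventions and the exact truncation are assumed rather than taken from the paper.

```latex
% Hubble rate expanded about z = 0 in terms of the deceleration q_0 and jerk j_0
% (with the snap s_0 entering at the next order), alongside the quoted test function:
H(z) \simeq H_0\left[1 + (1+q_0)\,z + \tfrac{1}{2}\left(j_0 - q_0^{2}\right)z^{2}
      + \mathcal{O}(z^{3})\right],
\qquad
f(z) = A + B\,z^{2}e^{Cz}.
```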
Acoustical characterization and parameter optimization of polymeric noise control materials
NASA Astrophysics Data System (ADS)
Homsi, Emile N.
2003-10-01
The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials, with respect to physical parameters that affect performance, are developed for a single-layer panel configuration and adapted for layered panel construction with a homogeneous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound-attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have certain acoustical characteristics that vary as a function of the stiffness and density of the selected material. The validity and applicability of existing theory are explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of top and bottom layers are considered. It was found that layered construction of the co-injection type would exhibit fused planes at the interface and move in phase. The model for the single-layer case is adapted to the layered case, where it would behave as a single panel. Primary physical parameters that affect STL are identified and manipulated. Theoretical analysis is linked to the resin's matrix attribute. High-STL material with representative characteristics is evaluated versus standard resins. It was found that high STL could be achieved by altering the material's matrix and by integrating design solutions in the low-frequency range. A suggested numerical approach is described for STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
Stolberg-Stolberg, Josef; Horn, Dagmar; Roßlenbroich, Steffen; Riesenbeck, Oliver; Kampmeier, Stefanie; Mohr, Michael; Raschke, Michael J; Hartensuer, René
2017-04-01
Candida-induced spondylodiscitis of the cervical spine in immunocompetent patients is an extremely rare infectious complication. Since clinical symptoms might be nonspecific, therapeutic latency can lead to permanent spinal cord damage, sepsis and fatal complications. Surgical debridement is strongly recommended, but there is no standard antimycotic regimen for postsurgical treatment. This paper summarizes available data and demonstrates another successfully treated case. The systematic analysis was performed according to the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines. PubMed and Web of Science were scanned to identify English language articles. Additionally, the authors describe the case of a 60-year-old male patient who presented with a Candida albicans-induced cervical spondylodiscitis after an edematous pancreatitis and C. albicans sepsis. Anterior cervical corpectomy and fusion of C4-C6, additional anterior plating, as well as posterior stabilization C3-Th1, were followed by 6 months of antimycotic therapy. There was neither funding nor conflict of interests. A systematic literature analysis was conducted and 4599 articles on spondylodiscitis were scanned. Only four cases were found reporting C. albicans spondylodiscitis in a non-immunocompromised patient. So far, our patient has been followed up for 2 years and remains free of symptoms, with no elevated infection parameters. Standard testing for immunodeficiency showed no positive results. Candida albicans spondylodiscitis of the cervical spine presents a potentially life-threatening disease. To our knowledge, this is the fifth case in the literature that describes the treatment of C. albicans spondylodiscitis in an immunocompetent patient. Surgical debridement has to be considered; recommendations for the subsequent antimycotic regimen vary in pharmaceutical agents and treatment duration.
Rodríguez, Francisco J; Schlenger, Patrick; García-Valverde, María
2016-01-15
The main objective of this work is to conduct a comprehensive structural characterization of humic substances using the following experimental techniques: FTIR, 1H NMR and several UV–Vis parameters (Specific UV Absorbance at 254 nm or SUVA254, SUVA280, A400, the absorbance ratios A210/254, A250/365, A254/203, A254/436, A265/465, A270/400, A280/350, A465/665, the Absorbance Slope Index (ASI), the spectral slopes S275–295, S350–400 and the slope ratio SR). These UV–Vis parameters have also been correlated with key properties of humic substances such as aromaticity, molecular weight (MW) and trihalomethane formation potential (THMFP). An additional objective of this work is to evaluate the usefulness of these techniques to monitor structural changes in humic substances produced by the ozonation treatment. Four humic substances were studied in this work: three of them were provided by the International Humic Substances Society (Suwannee River Fulvic Acid Standard: SRFA, Suwannee River Humic Acid Standard: SRHA and Nordic Reservoir Fulvic Acid Reference: NLFA) and the other one was a terrestrial humic acid widely used as a surrogate for aquatic humic substances in various studies (Aldrich Humic Acid: AHA). The UV–Vis parameters showing the best correlations with aromaticity in this study were SUVA254, SUVA280, the A280/A350 ratio and the A250/A365 ratio. The best correlations with molecular weight were for SUVA254, SUVA280 and the A280/A350 ratio. Finally, in the case of the THMFP, STHMFP per mol HS was the parameter showing good correlations with most of the UV–Vis parameters studied (especially with the A280/A350, A265/A465 and A270/A400 ratios), whereas STHMFP per mg C showed poor correlations in most cases. On the whole, the UV–Vis parameter showing the best results was the A280/A350 ratio, as it showed excellent correlations for the three properties studied (aromaticity, MW and THMFP). A decrease in aromaticity following ozonation of humic substances can be readily monitored by 1H NMR and FTIR; the latter technique also allows monitoring of an increase in carboxylic acidity with ozone dosage. The organic matter originating from ozonation (more aliphatic in character and more polar) is expected to be recalcitrant to further oxidation. The terrestrial humic acid (AHA) showed some structural differences with the aquatic humic substances, and its behavior upon ozonation also differed to some extent from theirs.
NASA Technical Reports Server (NTRS)
1974-01-01
Shuttle simulation software modules in the environment, crew station, vehicle configuration and vehicle dynamics categories are discussed. For each software module covered, a description of the module functions and operational modes, its interfaces with other modules, its stored data, inputs, performance parameters and critical performance parameters is given. Reference data sources which provide standards of performance are identified for each module. Performance verification methods are also discussed briefly.
Human Resource Scheduling in Performing a Sequence of Discrete Responses
2009-02-28
each is a graph comparing simulated results of each respective model with data from Experiment 3b. As described below the parameters of the model...initiated in parallel with ongoing Central operations on another. To fix model parameters we estimated the range of times to perform the sum of the...standard deviation for each parameter was set to 50% of mean value. Initial simulations found no meaningful differences between setting the standard
Mustafa, Gulgun; Kursat, Fidanci Muzaffer; Ahmet, Tas; Alparslan, Genc Fatih; Omer, Gunes; Sertoglu, Erdem; Erkan, Sarı; Ediz, Yesilkaya; Turker, Turker; Ayhan, Kılıc
Childhood obesity is a worldwide health concern. Studies have shown autonomic dysfunction in obese children. The exact mechanism of this dysfunction is still unknown. The aim of this study was to assess the relationship between erythrocyte membrane fatty acid (EMFA) levels and cardiac autonomic function in obese children using heart rate variability (HRV). A total of 48 obese and 32 healthy children were included in this case-control study. Anthropometric and biochemical data, HRV indices, and EMFA levels in both groups were compared statistically. HRV parameters including standard deviation of normal-to-normal R-R intervals (NN), root mean square of successive differences, the number of pairs of successive NNs that differ by >50 ms (NN50), the proportion of NN50 divided by the total number of NNs, high-frequency power, and low-frequency power were lower in obese children compared to controls, implying parasympathetic impairment. Eicosapentaenoic acid and docosahexaenoic acid levels were lower in the obese group (p<0.001 and p=0.012, respectively). In correlation analysis, in the obese group, body mass index standard deviation and linoleic acid, arachidonic acid, triglycerides, and high-density lipoprotein levels showed a linear correlation with one or more HRV parameter, and age, eicosapentaenoic acid, and systolic and diastolic blood pressure correlated with mean heart rate. In linear regression analysis, age, dihomo-gamma-linolenic acid, linoleic acid, arachidonic acid, body mass index standard deviation, systolic blood pressure, triglycerides, low-density lipoprotein and high-density lipoprotein were related to HRV parameters, implying an effect on cardiac autonomic function. There is impairment of cardiac autonomic function in obese children. It appears that levels of EMFAs such as linoleic acid, arachidonic acid and dihomo-gamma-linolenic acid play a role in the regulation of cardiac autonomic function in obese children. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
Frequency-domain full-waveform inversion with non-linear descent directions
NASA Astrophysics Data System (ADS)
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s₀ is, in our scheme, proportional to at most (Δs/s₀)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s₀)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
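The quoted single-update error scaling can be written compactly as follows; the symbol names are assumed for presentation.

```latex
% Error of one update to a background s_0 for a reflection caused by a parameter jump \Delta s
% (pre-critical reflection data), as stated in the abstract:
\epsilon_{\mathrm{GN}} \;\propto\; \left(\frac{\Delta s}{s_0}\right)^{2},
\qquad
\epsilon_{\mathrm{NL}} \;\propto\; \left(\frac{\Delta s}{s_0}\right)^{3}.
```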
NASA Astrophysics Data System (ADS)
Muramatsu, Chisako; Hatanaka, Yuji; Iwase, Tatsuhiko; Hara, Takeshi; Fujita, Hiroshi
2010-03-01
Abnormalities of the retinal vasculature can indicate health conditions in the body, such as high blood pressure and diabetes. Providing an automatically determined width ratio of arteries and veins (A/V ratio) on retinal fundus images may help physicians in the diagnosis of hypertensive retinopathy, which may cause blindness. The purpose of this study was to detect major retinal vessels and classify them into arteries and veins for the determination of the A/V ratio. Images used in this study were obtained from the DRIVE database, which consists of 20 cases each for training and testing vessel detection algorithms. Starting with the reference standard of vasculature segmentation provided in the database, major arteries and veins in the upper and lower temporal regions were manually selected to establish the gold standard. We applied the black top-hat transformation and a double-ring filter to detect retinal blood vessels. From the extracted vessels, large vessels extending from the optic disc to the temporal regions were selected as target vessels for calculation of the A/V ratio. Image features were extracted from the vessel segments from a quarter disc to one disc diameter from the edge of the optic disc. The target segments in the training cases were classified into arteries and veins using linear discriminant analysis, and the selected parameters were applied to those in the test cases. Out of 40 pairs, 30 pairs (75%) of arteries and veins in the 20 test cases were correctly classified. The result can be used for the automated calculation of the A/V ratio.
Non-minimally coupled f(R) cosmology
NASA Astrophysics Data System (ADS)
Thakur, Shruti; Sen, Anjan A.; Seshadri, T. R.
2011-02-01
We investigate the consequences of non-minimal gravitational coupling to matter and study how it differs from the case of minimal coupling by choosing certain simple forms for the nature of the coupling. The values of the parameters are specified at z=0 (present epoch) and the equations are evolved backwards to calculate the evolution of cosmological parameters. We find that the Hubble parameter evolves more slowly in the non-minimal coupling case as compared to the minimal coupling case. In both cases, the universe accelerates around the present time and enters the decelerating regime in the past. Using the latest Union2 dataset for supernova Type Ia observations as well as the data for baryon acoustic oscillation (BAO) from SDSS observations, we constrain the parameters of the Linder exponential model in the two different approaches. We find that there is an upper bound on the model parameter in the minimal coupling case, but for the non-minimal coupling case there is a range of allowed values for the model parameter.
PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets
Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.
2016-01-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer-valued, with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
PCAN: Probabilistic correlation analysis of two non-normal data sets.
Zoh, Roger S; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S; Lampe, Johanna W; Carroll, Raymond J
2016-12-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer-valued, with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. © 2016, The International Biometric Society.
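The failure mode of the sample Pearson correlation on low-count data described above can be illustrated with a small simulation. The sketch below is not the PCAN model itself; it only contrasts the correlation computed on observed Poisson counts with the correlation of the underlying latent natural parameters, which are generated here from a bivariate normal on the log scale (all settings are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Latent natural parameters (log-means), strongly correlated across the two platforms.
cov = [[1.0, 0.9], [0.9, 1.0]]
log_mu = rng.multivariate_normal(mean=[-1.5, -1.5], cov=cov, size=n)

# Observed counts are Poisson draws with very low means (exp(-1.5) is about 0.22).
counts = rng.poisson(np.exp(log_mu))

latent_corr = np.corrcoef(log_mu[:, 0], log_mu[:, 1])[0, 1]
sample_corr = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
print(f"correlation of natural parameters: {latent_corr:.2f}")
print(f"Pearson correlation of raw counts: {sample_corr:.2f}")  # markedly attenuated
```

The attenuation of the raw-count correlation is what motivates estimating correlation at the natural-parameter level rather than directly on the observed data.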
Sensitivities to charged-current nonstandard neutrino interactions at DUNE
NASA Astrophysics Data System (ADS)
Bakhti, Pouya; Khan, Amir N.; Wang, W.
2017-12-01
We investigate the effects of charged-current (CC) nonstandard neutrino interactions (NSIs) at the source and at the detector in the simulated data for the planned Deep Underground Neutrino Experiment (DUNE). We neglect the neutral-current NSIs at propagation because several solutions have already been proposed for resolving the degeneracies posed by neutral-current NSIs, but no solutions exist for the degeneracies due to the CC NSIs. We study the effects of CC NSIs on the simultaneous measurements of θ23 and δCP in DUNE. The analysis reveals that the 3σ C.L. measurement of the correct octant of θ23 in the standard mixing scenario is spoiled if the CC NSIs are taken into account. Likewise, the CC NSIs can worsen the uncertainty of the δCP measurement by a factor of two relative to that in the standard oscillation scenario. We also show that the source and the detector CC NSIs can induce a significant amount of fake CP violation, and the CP-conserving case can be excluded at more than 80% C.L. in the presence of fake CP violation. We further find DUNE's potential for constraining the relevant CC NSI parameters from single-parameter fits for both neutrino and antineutrino appearance and disappearance channels at both the near and far detectors. The results show that there could be improvements in the current bounds by at least one order of magnitude at DUNE's near and far detectors, except for a few parameters whose bounds remain weaker at the far detector.
Using simulation to aid trial design: Ring-vaccination trials.
Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc
2017-03-01
The 2014-2016 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate the necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings, and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve the robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.
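For context, the kind of "standard" sample-size calculation referred to above, comparing attack rates between two arms while ignoring dynamic effects, can be sketched as a textbook two-proportion formula with an optional design-effect inflation for intracluster correlation. The incidence values and the design effect below are illustrative assumptions, not the parameters used in the study.

```python
from scipy.stats import norm

def n_per_arm(p_control, p_vaccine, alpha=0.05, power=0.8, design_effect=1.0):
    """Two-proportion sample size per arm, optionally inflated by a design effect
    as a crude allowance for intracluster correlation."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    var = p_control * (1 - p_control) + p_vaccine * (1 - p_vaccine)
    n = (z_a + z_b) ** 2 * var / (p_control - p_vaccine) ** 2
    return design_effect * n

# Illustrative attack rates only (not the trial's actual assumptions):
print(round(n_per_arm(p_control=0.02, p_vaccine=0.01, design_effect=2.0)))
```

The paper's point is that such closed-form calculations can misstate the required size when indirect protection and epidemic dynamics matter, which is why the simulation-based approach was used.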
NASA Astrophysics Data System (ADS)
Mortier, A.; Sousa, S. G.; Adibekyan, V. Zh.; Brandão, I. M.; Santos, N. C.
2014-12-01
Context. Precise stellar parameters (effective temperature, surface gravity, metallicity, stellar mass, and radius) are crucial for several reasons, amongst which are the precise characterization of orbiting exoplanets and the correct determination of galactic chemical evolution. The atmospheric parameters are extremely important because all the other stellar parameters depend on them. Using our standard equivalent-width method on high-resolution spectroscopy, good precision can be obtained for the derived effective temperature and metallicity. The surface gravity, however, is usually not well constrained with spectroscopy. Aims: We use two different samples of FGK dwarfs to study the effect of the stellar surface gravity on the precise spectroscopic determination of the other atmospheric parameters. Furthermore, we present a straightforward formula for correcting the spectroscopic surface gravities derived by our method and with our linelists. Methods: Our spectroscopic analysis is based on Kurucz models in local thermodynamic equilibrium, performed with the MOOG code to derive the atmospheric parameters. The surface gravity was either left free or fixed to a predetermined value. The latter is either obtained through a photometric transit light curve or derived using asteroseismology. Results: We find first that, despite some minor trends, the effective temperatures and metallicities for FGK dwarfs derived with the described method and linelists are, in most cases, only affected within the errorbars by using different values for the surface gravity, even for very large differences in surface gravity, so they can be trusted. The temperatures derived with a fixed surface gravity continue to be compatible within 1 sigma with the accurate results of the infrared flux method (IRFM), as is the case for the unconstrained temperatures. Secondly, we find that the spectroscopic surface gravity can easily be corrected to a more accurate value using a linear function with the effective temperature. Tables 1 and 2 are available in electronic form at http://www.aanda.org
Family Advocacy Program Standards and Self-Assessment Tool
1992-08-01
child abuse and neglect and spouse abuse. The standards are based upon a complete review of relevant criteria, accepted professional practices and current military FAP practices. Standards are... Child Abuse and Neglect Cases; Intervention and Treatment in Spouse Abuse Cases; Case Accountability in FAP Cases; Staffing for FAP Services;
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
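A minimal sketch of the metamodeling idea, assuming the PSA parameter draws and the corresponding model outcomes are already available as arrays: standardize the inputs, regress the outcome on them, and read the intercept as the estimated base-case outcome and the coefficients as relative sensitivity measures. The data below are simulated stand-ins, not the cancer cure model from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_params = 10_000, 3

# Stand-in PSA: parameter draws and the model outcome (e.g. net benefit) they produce.
theta = rng.normal(size=(n_draws, n_params))
outcome = 5.0 + theta @ np.array([2.0, -1.0, 0.3]) + rng.normal(scale=0.5, size=n_draws)

# Standardize inputs so the coefficients are comparable across parameters.
z = (theta - theta.mean(axis=0)) / theta.std(axis=0)
X = np.column_stack([np.ones(n_draws), z])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print("estimated base-case outcome (intercept):", round(coef[0], 2))
print("standardized sensitivity coefficients:", np.round(coef[1:], 2))
```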
Much ado about two: reconsidering retransformation and the two-part model in health econometrics.
Mullahy, J
1998-06-01
In health economics applications involving outcomes (y) and covariates (x), it is often the case that the central inferential problems of interest involve E[y|x] and its associated partial effects or elasticities. Many such outcomes have two fundamental statistical properties: y ≥ 0; and the outcome y = 0 is observed with sufficient frequency that the zeros cannot be ignored econometrically. This paper (1) describes circumstances where the standard two-part model with homoskedastic retransformation will fail to provide consistent inferences about important policy parameters; and (2) demonstrates some alternative approaches that are likely to prove helpful in applications.
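The two-part decomposition at the heart of the discussion is E[y|x] = Pr(y > 0 | x) · E[y | y > 0, x]. A toy numpy sketch with a single binary covariate (all data simulated for illustration) shows the decomposition itself; it does not reproduce the retransformation issues the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
x = rng.integers(0, 2, size=n)                       # binary covariate

# Simulated outcome: many exact zeros, lognormal spending when positive.
p_pos = np.where(x == 1, 0.6, 0.3)
positive = rng.random(n) < p_pos
y = np.where(positive, rng.lognormal(mean=1.0 + 0.5 * x, sigma=1.0), 0.0)

# Two-part estimate of E[y | x]: Pr(y > 0 | x) times E[y | y > 0, x].
for xv in (0, 1):
    mask = x == xv
    part1 = (y[mask] > 0).mean()
    part2 = y[mask][y[mask] > 0].mean()
    print(f"x={xv}: two-part E[y|x] = {part1 * part2:.2f}, direct mean = {y[mask].mean():.2f}")
```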
Historical floods in the Dutch Rhine Delta
NASA Astrophysics Data System (ADS)
Glaser, R.; Stangl, H.
Historical records provide direct information about the climatic impact on society. Great natural disasters, such as river floods, have long attracted the attention of humankind. Time series of flood development on the Rhine branches Waal, Nederrijn/Lek and IJssel in the Dutch Rhine Delta are presented in this paper. In the case of the Waal, it is even possible to compare historical flood frequencies based on documentary data with the recent development reconstructed from standardized instrumental measurements. We also briefly discuss various parameters concerning the structure of the flood series and the "human dimension" of natural disaster, i.e. the vulnerability of society when facing natural disasters.
Separation Potential for Multicomponent Mixtures: State-of-the Art of the Problem
NASA Astrophysics Data System (ADS)
Sulaberidze, G. A.; Borisevich, V. D.; Smirnov, A. Yu.
2017-03-01
Various approaches used in introducing a separation potential (value function) for multicomponent mixtures have been analyzed. It has been shown that none of the known potentials satisfies the Dirac-Peierls axioms for a binary mixture of uranium isotopes, which makes their practical application difficult. This is mainly due to the impossibility of constructing a "standard" cascade, whose role in the case of separation of binary mixtures is played by the ideal cascade. As a result, the only universal method of searching for the optimal parameters of a separation cascade is their numerical optimization by the criterion of the minimum number of separation elements in it.
MCNP calculations for container inspection with tagged neutrons
NASA Astrophysics Data System (ADS)
Boghen, G.; Donzella, A.; Filippini, V.; Fontana, A.; Lunardon, M.; Moretto, S.; Pesente, S.; Zenoni, A.
2005-12-01
We are developing an innovative tagged neutrons inspection system (TNIS) for cargo containers: the system will allow us to assay the chemical composition of suspect objects previously identified by a standard X-ray radiography. The operation of the system is being extensively simulated using the MCNP Monte Carlo code to study different inspection geometries, cargo loads and hidden threat materials. Preliminary simulations evaluating the signal and the signal-to-background ratio expected as a function of the system parameters are presented. The results for a selection of cases are briefly discussed and demonstrate that the system can operate successfully in different filling conditions.
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
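A compact, generic PSO loop (textbook form, with illustrative inertia and acceleration constants; not the authors' implementation and not their CMB likelihood) shows the mechanics of probing a likelihood surface without gradients.

```python
import numpy as np

def pso_minimize(neg_log_like, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize neg_log_like over a box; bounds is a sequence of (low, high) per dimension."""
    rng = np.random.default_rng(3)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([neg_log_like(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([neg_log_like(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Toy 2-parameter "likelihood": a quadratic bowl centred at (0.3, 0.7).
best, _ = pso_minimize(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2, [(0, 1), (0, 1)])
print(np.round(best, 3))
```

Because no gradient or smoothness is required, the same loop can be pointed at a rough or multi-modal likelihood, which is the situation the paper targets.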
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Salem, Jonathan A.
1998-01-01
The service life of structural ceramic components is often limited by the process of slow crack growth. Therefore, it is important to develop an appropriate testing methodology for accurately determining the slow crack growth design parameters necessary for component life prediction. In addition, an appropriate test methodology can be used to determine the influences of component processing variables and composition on the slow crack growth and strength behavior of newly developed materials, thus allowing the component process to be tailored and optimized to specific needs. At the NASA Lewis Research Center, work to develop a standard test method to determine the slow crack growth parameters of advanced ceramics was initiated by the authors in early 1994 in the C 28 (Advanced Ceramics) committee of the American Society for Testing and Materials (ASTM). After about 2 years of required balloting, the draft written by the authors was approved and established as a new ASTM test standard: ASTM C 1368-97, Standard Test Method for Determination of Slow Crack Growth Parameters of Advanced Ceramics by Constant Stress-Rate Flexural Testing at Ambient Temperature. Briefly, the test method uses constant stress-rate testing to determine strengths as a function of stress rate at ambient temperature. Strengths are measured in a routine manner at four or more stress rates by applying constant displacement or loading rates. The slow crack growth parameters required for design are then estimated from a relationship between strength and stress rate. This new standard will be published in the Annual Book of ASTM Standards, Vol. 15.01, in 1998. Currently, a companion draft ASTM standard for determination of the slow crack growth parameters of advanced ceramics at elevated temperatures is being prepared by the authors and will be presented to the committee by the middle of 1998. Consequently, Lewis will maintain an active leadership role in advanced ceramics standardization within ASTM. In addition, the authors have been and are involved with several international standardization organizations including the Versailles Project on Advanced Materials and Standards (VAMAS), the International Energy Agency (IEA), and the International Organization for Standardization (ISO). The associated standardization activities involve fracture toughness, strength, elastic modulus, and the machining of advanced ceramics.
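The constant stress-rate analysis underlying ASTM C1368 relates measured strength to the applied stress rate through a power law, commonly written as log σ_f = [1/(n+1)] log σ̇ + log D, so the slow-crack-growth exponent n follows from the slope of a log-log fit. A hedged numerical sketch of that fit (synthetic strengths, not NASA data) is shown below.

```python
import numpy as np

# Synthetic constant stress-rate data: four stress rates (MPa/s) and mean strengths (MPa).
stress_rate = np.array([0.1, 1.0, 10.0, 100.0])
strength = np.array([310.0, 330.0, 352.0, 375.0])     # illustrative values only

# Fit log(strength) = slope * log(stress_rate) + intercept, with slope = 1/(n+1).
slope, intercept = np.polyfit(np.log10(stress_rate), np.log10(strength), 1)
n = 1.0 / slope - 1.0          # slow crack growth exponent
D = 10 ** intercept            # scaling parameter

print(f"estimated SCG exponent n = {n:.1f}, D = {D:.1f} MPa")
```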
Kunz, M; Urosevic-Maiwald, M; Goldinger, S M; Frauchiger, A L; Dreier, J; Belloni, B; Mangana, J; Jenni, D; Dippel, M; Cozzio, A; Guenova, E; Kamarachev, J; French, L E; Dummer, R
2016-02-01
Patients with severe oral lichen planus refractory to standard topical treatment currently have limited options of therapy suitable for long-term use. Oral alitretinoin (9-cis retinoic acid) was never systematically investigated in clinical trials, although case reports suggest its possible efficacy. To assess the efficacy and safety of oral alitretinoin taken at 30 mg once daily for up to 24 weeks in the treatment of severe oral lichen planus refractory to standard topical therapy. We conducted a prospective open-label single arm pilot study to test the efficacy and safety of 30 mg oral alitretinoin once daily for up to 24 weeks in severe oral lichen planus. Ten patients were included in the study. Primary end point was reduction in signs and symptoms measured by the Escudier severity score. Secondary parameters included pain and quality of life scores. Safety parameters were assessed during a follow-up period of 5 weeks. A substantial response at the end of treatment, i.e. >50% reduction in disease severity measured by the Escudier severity score, was apparent in 40% of patients. Therapy was well tolerated. Adverse events were mild and included headache, mucocutaneous dryness, musculoskeletal pain, increased thyroid-stimulating hormone and dyslipidaemia. Alitretinoin given at 30 mg daily reduced disease severity of severe oral lichen planus in a substantial proportion of patients refractory to standard treatment, was well tolerated and may thus represent one therapeutic option for this special group of patients. © 2015 European Academy of Dermatology and Venereology.
Assessment of total efficiency in adiabatic engines
NASA Astrophysics Data System (ADS)
Mitianiec, W.
2016-09-01
The paper presents the influence of a ceramic coating on all surfaces of the combustion chamber of an SI four-stroke engine on its working parameters, mainly the heat balance and total efficiency. Three engine cases were considered: a standard engine without ceramic coating, a fully adiabatic combustion chamber, and an engine with different thicknesses of ceramic coating. Consideration of the adiabatic or semi-adiabatic engine was connected with mathematical modelling of heat transfer from the cylinder gas to the cooling medium. This model takes into account a variable convection coefficient based on the experimental formulas of Woschni, the heat conductivity of multi-layer walls, and the small effect of radiation in SI engines. The simulation model included full heat transfer to the cooling medium and unsteady gas flow in the engine intake and exhaust systems. A computer program combining a 0D model of the engine processes in the cylinder with a 1D model of gas flow was developed to determine the basic engine thermodynamic parameters for a Suzuki DR-Z400S 400 cc SI engine. The paper presents calculated results for the influence of the ceramic coating thickness on indicated pressure, specific fuel consumption, and cooling and exhaust heat losses. Comparisons are then presented of effective power, heat losses in the cooling and exhaust systems, and total efficiency as functions of engine rotational speed, together with a comparison of the temperature inside the cylinder for the standard, semi-adiabatic and fully adiabatic engines. On the basis of the results, the total efficiency at 2500 rpm was found to increase from 27% for the standard engine to 37% for the fully adiabatic engine.
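The convection model mentioned above relies on Woschni-type experimental formulas for the gas-to-wall heat-transfer coefficient. A commonly quoted form (the coefficient and exponents are the textbook values and may differ from the exact formulation used in the paper) is sketched below.

```python
def woschni_h(bore_m, p_kpa, temp_k, w_ms):
    """Woschni-type convective heat-transfer coefficient, W/(m^2*K).
    bore_m: cylinder bore [m]; p_kpa: in-cylinder pressure [kPa];
    temp_k: gas temperature [K]; w_ms: characteristic gas velocity [m/s].
    Commonly quoted textbook form; treat the constants as assumptions."""
    return 3.26 * bore_m ** -0.2 * p_kpa ** 0.8 * temp_k ** -0.55 * w_ms ** 0.8

# Illustrative mid-compression conditions for a small SI engine:
print(round(woschni_h(bore_m=0.090, p_kpa=1500.0, temp_k=900.0, w_ms=10.0)))
```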
Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.
2017-01-01
We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
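The kind of quadratic model described above can be sketched as a weighted fit of per-peak signal variance against signal mean for a multi-level bead set. The bead statistics below are invented, and the mapping of coefficients to background and Spe scale is only indicative of the approach, not the exact flowQB parameterization.

```python
import numpy as np

# Invented multi-level bead statistics: per-peak mean and variance for one fluorescence channel.
mean = np.array([50.0, 200.0, 800.0, 3000.0, 12000.0, 48000.0])
var = np.array([1.2e3, 2.6e3, 9.5e3, 3.4e4, 1.4e5, 6.8e5])

# Weighted quadratic fit var ~ c0 + c1*mean + c2*mean^2; np.polyfit applies the weights
# to the residuals before squaring, so 1/var is used here as a crude uncertainty proxy.
c2, c1, c0 = np.polyfit(mean, var, deg=2, w=1.0 / var)

# In a model of this type, c0 reflects the channel background and 1/c1 the statistical
# photoelectron (Spe) scale (indicative interpretation only).
print(f"background-like term c0 = {c0:.0f}, Spe-like scale 1/c1 = {1.0 / c1:.2f}")
```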
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Jae Hyeok; Essig, Rouven; McDermott, Samuel D.
We consider the constraints from Supernova 1987A on particles with small couplings to the Standard Model. We discuss a model with a fermion coupled to a dark photon, with various mass relations in the dark sector; millicharged particles; dark-sector fermions with inelastic transitions; the hadronic QCD axion; and an axion-like particle that couples to Standard Model fermions with couplings proportional to their mass. In the fermion cases, we develop a new diagnostic for assessing when such a particle is trapped at large mixing angles. Our bounds for a fermion coupled to a dark photon constrain small couplings and masses < 200 MeV, and do not decouple for low fermion masses. They exclude parameter space that is otherwise unconstrained by existing accelerator-based and direct-detection searches. In addition, our bounds are complementary to proposed laboratory searches for sub-GeV dark matter, and do not constrain several "thermal" benchmark-model targets. For a millicharged particle, we exclude charges between 10^(-9) to a few times 10^(-6) in units of the electron charge; this excludes parameter space to higher millicharges and masses than previous bounds. For the QCD axion and an axion-like particle, we apply several updated nuclear physics calculations and include the energy dependence of the optical depth to accurately account for energy loss at large couplings. We rule out a hadronic axion of mass between 0.1 and a few hundred eV, or equivalently bound the PQ scale between a few times 10^4 and 10^8 GeV, closing the hadronic axion window. For an axion-like particle, our bounds disfavor decay constants between a few times 10^5 GeV up to a few times 10^8 GeV. In all cases, our bounds differ from previous work by more than an order of magnitude across the entire parameter space. We also provide estimated systematic errors due to the uncertainties of the progenitor.
TRL - A FORMAL TEST REPRESENTATION LANGUAGE AND TOOL FOR FUNCTIONAL TEST DESIGNS
NASA Technical Reports Server (NTRS)
Hops, J. M.
1994-01-01
A Formal Test Representation Language and Tool for Functional Test Designs (TRL) is an automatic tool and a formal language that is used to implement the Category-Partition Method and produce the specification of test cases in the testing phase of software development. The Category-Partition Method is particularly useful in defining the inputs, outputs and purpose of the test design phase and combines the benefits of choosing normal cases with error exposing properties. Traceability can be maintained quite easily by creating a test design for each objective in the test plan. The effort to transform the test cases into procedures is simplified by using an automatic tool to create the cases based on the test design. The method allows the rapid elimination of undesired test cases from consideration, and easy review of test designs by peer groups. The first step in the category-partition method is functional decomposition, in which the specification and/or requirements are decomposed into functional units that can be tested independently. A secondary purpose of this step is to identify the parameters that affect the behavior of the system for each functional unit. The second step, category analysis, carries the work done in the previous step further by determining the properties or sub-properties of the parameters that would make the system behave in different ways. The designer should analyze the requirements to determine the features or categories of each parameter and how the system may behave if the category were to vary its value. If the parameter undergoing refinement is a data-item, then categories of this data-item may be any of its attributes, such as type, size, value, units, frequency of change, or source. After all the categories for the parameters of the functional unit have been determined, the next step is to partition each category's range space into mutually exclusive values that the category can assume. In choosing partition values, all possible kinds of values should be included, especially the ones that will maximize error detection. The purpose of the final step, partition constraint analysis, is to refine the test design specification so that only the technically effective and economically feasible test cases are implied. TRL is written in C-language to be machine independent. It has been successfully implemented on an IBM PC compatible running MS DOS, a Sun4 series computer running SunOS, an HP 9000/700 series workstation running HP-UX, a DECstation running DEC RISC ULTRIX, and a DEC VAX series computer running VMS. TRL requires 1Mb of disk space and a minimum of 84K of RAM. The documentation is available in electronic form in Word Perfect format. The standard distribution media for TRL is a 5.25 inch 360K MS-DOS format diskette. Alternate distribution media and formats are available upon request. TRL was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
Graviton creation by small scale factor oscillations in an expanding universe
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Ford, L. H.
2016-10-01
We treat quantum creation of gravitons by small scale factor oscillations around the average of an expanding universe. Such oscillations can arise in standard general relativity due to oscillations of a homogeneous, minimally coupled scalar field. They can also arise in modified gravity theories with a term proportional to the square of the Ricci scalar in the gravitational action. The graviton wave equation is different in the two cases, leading to somewhat different creation rates. Both cases are treated using a perturbative method due to Birrell and Davies, involving an expansion in a conformal coupling parameter to calculate the number density and energy density of the created gravitons. Cosmological constraints on the present graviton energy density and the dimensionless amplitude of the oscillations are discussed. We also discuss decoherence of quantum systems produced by the spacetime geometry fluctuations due to such a graviton bath.
Measurement of $$WW/WZ \\rightarrow \\ell \
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
This paper presents a study of the production of WW or WZ boson pairs, with one W boson decaying to eν or μν and one W or Z boson decaying hadronically. The analysis uses 20.2 fb^-1 of √s = 8 TeV pp collision data, collected by the ATLAS detector at the Large Hadron Collider. Cross-sections for WW / WZ production are measured in high-pT fiducial regions defined close to the experimental event selection. The cross-section is measured for the case where the hadronically decaying boson is reconstructed as two resolved jets, and the case where it is reconstructed as a single jet. The transverse momentum distribution of the hadronically decaying boson is used to search for new physics. Observations are consistent with the Standard Model predictions, and 95% confidence intervals are calculated for parameters describing anomalous triple gauge-boson couplings.
Measurement of $$WW/WZ \\rightarrow \\ell \
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
This article presents a study of the production of WW or WZ boson pairs, with one W boson decaying to eν or μν and one W or Z boson decaying hadronically. The analysis uses 20.2 fb^-1 of √s = 8 TeV pp collision data, collected by the ATLAS detector at the Large Hadron Collider. Cross-sections for WW / WZ production are measured in high-pT fiducial regions defined close to the experimental event selection. The cross-section is measured for the case where the hadronically decaying boson is reconstructed as two resolved jets, and the case where it is reconstructed as a single jet. The transverse momentum distribution of the hadronically decaying boson is used to search for new physics. Observations are consistent with the Standard Model predictions, and 95% confidence intervals are calculated for parameters describing anomalous triple gauge-boson couplings.
Nomenclature in laboratory robotics and automation (IUPAC Recommendation 1994)
(Skip) Kingston, H. M.; Kingstonz, M. L.
1994-01-01
These recommended terms have been prepared to help provide a uniform approach to terminology and notation in laboratory automation and robotics. Since the terminology used in laboratory automation and robotics has been derived from diverse backgrounds, it is often vague, imprecise, and in some cases, in conflict with classical automation and robotic nomenclature. These definitions have been assembled from standards, monographs, dictionaries, journal articles, and documents of international organizations emphasizing laboratory and industrial automation and robotics. When appropriate, definitions have been taken directly from the original source and identified with that source. However, in some cases no acceptable definition could be found and a new definition was prepared to define the object, term, or action. Attention has been given to defining specific robot types, coordinate systems, parameters, attributes, communication protocols and associated workstations and hardware. Diagrams are included to illustrate specific concepts that can best be understood by visualization. PMID:18924684
A relation between deformed superspace and Lee-Wick higher-derivative theories
NASA Astrophysics Data System (ADS)
Dias, M.; Ferrari, A. F.; Palechor, C. A.; Senise, C. R., Jr.
2015-07-01
We propose a non-anticommutative superspace that relates to the Lee-Wick type of higher-derivative theories, which are known for their interesting properties and have led to proposals of phenomenologically viable higher-derivative extensions of the Standard Model. The deformation of superspace we consider does not preserve supersymmetry or associativity in general, but we show that a non-anticommutative version of the Wess-Zumino model can be properly defined. In fact, the definition of chiral and antichiral superfields turns out to be simpler in our case than in the well-known N=1/2 supersymmetric case. We show that when the theory is truncated at the first nontrivial order in the deformation parameter, supersymmetry is restored, and we end up with a well-known Lee-Wick type of higher-derivative extension of the Wess-Zumino model. Thus, we show how non-anticommutativity could provide an alternative mechanism for generating these higher-derivative theories.
Measurement of $$WW/WZ \\rightarrow \\ell \
Aaboud, M.; Aad, G.; Abbott, B.; ...
2017-08-20
This paper presents a study of the production of WW or WZ boson pairs, with one W boson decaying to eν or μν and one W or Z boson decaying hadronically. The analysis uses 20.2 fb^-1 of √s = 8 TeV pp collision data, collected by the ATLAS detector at the Large Hadron Collider. Cross-sections for WW / WZ production are measured in high-pT fiducial regions defined close to the experimental event selection. The cross-section is measured for the case where the hadronically decaying boson is reconstructed as two resolved jets, and the case where it is reconstructed as a single jet. The transverse momentum distribution of the hadronically decaying boson is used to search for new physics. Observations are consistent with the Standard Model predictions, and 95% confidence intervals are calculated for parameters describing anomalous triple gauge-boson couplings.
Effect of clustering on the emission of light charged particles
NASA Astrophysics Data System (ADS)
Kundu, Samir; Bhattacharya, C.; Rana, T. K.; Bhattacharya, S.; Pandey, R.; Banerjee, K.; Roy, Pratap; Meena, J. K.; Mukherjee, G.; Ghosh, T. K.; Mukhopadhyay, S.; Saha, A. K.; Sahoo, J. K.; Mandal Saha, R.; Srivastava, V.; Sinha, M.; Asgar, Md. A.
2018-04-01
Energy spectra of light charged particles emitted in the reaction p + 27Al → 28Si* have been studied and compared with statistical model calculations. Unlike 16O + 12C, where a large deformation was observed in 28Si*, the energy spectra of α-particles were well explained by the statistical model calculation with standard "deformability parameters" obtained using the rotating liquid drop model. It seems that the α-clustering in the entrance channel causes extra deformation in 28Si* in the case of 16O + 12C, but the reanalysis of other published data shows that there are several cases where extra deformation was observed for composites produced via non-α-cluster entrance channels as well. An empirical relation was found between mass-asymmetry in the entrance channel and deformation, which indicates that along with α-clustering, mass-asymmetry may also affect the emission of light charged particles.
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
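The Monte Carlo verification step mentioned above, estimating power empirically for a paired comparison of two algorithms' segmentation accuracy on the same subjects, can be sketched as follows. All numerical settings (accuracies, standard deviation, correlation) are illustrative assumptions, and this is not the closed-form power formula derived in the paper.

```python
import numpy as np
from scipy.stats import ttest_rel

def simulated_power(n_subjects, acc_a=0.90, acc_b=0.92, sd=0.04, rho=0.6,
                    alpha=0.05, n_sim=2000, seed=4):
    """Monte Carlo power for detecting a difference in mean segmentation accuracy
    between two algorithms evaluated on the same subjects (paired t-test)."""
    rng = np.random.default_rng(seed)
    cov = sd ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    hits = 0
    for _ in range(n_sim):
        acc = rng.multivariate_normal([acc_a, acc_b], cov, size=n_subjects)
        if ttest_rel(acc[:, 0], acc[:, 1]).pvalue < alpha:
            hits += 1
    return hits / n_sim

print(f"estimated power with 40 subjects: {simulated_power(40):.2f}")
```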
An information geometric approach to least squares minimization
NASA Astrophysics Data System (ADS)
Transtrum, Mark; Machta, Benjamin; Sethna, James
2009-03-01
Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
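As a point of reference for the standard algorithm discussed above, here is a minimal Levenberg-Marquardt fit of a two-parameter exponential model using SciPy's implementation on synthetic data; it illustrates the baseline method, not the geodesic-motion variant the authors propose.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
t = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-1.3 * t) + rng.normal(scale=0.02, size=t.size)   # synthetic data

def residuals(p):
    a, k = p
    return a * np.exp(-k * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")   # Levenberg-Marquardt
print("fitted parameters:", np.round(fit.x, 3))
```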
2014-03-11
This final rule sets forth payment parameters and oversight provisions related to the risk adjustment, reinsurance, and risk corridors programs; cost sharing parameters and cost-sharing reductions; and user fees for Federally-facilitated Exchanges. It also provides additional standards with respect to composite premiums, privacy and security of personally identifiable information, the annual open enrollment period for 2015, the actuarial value calculator, the annual limitation in cost sharing for stand-alone dental plans, the meaningful difference standard for qualified health plans offered through a Federally-facilitated Exchange, patient safety standards for issuers of qualified health plans, and the Small Business Health Options Program.
Challenges for MSSM Higgs searches at hadron colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carena, Marcela S.; /Fermilab; Menon, A.
2007-04-01
In this article we analyze the impact of B-physics and Higgs physics at LEP on standard and non-standard Higgs boson searches at the Tevatron and the LHC, within the framework of minimal flavor violating supersymmetric models. The B-physics constraints we consider come from the experimental measurements of the rare B-decays b → sγ and B_u → τν and the experimental limit on the B_s → μ+μ- branching ratio. We show that these constraints are severe for large values of the trilinear soft breaking parameter A_t, rendering the non-standard Higgs searches at hadron colliders less promising. On the contrary, these bounds are relaxed for small values of A_t and large values of the Higgsino mass parameter μ, enhancing the prospects for the direct detection of non-standard Higgs bosons at both colliders. We also consider the available ATLAS and CMS projected sensitivities in the standard model Higgs search channels, and we discuss the LHC's ability to probe the whole MSSM parameter space. In addition, we also consider the expected Tevatron collider sensitivities in the standard model Higgs h → b b̄ channel to show that it may be able to find 3σ evidence in the B-physics allowed regions for small or moderate values of the stop mixing parameter.
Surveying drinking water quality (Balikhlou River, Ardabil Province, Iran)
NASA Astrophysics Data System (ADS)
Aalipour erdi, Mehdi; Gasempour niari, Hassan; Mousavi Meshkini, Seyyed Reza; Foroug, Somayeh
2018-03-01
Considering the importance of the Balikhlou River as one of the most important water sources for the cities of Ardabil, Nir and Sarein, maintaining the water quality of this river is a key goal at the provincial and national levels. The river drains a wide area and provides agricultural, industrial and drinking water for the residents, so surveying its quality is important for planning and managing the region. This study examined the quality of the river through eight physicochemical parameters (SO4, NO3, BOD5, TDS, turbidity, pH, EC, COD) in the high- and low-water seasons of 2013, against international and national standards. For this purpose, samples were taken at five stations along the river and the data were analyzed with t-tests in SPSS software. The results showed that the differences from the standards were not significant for TDS and EC against the WHO standards and for TDS against the Iranian standard in the low-water season, nor for pH and EC against the WHO standards in the high-water season; however, pH, SO4, turbidity and NO3 differed significantly from both standards (and EC from the WHO standard) in the low-water season, and pH, EC, SO4, turbidity and NO3 differed significantly in the high-water season, at the 5% to 1% levels, indicating that the parameter values are low relative to the ideal limits for the different uses.
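A minimal sketch of the sort of comparison reported above: a one-sample t-test of measured values of one parameter at the five stations against a regulatory limit. The measurements and the guideline value below are placeholders, not the study's data.

```python
from scipy.stats import ttest_1samp

# Placeholder nitrate measurements (mg/L) at five stations in one season.
no3 = [8.1, 9.4, 7.6, 10.2, 8.8]
guideline = 50.0    # illustrative drinking-water guideline value, not necessarily the study's limit

t_stat, p_value = ttest_1samp(no3, popmean=guideline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # difference from the limit is significant if p < 0.05
```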
Spectral Line-Shape Model to Replace the Voigt Profile in Spectroscopic Databases
NASA Astrophysics Data System (ADS)
Lisak, Daniel; Ngo, Ngoc Hoa; Tran, Ha; Hartmann, Jean-Michel
2014-06-01
The standard description of molecular line shapes in spectral databases and radiative transfer codes is based on the Voigt profile. It is well known that its simplified assumptions of free absorber motion and independence of collisional parameters from absorber velocity lead to systematic errors in the analysis of experimental spectra and the retrieval of gas concentration. We demonstrate [1, 2] that the partially correlated quadratic speed-dependent hard-collision profile [3] (pCqSDHCP) is a good candidate to replace the Voigt profile in the next generations of spectroscopic databases. This profile takes into account the following physical effects: the Doppler broadening, the pressure broadening and shifting of the line, the velocity-changing collisions, the speed-dependence of pressure broadening and shifting, and correlations between velocity- and phase/state-changing collisions. The speed-dependence of pressure broadening and shifting is incorporated in the so-called quadratic approximation. The velocity-changing collisions lead to the Dicke narrowing effect; however, in many cases correlations between velocity- and phase/state-changing collisions may lead to an effective reduction of the observed Dicke narrowing. The hard-collision model of velocity-changing collisions is also known as the Nelkin-Ghatak model or Rautian model. Applicability of the pCqSDHCP to different molecular systems was tested on calculated and experimental spectra of such molecules as H2, O2, CO2 and H2O in a wide span of pressures. For all considered systems, pCqSDHCP is able to describe molecular spectra at least an order of magnitude better than the Voigt profile, with all fitted parameters being linear with pressure. In most cases pCqSDHCP can reproduce the reference spectra down to 0.2% or better, which fulfills the requirements of the most demanding remote-sensing applications. An important advantage of pCqSDHCP is that a fast algorithm for its computation has been developed [4, 5], which allows its calculation only a few times slower than the standard Voigt profile. Moreover, the pCqSDHCP reduces to many simpler models commonly used in experimental spectra analysis simply by setting some parameters to zero, and it can be easily extended to incorporate the line-mixing effect in the first-order approximation. The idea of using pCqSDHCP as a standard profile to go beyond the Voigt profile for the description of H2O line shapes was recently supported by the IUPAC task group [6], which also recommended calling this profile, together with its fast computation algorithm, the HTP profile (for Hartmann-Tran).
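For reference, the baseline Voigt profile that the pCqSDHCP is meant to supersede can be evaluated from the complex error function; a standard numerical form (not part of the HTP algorithm itself) is sketched below.

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma_doppler, gamma_lorentz):
    """Area-normalized Voigt profile: Gaussian of standard deviation sigma_doppler
    convolved with a Lorentzian of half-width gamma_lorentz (same frequency units)."""
    z = ((nu - nu0) + 1j * gamma_lorentz) / (sigma_doppler * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma_doppler * np.sqrt(2.0 * np.pi))

nu = np.linspace(-5, 5, 11)
print(np.round(voigt(nu, 0.0, sigma_doppler=1.0, gamma_lorentz=0.5), 4))
```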
Neutrino oscillations and Non-Standard Interactions
NASA Astrophysics Data System (ADS)
Farzan, Yasaman; Tórtola, Mariam
2018-02-01
Current neutrino experiments are measuring the neutrino mixing parameters with an unprecedented accuracy. The upcoming generation of neutrino experiments will be sensitive to subdominant oscillation effects that can give information on the yet-unknown neutrino parameters: the Dirac CP-violating phase, the mass ordering and the octant of θ_{23}. Determining the exact values of neutrino mass and mixing parameters is crucial to test neutrino models and flavor symmetries designed to predict these neutrino parameters. In the first part of this review, we summarize the current status of the neutrino oscillation parameter determination. We consider the most recent data from all solar experiments and the atmospheric data from Super-Kamiokande, IceCube and ANTARES. We also implement the data from the reactor neutrino experiments KamLAND, Daya Bay, RENO and Double Chooz as well as the long baseline neutrino data from MINOS, T2K and NOvA. If, in addition to the standard interactions, neutrinos have subdominant, yet-unknown Non-Standard Interactions (NSI) with matter fields, extracting the values of these parameters will suffer from new degeneracies and ambiguities. We review such effects and formulate the conditions on the NSI parameters under which the precision measurement of neutrino oscillation parameters can be distorted. Like standard weak interactions, the non-standard interactions can be categorized into two groups: Charged Current (CC) NSI and Neutral Current (NC) NSI. Our focus will be mainly on neutral current NSI because it is possible to build a class of models that give rise to sizeable NC NSI with discernible effects on neutrino oscillation. These models are based on a new U(1) gauge symmetry with a gauge boson of mass ≲ 10 MeV. The UV-complete model should, of course, be electroweak invariant, which in general implies that, along with neutrinos, charged fermions also acquire new interactions, on which there are strong bounds. We enumerate the bounds that already exist on the electroweak symmetric models and demonstrate that it is possible to build viable models avoiding all these bounds. Finally, we review methods to test these models and suggest approaches to break the degeneracies in deriving neutrino mass parameters caused by NSI.
Zirconium: The material of the future in modern implantology.
Kubasiewicz-Ross, Paweł; Dominiak, Marzena; Gedrange, Tomasz; Botzenhart, Ute U
2017-01-01
The authors present the contemporary state of knowledge concerning alternative materials for dental implantology. First, the factors influencing osseointegration are stated; the most important appears to be the type of implant surface. Among the numerous parameters describing implant surfaces, the most important are average roughness and porous density. Some studies have shown that materials with comparable surface roughness provide similar osseointegration. In modern implantology, titanium is still considered the "gold standard" material. However, the aesthetic properties of titanium carry several disadvantages, especially in the case of a thin periodontal biotype in the anterior, aesthetically sensitive area of the jaw. If a titanium implant is used in such a case, the mucosa at the implant's neck may become grayish, which limits the success of the overall treatment. That was the reason for seeking alternative materials for manufacturing dental implants. Initiated in general medicine, mainly orthopedics, the search led to the introduction of zirconium dioxide into dental implantology. A small number of complications, good chemical parameters, corrosion resistance, mechanical strength, an elastic modulus close to that of steel, and especially biocompatibility make zirconium an excellent material for this purpose, although it presents several problems in achieving optimal roughness. In this overview, one of the possible methods, a process of partial sintering, is presented.
NASA Astrophysics Data System (ADS)
Chinchan, Levon; Shevtsov, Sergey; Soloviev, Arcady; Shevtsova, Varvara; Huang, Jiun-Ping
The highly loaded parts of modern aircraft and helicopters are often produced from polymeric composite materials. Such materials consist of reinforcing fibers, stacked in layers at different angles, and resin, which distributes the structural stresses uniformly between the fibers. These composites should have an orthotropic symmetry of mechanical properties to obtain a spatial distribution of elastic moduli consistent with the external loading pattern. The main requirements for aircraft composite materials are the specified elastic properties (nine for an orthotropic composite), long-term strength parameters, high resistance to environmental influences, and low thermal expansion to maintain shape stability. These properties are ensured by exact implementation of the technological conditions and by the many testing procedures performed on the fibers, resin, prepregs and finished components. The most important mechanical testing procedures are defined by ASTM, SACMA and other standards. However, in each case the wide diversity of components (dimensions and lay-up of fibers, rheological properties of thermosetting resins) requires a specific approach to sample preparation, testing, and numerical processing of the test results to obtain the true values of the tested parameters. We pay special attention to cases where the tested specimens are cut not from the plates recommended by the standards, but from the finished part manufactured with its specific lay-up, fiber tension during filament winding, and curing schedule. These tests can provide useful information both for composite structural design and for estimating the quality of finished parts. We consider the influence of the relation between specimen dimensions and the fiber winding pattern (or lay-up) on the results of mechanical tests for determining the longitudinal, transverse and in-plane shear moduli, as well as an original numerical scheme for reconstructing the in-plane shear modulus measured by the modified Iosipescu method, which uses finite-element-based numerical processing and indicative data obtained beforehand from the short-beam test. The sensitivity, and the ability to decouple the in-plane and interlaminar shear moduli obtained from the specimen twisting test, are studied and discussed.
Peng, Jing; Zhou, Yong; Min, Li; Zhang, Wenli; Luo, Yi; Zhang, Xuelei; Zou, Chang; Shi, Rui; Tu, Chongqi
2014-05-01
To analyze the correlation between the trabecular microstructure and clinical imaging parameters in the fracture region of the osteoporotic hip, so as to provide a simple, non-invasive method for evaluating the trabecular microstructure. Between June 2012 and January 2013, 16 elderly patients with femoral neck fracture who underwent hip arthroplasty were selected as the trial group; 5 young patients with pelvic fracture were selected as the control group. Hip CT examination was performed, and the cancellous bone volume/marrow cavity volume (CV/MV) ratio was analyzed with Mimics 10.01 software in the control group. CT scans and bone mineral density (BMD) measurements were performed on the normal hips of the trial group, and cuboid specimens were obtained from the femoral necks at the site of the tensile trabeculae to evaluate the trabecular microstructure parameters by micro-CT, including bone volume fraction (BV/TV), trabecular number (Tb.N), trabecular spacing (Tb.Sp), trabecular thickness (Tb.Th), connectivity density (Conn.D), and structure model index (SMI). The correlation between the imaging parameters and the microstructure parameters was analyzed. In the trial group, the BMD value was 0.491-0.698 g/cm2 (mean, 0.601 g/cm2); according to the World Health Organization (WHO) standard, 10 cases were diagnosed as having osteoporosis and 6 cases as having osteopenia. The CV/MV of the trial group (0.6701 +/- 0.1020) was significantly lower than that of the control group (0.8850 +/- 0.0891) (t = -4.567, P = 0.000). In the trial group, CV/MV was correlated with BV/TV, Tb.Th, and SMI (P < 0.05); however, CV/MV showed no correlation with Tb.N, Tb.Sp, or Conn.D (P > 0.05). BV/TV was correlated with Tb.Th, Tb.N, Tb.Sp, and SMI (P < 0.05), but showed no correlation with Conn.D (P = 0.075). There was no correlation between BMD and the microstructure parameters (P > 0.05). CV/MV decreases markedly in the osteoporotic hip, and it is correlated with the microstructure parameters BV/TV, Tb.Th, and SMI, so it can, to some extent, reflect changes in the trabecular microstructure. There was no correlation between the BMD of the femoral neck and the microstructure parameters.
40 CFR 60.2730 - What monitoring equipment must I install and what parameters must I monitor?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Monitoring § 60.2730 What monitoring equipment must I install and what parameters must...) of this section must be expressed in milligrams per dry standard cubic meter corrected to 7 percent...
40 CFR 60.2730 - What monitoring equipment must I install and what parameters must I monitor?
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Monitoring § 60.2730 What monitoring equipment must I install and what parameters must...) of this section must be expressed in milligrams per dry standard cubic meter corrected to 7 percent...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights the data with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
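The fitting strategy described above, in which the calibration parameters and the standard masses are fitted jointly and both error sources are weighted consistently, can be sketched as a weighted least-squares problem. The sketch below is not the original VA02A-based program; the calibration form f(m) = a*m/(1 + b*m) and all numbers are illustrative assumptions.

# Minimal sketch: fit calibration parameters (a, b) and the "true" standard
# masses together, with residuals scaled by the response error and mass error.
import numpy as np
from scipy.optimize import least_squares

m_meas = np.array([0.1, 0.25, 0.5, 0.75, 1.0])             # nominal standard masses (mg)
y_meas = np.array([95.0, 230.0, 440.0, 640.0, 820.0])      # detector response (counts/s)
sigma_m, sigma_y = 0.002 * m_meas, 5.0                     # 0.2% mass error, counting error

def residuals(p):
    a, b = p[0], p[1]
    m = p[2:]                                              # fitted "true" masses
    r_y = (y_meas - a * m / (1.0 + b * m)) / sigma_y       # response residuals
    r_m = (m - m_meas) / sigma_m                           # mass residuals
    return np.concatenate([r_y, r_m])

p0 = np.concatenate([[900.0, 0.1], m_meas])
fit = least_squares(residuals, p0)
print("calibration parameters:", fit.x[:2])
print("adjusted masses:", fit.x[2:])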
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
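Under the stated Gaussian assumption, the conversion from S-parameter noise statistics to a bit error rate can be sketched as a tail probability of the normal distribution. The numbers below are placeholders rather than measured circuit data.

# Minimal sketch: BER of a binary decision on a signal of amplitude `mean_level`
# corrupted by zero-mean Gaussian noise of standard deviation `noise_std`.
import math

def ber_from_s_parameter(mean_level, noise_std):
    q = mean_level / noise_std                   # signal-to-noise ratio
    return 0.5 * math.erfc(q / math.sqrt(2.0))   # Gaussian tail probability

print(ber_from_s_parameter(mean_level=1.0, noise_std=0.18))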
Research on the calibration methods of the luminance parameter of radiation luminance meters
NASA Astrophysics Data System (ADS)
Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei
2017-10-01
This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters. The paper compares the calibration results of the two methods through principle analysis and experimental verification. After the same radiation luminance meter was calibrated with both methods, the data obtained verified that the test results of the two methods are both reliable. The results show that the displayed value obtained with the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration; moreover, it has a wider range and can test the linear performance of the instruments.
NASA Astrophysics Data System (ADS)
Massmann, Joel; Freeze, R. Allan
1987-02-01
The risk-cost-benefit analysis developed in the companion paper (J. Massmann and R. A. Freeze, this issue) is here applied to (1) an assessment of the relative worth of containment-construction activities, site-exploration activities, and monitoring activities as components of a design strategy for the owner/operator of a waste management facility; (2) an assessment of alternative policy options available to a regulatory agency; and (3) a case history. Sensitivity analyses designed to address the first issue show that the allocation of resources by the owner/operator is sensitive to the stochastic parameters used to describe the hydraulic conductivity field at a site. For the cases analyzed, the installation of a dense monitoring network is of less value to the owner/operator than a more conservative containment design. Sensitivity analyses designed to address the second issue suggest that from a regulatory perspective, design standards should be more effective than performance standards in reducing risk, and design specifications on the containment structure should be more effective than those on the monitoring network. Performance bonds posted before construction have a greater potential to influence design than prospective penalties to be imposed at the time of failure. Siting on low-conductivity deposits is a more effective method of risk reduction than any form of regulatory influence. Results of the case history indicate that the methodology can be successfully applied at field sites.
10 CFR 430.41 - Prescriptions of a rule.
Code of Federal Regulations, 2014 CFR
2014-01-01
... prescribed an energy conservation standard, water conservation standard (in the case of faucets, showerheads... Federal energy conservation standard or water conservation standard is applicable, the Secretary shall... water conservation standard (in the case of faucets, showerheads, water closets, and urinals) or other...
Revert Ventura, A J; Sanz Requena, R; Martí-Bonmatí, L; Pallardó, Y; Jornet, J; Gaspar, C
2014-01-01
To study whether the histograms of quantitative parameters of perfusion in MRI obtained from tumor volume and peritumor volume make it possible to grade astrocytomas in vivo. We included 61 patients with histological diagnoses of grade II, III, or IV astrocytomas who underwent T2*-weighted perfusion MRI after intravenous contrast agent injection. We manually selected the tumor volume and peritumor volume and quantified the following perfusion parameters on a voxel-by-voxel basis: blood volume (BV), blood flow (BF), mean transit time (TTM), transfer constant (K(trans)), washout coefficient, interstitial volume, and vascular volume. For each volume, we obtained the corresponding histogram with its mean, standard deviation, and kurtosis (using the standard deviation and kurtosis as measures of heterogeneity) and we compared the differences in each parameter between different grades of tumor. We also calculated the mean and standard deviation of the highest 10% of values. Finally, we performed a multiparametric discriminant analysis to improve the classification. For tumor volume, we found statistically significant differences among the three grades of tumor for the means and standard deviations of BV, BF, and K(trans), both for the entire distribution and for the highest 10% of values. For the peritumor volume, we found no significant differences for any parameters. The discriminant analysis improved the classification slightly. The quantification of the volume parameters of the entire region of the tumor with BV, BF, and K(trans) is useful for grading astrocytomas. The heterogeneity represented by the standard deviation of BF is the most reliable diagnostic parameter for distinguishing between low grade and high grade lesions. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
Kitayama, Kyo; Ohse, Kenji; Shima, Nagayoshi; Kawatsu, Kencho; Tsukada, Hirofumi
2016-11-01
The decreasing trend of the atmospheric ¹³⁷Cs concentration in two cities in Fukushima prefecture was analyzed with a regression model to clarify the relation between the decrease parameter in the model and the trend, and to compare the trend with that after the Chernobyl accident. The ¹³⁷Cs particle concentration measurements were conducted at an urban Fukushima site and a rural Date site from September 2012 to June 2015. The ¹³⁷Cs particle concentrations were separated into two groups: particles with aerodynamic diameters of more than 1.1 μm (coarse particles) and particles with aerodynamic diameters lower than 1.1 μm (fine particles). The averages of the measured concentrations were 0.1 mBq m⁻³ at the Fukushima and Date sites. The measured concentrations were applied to the regression model, which decomposed them into two components: trend and seasonal variation. The trend concentration included parameters for a constant and an exponential decrease. The parameter for the constant was slightly different between the Fukushima and Date sites. The parameter for the exponential decrease was similar for all cases and much higher than the physical radioactive decay rate, except for the concentration in the fine particles at the Date site. The annual decreasing rates of the ¹³⁷Cs concentration evaluated from the trend concentration ranged from 44 to 53% y⁻¹, with an average and standard deviation of 49 ± 8% y⁻¹, for all cases in 2013. In the other years, the decreasing rates also varied only slightly across cases. These results indicated that the decreasing trend of the ¹³⁷Cs concentration was nearly unchanged regardless of location and ground contamination level in the three years after the accident. The ¹³⁷Cs activity per aerosol particle mass also decreased with the same trend as the ¹³⁷Cs concentration in the atmosphere. The results indicated that the decreasing trend of the atmospheric ¹³⁷Cs concentration was related to the reduction of the ¹³⁷Cs concentration in resuspended particles. Copyright © 2016 Elsevier Ltd. All rights reserved.
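A minimal sketch of this kind of decomposition, not the authors' actual regression model, is shown below: a synthetic concentration series is fitted to a constant-plus-exponential trend with a seasonal harmonic, and the annual decreasing rate is read off the fitted trend. All data values and the functional form are illustrative assumptions.

# Minimal sketch: trend (constant + exponential decrease) plus seasonal harmonic.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 3, 36)                                   # years since start of monitoring
rng = np.random.default_rng(1)
obs = (0.02 + 0.2 * np.exp(-0.7 * t) + 0.03 * np.sin(2 * np.pi * t)
       + rng.normal(0, 0.01, t.size))                       # synthetic concentration, mBq/m^3

def model(t, a, b, k, c, d):
    trend = a + b * np.exp(-k * t)                          # constant + exponential decrease
    seasonal = c * np.sin(2 * np.pi * t) + d * np.cos(2 * np.pi * t)
    return trend + seasonal

popt, _ = curve_fit(model, t, obs, p0=[0.01, 0.1, 0.5, 0.0, 0.0])
a, b, k = popt[:3]
trend = lambda x: a + b * np.exp(-k * x)
annual_rate = 1.0 - trend(1.0) / trend(0.0)                 # fractional decrease over year 1
print(f"annual decreasing rate of the trend: {annual_rate:.0%}")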
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
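For readers unfamiliar with the Gaussian Fisher-matrix forecast that the study uses as a baseline, the toy sketch below shows the basic algebra: numerical derivatives of a model power spectrum, a diagonal noise covariance, the Fisher matrix, and its inverse as the forecast parameter covariance. The power-spectrum model, noise level, and fiducial values are invented placeholders, not the paper's survey setup.

# Minimal toy sketch of a Gaussian Fisher-matrix forecast for (Omega_m, sigma_8).
import numpy as np

ell = np.arange(10, 2000)
def cl(omega_m, sigma_8):                       # invented toy "shear power spectrum"
    return 1e-9 * sigma_8**2 * (ell / 100.0) ** (-1.0 - 0.5 * omega_m)

theta0 = np.array([0.3, 0.8])                   # fiducial (Omega_m, sigma_8)
noise = 0.05 * cl(*theta0)                      # assumed Gaussian, diagonal covariance

derivs = []                                     # numerical derivatives of C_ell
for i in range(2):
    dp = np.zeros(2); dp[i] = 1e-4
    derivs.append((cl(*(theta0 + dp)) - cl(*(theta0 - dp))) / (2e-4))

fisher = np.array([[np.sum(derivs[i] * derivs[j] / noise**2) for j in range(2)]
                   for i in range(2)])
cov = np.linalg.inv(fisher)                     # forecast parameter covariance
print("marginal errors:", np.sqrt(np.diag(cov)))
print("approx. 68% ellipse area:", np.pi * 2.3 * np.sqrt(np.linalg.det(cov)))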
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block that represents the estimated parameter uncertainties. The fault parameter is estimated using a two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
A COMPARISON OF STELLAR ELEMENTAL ABUNDANCE TECHNIQUES AND MEASUREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinkel, Natalie R.; Young, Patrick A.; Pagano, Michael D.
2016-09-01
Stellar elemental abundances are important for understanding the fundamental properties of a star or stellar group, such as age and evolutionary history, as well as the composition of an orbiting planet. However, as abundance measurement techniques have progressed, there has been little standardization between individual methods and their comparisons. As a result, different stellar abundance procedures determine measurements that vary beyond the quoted error for the same elements within the same stars. The purpose of this paper is to better understand the systematic variations between methods and offer recommendations for producing more accurate results in the future. We invited a number of participants from around the world (Australia, Portugal, Sweden, Switzerland, and the United States) to calculate 10 element abundances (C, O, Na, Mg, Al, Si, Fe, Ni, Ba, and Eu) using the same stellar spectra for four stars (HD 361, HD 10700, HD 121504, and HD 202206). Each group produced measurements for each star using (1) their own autonomous techniques, (2) standardized stellar parameters, (3) a standardized line list, and (4) both standardized parameters and a line list. We present the resulting stellar parameters, absolute abundances, and a metric of data similarity that quantifies the homogeneity of the data. We conclude that standardization of some kind, particularly stellar parameters, improves the consistency between methods. However, because results did not converge as more free parameters were standardized, it is clear there are inherent issues within the techniques that need to be reconciled. Therefore, we encourage more conversation and transparency within the community such that stellar abundance determinations can be reproducible as well as accurate and precise.
Robust stochastic resonance: Signal detection and adaptation in impulsive noise
NASA Astrophysics Data System (ADS)
Kosko, Bart; Mitaim, Sanya
2001-11-01
Stochastic resonance (SR) occurs when noise improves a system performance measure such as a spectral signal-to-noise ratio or a cross-correlation measure. All SR studies have assumed that the forcing noise has finite variance. Most have further assumed that the noise is Gaussian. We show that SR still occurs for the more general case of impulsive or infinite-variance noise. The SR effect fades as the noise grows more impulsive. We study this fading effect on the family of symmetric α-stable bell curves that includes the Gaussian bell curve as a special case. These bell curves have thicker tails as the parameter α falls from 2 (the Gaussian case) to 1 (the Cauchy case) to even lower values. Thicker tails create more frequent and more violent noise impulses. The main feedback and feedforward models in the SR literature show this fading SR effect for periodic forcing signals when we plot either the signal-to-noise ratio or a signal correlation measure against the dispersion of the α-stable noise. Linear regression shows that an exponential law γ_opt(α) = cA^α describes the relation between the impulsive index α and the SR-optimal noise dispersion γ_opt. The results show that SR is robust against noise "outliers." So SR may be more widespread in nature than previously believed. Such robustness also favors the use of SR in engineering systems. We further show that an adaptive system can learn the optimal noise dispersion for two standard SR models (the quartic bistable model and the FitzHugh-Nagumo neuron model) for the signal-to-noise ratio performance measure. This also favors practical applications of SR and suggests that evolution may have tuned the noise-sensitive parameters of biological systems.
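The noise family underlying these results can be illustrated briefly: symmetric α-stable samples become markedly more impulsive as α falls from 2 (Gaussian) toward 1 (Cauchy). The sketch below only generates such noise and measures its tail weight; it does not reproduce the SR simulations, and the sample size and threshold are arbitrary choices.

# Minimal sketch: impulsiveness of symmetric alpha-stable noise as alpha falls.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
n = 20_000
for alpha in (2.0, 1.5, 1.0):
    x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)   # beta = 0: symmetric
    frac_outliers = np.mean(np.abs(x) > 5.0)                    # heavy-tail indicator
    print(f"alpha = {alpha}: fraction of |x| > 5 is {frac_outliers:.4f}")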
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as an MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as a statistical effect analogous to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
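The estimation point about posterior means versus maximum likelihood can be illustrated with a much simpler model than the PD model itself: a probability parameter estimated from a small number of Bernoulli trials. The sketch below, with invented values, compares the mean squared error of the raw MLE k/n with that of the posterior mean under a flat Beta(1,1) prior.

# Minimal sketch: posterior mean vs MLE for a probability estimated from few trials.
import numpy as np

rng = np.random.default_rng(3)
theta_true, n_trials, n_sims = 0.7, 20, 50_000

k = rng.binomial(n_trials, theta_true, size=n_sims)   # simulated observer data
mle = k / n_trials                                    # maximum likelihood estimate
post_mean = (k + 1) / (n_trials + 2)                  # posterior mean, flat prior

print("MLE mean squared error:      ", np.mean((mle - theta_true) ** 2))
print("posterior-mean squared error:", np.mean((post_mean - theta_true) ** 2))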
Formosa, Melissa M; Xuereb-Anastasi, Angela
2016-01-01
Osteoporosis and fractures are complex conditions influenced by an interplay of genetic and environmental factors. The aim of the study was to investigate three biochemical parameters, total serum calcium, total serum alkaline phosphatase (sALP), and albumin, in relation to bone mineral density (BMD) at the lumbar spine and femoral neck (FN), and with all types of low-trauma fractures in Maltese postmenopausal women. Levels were also correlated with age and physical activity. A case-control study of 1045 women was performed. Women who suffered a fracture were classified as cases, whereas women without a fracture history were included as controls, subdivided into normal, osteopenic, or osteoporotic according to their BMD measurements. Blood specimens were collected following good standard practice, and testing was performed by spectrophotometry. Calcium and sALP levels were weakly correlated with FN BMD levels (calcium: r = -0.111, p = 0.002; sALP: r = 0.089, p = 0.013). Fracture cases had the lowest serum levels of calcium, sALP, and albumin relative to all control groups, and these levels decreased with increasing age, possibly increasing fracture risk. Biochemical levels were lowest in women who sustained a hip fracture and in those with more than one fracture. Biochemical parameters decreased with reduced physical activity; however, this was most evident for fracture cases. Reduced physical activity was associated with lower BMD levels at the hip and, to a lesser extent, at the spine. In conclusion, the results suggest that levels of serum calcium and albumin could be indicative of fracture risk, whereas calcium levels and, to a lesser extent, sALP levels could be indicators of hip BMD.
Analysis of household refrigerators for different testing standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bansal, P.K.; McGill, I.
This study highlights the salient differences among various testing standards for household refrigerator-freezers and proposes a methodology for predicting the performance of a single evaporator-based vapor-compression refrigeration system (either refrigerator or freezer) from one test standard (where the test data are available, the reference case) to another (the alternative case). The standards studied during this investigation include the Australian-New Zealand Standard (ANZS), the International Standard (ISO), the American National Standard (ANSI), the Japanese Industrial Standard (JIS), and the Chinese National Standard (CNS). A simple analysis in conjunction with the BICYCLE model (Bansal and Rice 1993) is used to calculate the energy consumption of two refrigerator cabinets from the reference case to the alternative cases. The proposed analysis includes the effect of door openings (as required by the JIS) as well as defrost heaters. The analytical results are found to agree reasonably well with the experimental observations for translating energy consumption information from one standard to another.
Definition of periprosthetic joint infection: is there a consensus?
Parvizi, Javad; Jacovides, Christina; Zmistowski, Benjamin; Jung, Kwang Am
2011-11-01
The diagnosis of periprosthetic joint infection (PJI) continues to pose a challenge. While many diagnostic criteria have been proposed, a gold standard for diagnosis is lacking. Use of multiple diagnostic criteria within the joint arthroplasty community raises concerns in patient treatment and comparison of research pertaining to PJI. We (1) determined the variation in existing diagnostic criteria, (2) compared the existing criteria to a proposed new set of criteria that incorporates aspirate cell count analysis, and (3) investigated the variations between the existing criteria and the proposed criteria. We retrospectively identified 182 patients undergoing 192 revision knee arthroplasties who had a preoperative joint aspiration analysis at our institution between April 2002 and November 2009. We excluded 20 cases due to insufficient laboratory parameters, leaving 172 cases for analysis. We applied six previously published sets of diagnostic criteria for PJI to determine the variation in its incidence using each set of criteria. We then compared these diagnostic criteria to our proposed new criteria and investigated cases where disagreement occurred. We identified 41 cases (24%) in which at least one established criteria set classified the case as infected while at least one other criteria set classified the case as uninfected. With our proposed criteria, the infected/uninfected ratio was 92/80. The proposed criteria had a large variance in sensitivity (54%-100%), specificity (39%-100%), and accuracy (53%-100%) when using each of the established criteria sets as the reference standard. The discrepancy between definitions of infection complicates interpretation of the literature and the treatment of failed TKAs owing to PJI. Based on our findings, we suggest establishing a common set of diagnostic criteria utilizing aspirate analysis to improve the treatment of PJI and facilitate interpretation of the literature. Level III, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.
Drabant, S; Klebovich, I; Gachályi, B; Renczes, G; Farsang, C
1998-09-01
Through several mechanisms, meals may modify the pharmacokinetics of drug products, thereby eliciting clinically significant food interactions. Food interactions with the drug substance and with the drug formulation should be distinguished. The food interaction of different drug products containing the same active ingredient can vary depending on the pharmaceutical formulation technology. Particularly in the case of modified-release products, the food/formulation interaction can play an important role in the development of a food interaction. It is well known, for example, that the bioavailability of theophylline can be influenced in different ways (increased, decreased, or unchanged) by concomitant intake of food with different sustained-release products. The role and methods of food interaction studies in different kinds of drug development (new chemical entity, modified-release products, generics) are reviewed. Prediction of the food effect on the basis of the physicochemical and pharmacokinetic characteristics of the drug molecule or formulation is discussed. The results of three food interaction studies carried out on products of EGIS Pharmaceuticals Ltd. are also reviewed. The pharmacokinetic parameters of a theophylline 400 mg retard tablet were practically the same under fasting conditions and after consumption of a high-fat standard breakfast. For a nifedipine 20 mg retard tablet, ingestion of a high-fat breakfast increased the AUC of nifedipine from 259.0 +/- 101.2 ng h/ml to 326.7 +/- 122.5 ng h/ml and Cmax from 34.5 +/- 15.9 ng/ml to 74.3 +/- 23.9 ng/ml, in agreement with the data in the literature; the statistical evaluation indicated significant differences between the pharmacokinetic parameters of the two administrations (before and after the meal). The effect of a high-fat breakfast on a generic version of a buspirone 10 mg tablet, and its bioequivalence after food consumption, were studied in a single-dose, three-way (test and reference products administered after consumption of a standard breakfast, as well as the test product under fasting conditions), cross-over, food-effect bioequivalence study. According to the results, the test product, which in a former study proved to be bioequivalent with the reference product in the fasting state, is also bioequivalent with the reference product under fed conditions, and food intake influenced the pharmacokinetics of the test tablets.
Population viability analysis with species occurrence data from museum collections.
Skarpaas, Olav; Stabbetorp, Odd E
2011-06-01
The most comprehensive data on many species come from scientific collections. Thus, we developed a method of population viability analysis (PVA) in which this type of occurrence data can be used. In contrast to classical PVA, our approach accounts for the inherent observation error in occurrence data and allows the estimation of the population parameters needed for viability analysis. We tested the sensitivity of the approach to spatial resolution of the data, length of the time series, sampling effort, and detection probability with simulated data and conducted PVAs for common, rare, and threatened species. We compared the results of these PVAs with results of standard method PVAs in which observation error is ignored. Our method provided realistic estimates of population growth terms and quasi-extinction risk in cases in which the standard method without observation error could not. For low values of any of the sampling variables we tested, precision decreased, and in some cases biased estimates resulted. The results of our PVAs with the example species were consistent with information in the literature on these species. Our approach may facilitate PVA for a wide range of species of conservation concern for which demographic data are lacking but occurrence data are readily available. ©2011 Society for Conservation Biology.
Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André
2011-01-01
Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
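A minimal sketch of the two analysis routes compared above is given below, with invented effect sizes: a variant raises the probability of belonging to a small upper mixture component of the trait, and the same genotyping budget is spent either on a random sample analysed by linear regression or on the screened extremes analysed as cases and controls. The power contrast reported in the study depends on its specific mixture and sampling assumptions; this only shows the mechanics.

# Minimal sketch: quantitative-trait regression vs extreme case-control design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_screened = 5000
geno = rng.binomial(2, 0.3, size=n_screened)                 # additive SNP coding 0/1/2
in_upper = rng.random(n_screened) < (0.02 + 0.02 * geno)     # SNP shifts component membership
trait = rng.normal(0, 1, n_screened) + 3.0 * in_upper        # bimodal mixture trait

# (a) standard linear regression in a random sample of 1000 genotyped subjects
idx = rng.choice(n_screened, 1000, replace=False)
lin = stats.linregress(geno[idx], trait[idx])

# (b) case-control analysis of the screened extremes (top and bottom 10%)
lo, hi = np.quantile(trait, [0.10, 0.90])
cases, controls = geno[trait >= hi], geno[trait <= lo]
table = [[np.sum(cases == g) for g in (0, 1, 2)],
         [np.sum(controls == g) for g in (0, 1, 2)]]
chi2, p_cc, _, _ = stats.chi2_contingency(table)

print(f"linear regression p = {lin.pvalue:.3g}, extremes case-control p = {p_cc:.3g}")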
Postimplant dosimetry using a Monte Carlo dose calculation engine: a new clinical standard.
Carrier, Jean-François; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, André-Guy; Vigneault, Eric; Beaulieu, Luc
2007-07-15
To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry. To compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are being specifically investigated. An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. For the clinical target volume (CTV) D(90) parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation is mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in a near future.
Standard Methods for Bolt-Bearing Testing of Textile Composites
NASA Technical Reports Server (NTRS)
Portanova, M. A.; Masters, J. E.
1995-01-01
The response of three 2-D braided materials to bolt bearing loading was evaluated using data generated by Boeing Defense and Space Group in Philadelphia, PA. Three test methods, stabilized single shear, unstabilized single shear, and double shear, were compared. In general, these textile composites were found to be sensitive to bolt bearing test methods. The stabilized single shear method yielded higher strengths than the unstabilized single shear method in all cases. The double shear test method always produced the highest strengths but these results may be somewhat misleading. It is therefore recommended that standard material comparisons be made using the stabilized single shear test method. The effects of two geometric parameters, W/D and e/D, were also studied. An evaluation of the effect of the specimen width (W) to hole diameter (D) ratio concluded that bolt bearing responses were consistent with open hole tension results. A W/D ratio of 6 or greater should be maintained. The proximity of the hole to the specimen edge significantly affected strength. In all cases, strength was improved by increasing the ratio of the distance from the hole center to the specimen edge (e) to the hole diameter (D) above 2. An e/D ratio of 3 or greater is recommended.
Single Higgs-boson production at a photon-photon collider: General 2HDM versus MSSM
NASA Astrophysics Data System (ADS)
López-Val, David; Solà, Joan
2011-08-01
We revisit the production of a single Higgs boson from direct γγ-scattering at a photon collider. We compute the total cross-section σ(γγ→h) (for h = h, H, A), and the strength of the effective hγγ coupling normalized to the Standard Model (SM), for both the general Two-Higgs-Doublet Model (2HDM) and the Minimal Supersymmetric Standard Model (MSSM). In both cases the predicted production rates for the CP-even (odd) states render up to 10⁴ (10³) events per 500 fb⁻¹ of integrated luminosity, in full consistency with all the theoretical and phenomenological constraints. Depending on the channel the maximum rates can be larger or smaller than the SM expectations, but in most of the parameter space they should be well measurable. We analyze how these departures depend on the dynamics underlying each of the models, supersymmetric and non-supersymmetric, and highlight the possible distinctive phenomenological signatures. We demonstrate that this process could be extremely useful to discern non-supersymmetric Higgs bosons from supersymmetric ones. Furthermore, in the MSSM case, we show that γγ-physics could decisively help to overcome the serious impasse afflicting Higgs boson physics at the infamous “LHC wedge”.
Aziz, Mina Sr; Tsuji, Matthew Rs; Nicayenzi, Bruce; Crookshank, Meghan C; Bougherara, Habiba; Schemitsch, Emil H; Zdero, Radovan
2014-05-01
During orthopedic surgery, screws are inserted by "subjective feel" in humeri for fracture fixation, that is, stopping torque, while trying to prevent accidental over-tightening that causes screw-bone interface failure, that is, stripping torque. However, no studies exist on stopping torque, stripping torque, or stopping/stripping torque ratio in human or artificial humeri. This study evaluated five types of humeri, namely, human fresh-frozen (n = 19), human embalmed (n = 18), human dried (n = 15), artificial "normal" (n = 13), and artificial "osteoporotic" (n = 13). An orthopedic surgeon used a torque screwdriver to insert 3.5-mm-diameter cortical screws into humeral shafts and 6.5-mm-diameter cancellous screws into humeral heads by "subjective feel" to obtain stopping and stripping torques. The five outcome measures were raw and normalized stopping torque, raw and normalized stripping torque, and stopping/stripping torque ratio. Normalization was done as raw torque/screw-bone interface area. For "gold standard" fresh-frozen humeri, cortical screw tests yielded averages of 1312 N mm (raw stopping torque), 30.4 N/mm (normalized stopping torque), 1721 N mm (raw stripping torque), 39.0 N/mm (normalized stripping torque), and 82% (stopping/stripping torque ratio). Similarly, fresh-frozen humeri gave cancellous screw average results of 307 N mm (raw stopping torque), 0.9 N/mm (normalized stopping torque), 392 N mm (raw stripping torque), 1.2 N/mm (normalized stripping torque), and 79% (stopping/stripping torque ratio). Of the five cortical screw parameters for fresh-frozen humeri versus other groups, statistical equivalence (p ≥ 0.05) occurred in four cases (embalmed), three cases (dried), four cases (artificial "normal"), and four cases (artificial "osteoporotic"). Of the five cancellous screw parameters for fresh-frozen humeri versus other groups, statistical equivalence (p ≥ 0.05) occurred in five cases (embalmed), one case (dried), one case (artificial "normal"), and zero cases (artificial "osteoporotic"). Stopping/stripping torque ratios were relatively constant for all groups at 77%-88% (cortical screws) and 79%-92% (cancellous screws). © IMechE 2014.
Devlin, Phillip M; Gaspar, Laurie E; Buzurovic, Ivan; Demanes, D Jeffrey; Kasper, Michael E; Nag, Subir; Ouhib, Zoubir; Petit, Joshua H; Rosenthal, Seth A; Small, William; Wallner, Paul E; Hartford, Alan C
This collaborative practice parameter technical standard has been created by the American College of Radiology and the American Brachytherapy Society to guide the use of electronically generated low-energy radiation sources (ELSs), that is, electronic X-ray sources with peak voltages up to 120 kVp used to deliver therapeutic radiation therapy. The parameter provides a guideline for utilizing ELS, including patient selection and consent, treatment planning, and delivery processes. The parameter reviews the published clinical data on ELS results in skin, breast, and other cancers. This technical standard recommends appropriate qualifications of the involved personnel. The parameter reviews the technical issues relating to equipment specifications as well as patient and personnel safety. Regarding educational programs, it is suggested that the training level for clinicians be equivalent to that for other radiation therapies. It also states that ELS treatments must be delivered using the same standards of quality and safety as those in place for other forms of radiation therapy. Copyright © 2017 American Brachytherapy Society and American College of Radiology. Published by Elsevier Inc. All rights reserved.
Path loss variation of on-body UWB channel in the frequency bands of IEEE 802.15.6 standard.
Goswami, Dayananda; Sarma, Kanak C; Mahanta, Anil
2016-06-01
The wireless body area network (WBAN) has been gaining tremendous attention among researchers and academics for its envisioned applications in healthcare services. Ultra-wideband (UWB) radio technology is considered an excellent air interface for communication among body area network devices. Characterisation and modelling of channel parameters are essential prerequisites for the development of a reliable communication system. The path loss of the on-body UWB channel for each frequency band defined in the IEEE 802.15.6 standard is experimentally determined. The parameters of the path loss model are statistically determined by analysing the measurement data. Both line-of-sight and non-line-of-sight channel conditions are considered in the measurements. Variations of the parameter values with human body size and with the surrounding environment are analysed. It is observed that the parameters of the path loss model vary with the frequency band as well as with body size and surrounding environment. The derived parameter values are specific to the particular frequency bands of the IEEE 802.15.6 standard and will be useful for the development of efficient UWB WBAN systems.
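Such path loss parameters are commonly extracted by fitting the log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0) + S. The sketch below, with hypothetical on-body measurements rather than the paper's data, recovers the path loss exponent n and the shadowing spread.

# Minimal sketch: log-distance path loss fit for an on-body channel.
import numpy as np

d = np.array([0.1, 0.2, 0.3, 0.45, 0.6, 0.8, 1.0])          # Tx-Rx separation (m), hypothetical
pl = np.array([48.0, 57.2, 62.1, 66.8, 70.5, 74.9, 77.3])   # measured path loss (dB), hypothetical
d0 = 0.1                                                    # reference distance (m)

x = 10.0 * np.log10(d / d0)
n, pl_d0 = np.polyfit(x, pl, 1)                             # slope = exponent n, intercept = PL(d0)
shadowing_std = np.std(pl - (pl_d0 + n * x), ddof=2)        # residual spread around the model

print(f"path loss exponent n = {n:.2f}, PL(d0) = {pl_d0:.1f} dB, sigma_S = {shadowing_std:.2f} dB")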
Gómez, Fátima Somovilla; Lorza, Rubén Lostado; Bobadilla, Marina Corral; García, Rubén Escribano
2017-09-21
The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffness and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffness and bulges that were obtained from the standard test FE simulations. The first adjustment criteria considered stiffness and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criteria considered stiffness as most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3-L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust.
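The desirability-function step of this procedure can be sketched as follows: surrogate (response-surface) predictions of a stiffness and a bulge are each mapped to a desirability in [0, 1] relative to an experimental target and combined by a geometric mean, whose maximum selects the material parameters. The response surfaces, targets, tolerances, and parameter names below are invented for illustration and are not the study's models.

# Minimal sketch: combining response-surface predictions with desirability functions.
import numpy as np

def stiffness_model(e_annulus, e_nucleus):      # hypothetical surrogate for one stiffness
    return 800 + 120 * e_annulus + 40 * e_nucleus

def bulge_model(e_annulus, e_nucleus):          # hypothetical surrogate for one bulge
    return 2.5 - 0.15 * e_annulus + 0.05 * e_nucleus

def desirability(predicted, target, tolerance):
    # 1 at the target, falling linearly to 0 at +/- tolerance
    return np.clip(1.0 - np.abs(predicted - target) / tolerance, 0.0, 1.0)

target_stiffness, target_bulge = 1400.0, 1.9    # experimental values to match (assumed)

e_a, e_n = np.meshgrid(np.linspace(1, 6, 200), np.linspace(1, 6, 200))
d1 = desirability(stiffness_model(e_a, e_n), target_stiffness, 300.0)
d2 = desirability(bulge_model(e_a, e_n), target_bulge, 0.8)
overall = np.sqrt(d1 * d2)                      # geometric mean = equal weighting

best = np.unravel_index(np.argmax(overall), overall.shape)
print("best parameters:", e_a[best], e_n[best], "overall desirability:", overall[best])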
Taxanes: vesicants, irritants, or just irritating?
Barbee, Meagan S; Owonikoko, Taofeek K; Harvey, R Donald
2014-01-01
Several classes of antineoplastic agents are universally referred to as vesicants with ample supporting literature. However, the literature surrounding the taxanes is controversial. While the American Society of Clinical Oncology and Oncology Nursing Society Chemotherapy Administration Safety Standards and the Chemotherapy and Biotherapy Guidelines and Recommendations for Practice identify the risks of extravasation and the parameters surrounding the infusion of known vesicants, recommend administration sites for known agents, and recommend antidotes for particular extravasation cases, they fail to provide specific recommendations for the administration of individual taxanes, or a classification system for antineoplastic agents as vesicants, irritants, or inert compounds. There is also a lack of prescribing information regarding such recommendations. The lack of a formal classification system further complicates the accurate delineation of vesicant antineoplastic agents and subsequent appropriate intravenous administration and extravasation management. There are several factors that make the classification of taxanes as vesicants or irritants challenging. Comprehensive preclinical data describing potential mechanisms of tissue damage or vesicant-like properties are lacking. Furthermore, most case reports of taxane extravasation fail to include the parameters surrounding administration, such as the concentration of medication and duration of infusion, making it difficult to set parameters for vesicant potential. Subsequently, many practitioners default to central venous administration of taxanes without evidence that such administration minimizes the risk of extravasation or improves outcomes thereof. Here, we review briefly the data surrounding taxane extravasation and potential vesicant or irritant properties, classify the taxanes, and propose a spectrum for antineoplastic agent potential to cause tissue injury that warrants clinical intervention if extravasation occurs.
GLobal Integrated Design Environment
NASA Technical Reports Server (NTRS)
Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.
2011-01-01
The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. This slow process of data exchange could stretch a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing much more information about a design session to be made available. GLIDE is written in a combination of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available to download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the client-server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE client, as many of their custom tools run in Excel.
Spectral Rate Theory for Two-State Kinetics
NASA Astrophysics Data System (ADS)
Prinz, Jan-Hendrik; Chodera, John D.; Noé, Frank
2014-02-01
Classical rate theories often fail in cases where the observable(s) or order parameter(s) used is a poor reaction coordinate or the observed signal is deteriorated by noise, such that no clear separation between reactants and products is possible. Here, we present a general spectral two-state rate theory for ergodic dynamical systems in thermal equilibrium that explicitly takes into account how the system is observed. The theory allows the systematic estimation errors made by standard rate theories to be understood and quantified. We also elucidate the connection of spectral rate theory with the popular Markov state modeling approach for molecular simulation studies. An optimal rate estimator is formulated that gives robust and unbiased results even for poor reaction coordinates and can be applied to both computer simulations and single-molecule experiments. No definition of a dividing surface is required. Another result of the theory is a model-free definition of the reaction coordinate quality. The reaction coordinate quality can be bounded from below by the directly computable observation quality, thus providing a measure allowing the reaction coordinate quality to be optimized by tuning the experimental setup. Additionally, the respective partial probability distributions can be obtained for the reactant and product states along the observed order parameter, even when these strongly overlap. The effects of both filtering (averaging) and uncorrelated noise are also examined. The approach is demonstrated on numerical examples and experimental single-molecule force-probe data of the p5ab RNA hairpin and the apo-myoglobin protein at low pH, focusing here on the case of two-state kinetics.
Whitmyre, Gary K; Pandian, Muhilan D
2018-06-01
Use of vent-free gas heating appliances for supplemental heating in U.S. homes is increasing. However, there is currently a lack of information on the potential impact of these appliances on indoor air quality for homes constructed according to energy-efficient and green building standards. A probabilistic analysis was conducted to estimate the impact of vent-free gas heating appliances on indoor air concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), carbon dioxide (CO2), water vapor, and oxygen in "tight" energy-efficient homes in the United States. A total of 20,000 simulations were conducted for each Department of Energy (DOE) heating region to capture a wide range of home sizes, appliance features, and conditions, by varying a number of parameters, e.g., room volume, house volume, outdoor humidity, air exchange rates, appliance input rates (Btu/hr), and house heat loss factors. Predicted airborne levels of CO were below the U.S. Environmental Protection Agency (EPA) standard of 9 ppm for all modeled cases. The airborne concentrations of NO2 were below the U.S. Consumer Product Safety Commission (CPSC) guideline of 0.3 ppm and the Health Canada benchmark of 0.25 ppm in all cases and were below the World Health Organization (WHO) standard of 0.11 ppm in 99-100% of all cases. Predicted levels of CO2 were below the Health Canada standard of 3500 ppm for all simulated cases. Oxygen levels in the room of vent-free heating appliance use were not significantly reduced. The great majority of cases in all DOE regions were associated with relative humidity (RH) levels from all indoor water vapor sources that were less than the EPA-recommended 70% RH maximum to avoid active mold and mildew growth. The conclusion of this investigation is that when installed in accordance with the manufacturer's instructions, vent-free gas heating appliances maintain acceptable indoor air quality in tight energy-efficient homes, as defined by the standards referenced in this report. Probabilistic modeling of indoor air concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), carbon dioxide (CO2), water vapor, and oxygen associated with use of vent-free gas heating appliances provides new data indicating that uses of these devices are consistent with acceptable indoor air quality in "tight" energy-efficient homes in the United States. This study will provide authoritative bodies such as the International Code Council with definitive information that will assist in the development of future versions of national building codes, and will provide evaluation of the performance of unvented gas heating products in energy conservation homes.
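The style of calculation described above can be sketched with a single-zone steady-state mass balance, where the indoor increment of a combustion product is S/(lambda*V) and parameter uncertainty is propagated by Monte Carlo sampling. The emission rates, volumes, and air-exchange distributions below are illustrative assumptions, not the study's inputs.

# Minimal sketch: Monte Carlo single-zone mass balance for indoor CO.
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
volume = rng.uniform(30.0, 120.0, n)          # room/zone volume (m^3), assumed range
ach = rng.lognormal(np.log(0.35), 0.4, n)     # air exchange rate (1/h), "tight" homes, assumed
emission = rng.uniform(20.0, 60.0, n)         # CO emission rate (mg/h), assumed appliance

conc_mg_m3 = emission / (ach * volume)        # steady-state indoor increment (mg/m^3)
conc_ppm = conc_mg_m3 / 1.15                  # ~1 ppm CO = 1.15 mg/m^3 at room temperature

print("median CO increment: %.2f ppm" % np.median(conc_ppm))
print("fraction of cases below the 9 ppm guideline: %.3f" % np.mean(conc_ppm < 9.0))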
Aerodynamic Analysis Over Double Wedge Airfoil
NASA Astrophysics Data System (ADS)
Prasad, U. S.; Ajay, V. S.; Rajat, R. H.; Samanyu, S.
2017-05-01
Aeronautical studies are increasingly focused on supersonic flight and on methods to attain better, safer flight with the highest possible performance. Aerodynamic analysis is part of this procedure and includes a focus on airfoil shapes that permit sustained flight of aircraft at these speeds. Airfoil shapes differ based on the application; hence the airfoil shapes considered for supersonic speeds are different from those considered for subsonic flight. The present work examines the effects of changing a physical parameter of the double wedge airfoil over the transonic and supersonic Mach number range. The physical parameter considered for the double wedge case is the wedge angle, ranging from 5 to 15 degrees. Available computational tools are utilized for the analysis. The double wedge airfoil is analysed at different angles of attack (AOA) for each wedge angle. The analysis is carried out using Fluent at standard conditions with the specific heat ratio taken as 1.4. Manual calculations of the oblique shock properties are performed with the help of Microsoft Excel. MATLAB is used to write a code that obtains the shock angle from the Mach number and wedge angle for the given parameters. Results obtained from the manual calculations and the Fluent analysis are cross-checked.
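The shock-angle calculation mentioned above follows the standard θ-β-M relation, tan(θ) = 2·cot(β)·(M²sin²β − 1)/(M²(γ + cos 2β) + 2). A minimal numerical sketch (in Python rather than the MATLAB code used in the study) is given below for the weak-shock solution at Mach 2 and the stated wedge angles, with γ = 1.4; the Mach number choice is illustrative.

# Minimal sketch: weak oblique shock angle from the theta-beta-Mach relation.
import numpy as np
from scipy.optimize import brentq

GAMMA = 1.4

def theta_from_beta(beta, mach):
    # flow deflection angle (rad) produced by a shock of wave angle beta (rad)
    num = mach**2 * np.sin(beta) ** 2 - 1.0
    den = mach**2 * (GAMMA + np.cos(2.0 * beta)) + 2.0
    return np.arctan(2.0 / np.tan(beta) * num / den)

def weak_shock_angle(mach, theta_deg):
    theta = np.radians(theta_deg)
    mu = np.arcsin(1.0 / mach)                      # Mach angle: lower bound for beta
    betas = np.linspace(mu + 1e-6, np.pi / 2 - 1e-6, 2000)
    b_max = betas[np.argmax([theta_from_beta(b, mach) for b in betas])]
    return np.degrees(brentq(lambda b: theta_from_beta(b, mach) - theta, mu + 1e-6, b_max))

for wedge_angle in (5.0, 10.0, 15.0):
    print(f"M = 2.0, wedge {wedge_angle:4.1f} deg -> shock angle "
          f"{weak_shock_angle(2.0, wedge_angle):.2f} deg")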
Windowed Multitaper Correlation Analysis of Multimodal Brain Monitoring Parameters
Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander
2015-01-01
Although multimodal monitoring sets the standard in the daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates cerebral perfusion and oxygen supply in the case of severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows the parameters of the windowing method to be tuned in such a way that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally, it could be shown that the occurrence of a distinct correlation parameter, called scp, is a high-quality predictor of patient outcome. PMID:25821507
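The two ingredients described above, windowed Fourier-based coherence and Hilbert phase differences, can be sketched as follows on synthetic pressure signals; the sampling rate, window length, frequency band, and signals are illustrative assumptions, not the authors' settings.

# Minimal sketch: sliding-window coherence and Hilbert phase lag between ABP and ICP.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 1.0                                        # 1 Hz monitoring data (assumed)
t = np.arange(0, 4 * 3600, 1.0 / fs)            # four hours of samples
rng = np.random.default_rng(6)
abp = 80 + 5 * np.sin(2 * np.pi * t / 60.0) + rng.normal(0, 1, t.size)
icp = 15 + 2 * np.sin(2 * np.pi * t / 60.0 + 0.6) + rng.normal(0, 1, t.size)

win = int(20 * 60 * fs)                         # 20-minute analysis window
for start in range(0, t.size - win, win):
    seg_a, seg_i = abp[start:start + win], icp[start:start + win]
    f, coh = coherence(seg_a, seg_i, fs=fs, nperseg=256)
    band = (f > 1 / 120) & (f < 1 / 30)         # slow-wave band around the 60 s oscillation
    phase = np.angle(hilbert(seg_a - seg_a.mean())) - np.angle(hilbert(seg_i - seg_i.mean()))
    print(f"t = {start / 3600:4.1f} h  mean coherence = {coh[band].mean():.2f}  "
          f"median phase lag = {np.median(phase):.2f} rad")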
Assessment of groundwater quality in a typical rural settlement in southwest Nigeria.
Adekunle, I M; Adetunji, M T; Gbadebo, A M; Banjoko, O P
2007-12-01
In most rural settlements in Nigeria, access to clean and potable water is a great challenge, resulting in water-borne diseases. The aim of this study was to assess the levels of some physical, chemical, biochemical, and microbial water quality parameters in twelve hand-dug wells in a typical rural area (Igbora) of the southwest region of the country. Seasonal variations and proximity to pollution sources (municipal waste dumps and defecation sites) were also examined. Parameters were determined using standard procedures. All parameters were detected up to 200 m from the pollution sources, and most of them increased in concentration during the rainy season compared with the dry periods, pointing to infiltration from storm water. The coliform population, Pb, NO3-, and Cd in most cases exceeded the World Health Organization recommended thresholds for potable water. The effect of distance from pollution sources was most pronounced for fecal and total coliform counts, which decreased with increasing distance from the waste dumps. The well water samples were therefore not suitable for human consumption without adequate treatment. Regular monitoring of groundwater quality, abolition of unhealthy waste disposal practices, and introduction of modern techniques are recommended.
NASA Astrophysics Data System (ADS)
Soeharwinto; Sinulingga, Emerson; Siregar, Baihaqi
2017-01-01
Accurate information helps authorities make good policies for prevention and mitigation after a volcanic eruption disaster. Monitoring the environmental parameters of a post-eruption volcano provides important information for these authorities. Such a monitoring system can be developed using Wireless Sensor Network technology, which has already been applied to flood early warning systems, solar radiation mapping, and watershed monitoring. This paper describes the implementation of a remote environmental monitoring system for Mount Sinabung after its eruption. The system monitors three environmental parameters: soil condition, water quality and (outdoor) air quality. Motes equipped with the appropriate sensors are placed at sample locations as components of the monitoring system. The measured values from the sensors are periodically sent to a data server using a 3G/GPRS communication module, and the data can be downloaded by the user for further analysis. The measurement and data analysis results generally indicate that the environmental parameters are within the normal/standard range. The sample locations are safe for living and suitable for cultivation, but continued vigilance is required due to the uncertainty of Sinabung's status.
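As a rough illustration of the mote-side behaviour described, periodically reading sensors and pushing readings to a data server over the cellular link, a minimal sketch might look as follows; the sensor-reading stub, endpoint URL, and reporting interval are hypothetical placeholders, not details of the deployed Sinabung system.

```python
import json
import time
import urllib.request

ENDPOINT = "http://example.org/api/readings"   # placeholder data-server endpoint
INTERVAL_S = 600                               # placeholder reporting period


def read_sensors():
    """Stand-in for the soil, water and air quality sensor drivers on the mote."""
    return {"soil_moisture": 0.0, "water_ph": 7.0, "air_pm10": 0.0, "timestamp": time.time()}


def upload(reading: dict) -> None:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30)    # sent over the 3G/GPRS link in the field system


if __name__ == "__main__":
    while True:
        upload(read_sensors())
        time.sleep(INTERVAL_S)
```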
Code of Federal Regulations, 2011 CFR
2011-01-01
... writing concerning the energy performance or water performance (in the case of faucets, showerheads, water... standard or water performance standard (in the case of faucets, showerheads, water closets, and urinals... standard (in the case of faucets, showerheads, water closets, and urinals) shall be based on the testing...
Gomez, Gabriela B; Foster, Nicola; Brals, Daniella; Nelissen, Heleen E; Bolarinwa, Oladimeji A; Hendriks, Marleen E; Boers, Alexander C; van Eck, Diederik; Rosendaal, Nicole; Adenusi, Peju; Agbede, Kayode; Akande, Tanimola M; Boele van Hensbroek, Michael; Wit, Ferdinand W; Hankins, Catherine A; Schultsz, Constance
2015-01-01
While the Nigerian government has made progress towards the Millennium Development Goals, further investments are needed to achieve the targets of the post-2015 Sustainable Development Goals, including Universal Health Coverage. Economic evaluations of innovative interventions can help inform investment decisions in resource-constrained settings. We aim to assess the cost and cost-effectiveness of maternal care provided within the new Kwara State Health Insurance program (KSHI) in rural Nigeria. We used a decision analytic model to simulate a cohort of pregnant women. The primary outcome is the incremental cost-effectiveness ratio (ICER) of the KSHI scenario compared to the current standard of care. Intervention costs from a healthcare provider perspective included service delivery costs and above-service-level costs; these were evaluated in a participating hospital and using financial records from the managing organisations, respectively. Standard of care costs from a provider perspective were derived from the literature using an ingredients approach. We generated 95% credibility intervals around the primary outcome through probabilistic sensitivity analysis (PSA) based on a Monte Carlo simulation. We conducted one-way sensitivity analyses across key model parameters and separately assessed the sensitivity of our results to the performance of the base case through a scenario analysis. Finally, we assessed the sustainability and feasibility of this program's scale-up within the State's healthcare financing structure through a budget impact analysis. The KSHI scenario results in a health benefit to patients at a higher cost compared to the base case. The mean ICER (US$46.4 per disability-adjusted life year averted) is considered very cost-effective compared to a willingness-to-pay threshold of one gross domestic product per capita (Nigeria, US$2,730 in 2012). Our conclusion was robust to uncertainty in parameter estimates (PSA: median US$49.1, 95% credible interval 21.9-152.3), during one-way sensitivity analyses, and when the cost, quality and utilization parameters of the base case scenario were changed. The sustainability of this program's scale-up by the State depends on further investments in healthcare. This study provides evidence that the investment made by the KSHI program in rural Nigeria is likely to have been cost-effective; however, further healthcare investments are needed for this program to be successfully expanded within Kwara State. Policy makers should consider supporting financial initiatives to reduce maternal mortality, tackling both supply- and demand-side issues in access to care.
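The decision model itself is not reproduced here. The probabilistic sensitivity analysis described, drawing parameters from assumed distributions, recomputing incremental costs and effects, and summarising the ICER with a 95% credible interval, can be sketched as follows; the distributions and the toy cost and effect figures are illustrative, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical parameter draws (costs in US$ per woman, effects in DALYs averted per woman)
cost_insured = rng.gamma(shape=100.0, scale=1.2, size=N)   # mean cost under the program (~120)
cost_standard = rng.gamma(shape=100.0, scale=0.6, size=N)  # mean cost of standard care (~60)
dalys_averted = rng.beta(2.0, 100.0, size=N)               # small per-person health gain

icers = (cost_insured - cost_standard) / dalys_averted
lo, hi = np.percentile(icers, [2.5, 97.5])                 # 95% credible interval
print(f"median ICER {np.median(icers):.1f} US$/DALY averted (95% CrI {lo:.1f}-{hi:.1f})")
```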
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
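RODEo itself is not shown here. As a conceptual contrast with a full factorial DOE, the sketch below illustrates a generic Bayesian sequential design in which a Gaussian-process surrogate with an expected-improvement acquisition chooses each next experiment; the etch_rate stub, parameter ranges, and iteration counts are all illustrative, not the tool's actual method.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)


def etch_rate(x):
    """Placeholder 'experiment': an unknown response over two normalised process settings."""
    power, pressure = x
    return -((power - 0.6) ** 2 + (pressure - 0.3) ** 2) + 0.01 * rng.normal()


X = rng.uniform(size=(5, 2))                 # small initial design instead of a full grid
y = np.array([etch_rate(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4, normalize_y=True)

for _ in range(15):                          # sequential experiments guided by the surrogate
    gp.fit(X, y)
    candidates = rng.uniform(size=(500, 2))
    mu, sd = gp.predict(candidates, return_std=True)
    improvement = mu - y.max()
    z = improvement / np.maximum(sd, 1e-9)
    ei = improvement * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, etch_rate(x_next))

print("best settings found:", X[np.argmax(y)], "with rate", y.max())
```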
Critical levels as applied to ozone for North American forests
Robert C. Musselman
2006-01-01
The United States and Canada have used concentration-based parameters for air quality standards for ozone effects on forests in North America. The European critical levels method for air quality standards uses an exposure-based parameter, a cumulative ozone concentration index with a threshold cutoff value. The critical levels method has not been used in North America...
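The exposure-based index referred to above, a cumulative sum of ozone concentrations above a threshold cutoff, can be sketched as follows (an AOT40-style calculation); the 40 ppb threshold and the daylight-hour window are the common European convention and are shown here only for illustration.

```python
import numpy as np


def cumulative_exposure_index(hourly_ppb, hours_of_day, threshold_ppb=40.0, daylight=(8, 20)):
    """Sum of (concentration - threshold) over daylight hours above the threshold (ppb-hours)."""
    hourly_ppb = np.asarray(hourly_ppb, dtype=float)
    hours_of_day = np.asarray(hours_of_day)
    in_daylight = (hours_of_day >= daylight[0]) & (hours_of_day < daylight[1])
    excess = np.clip(hourly_ppb - threshold_ppb, 0.0, None)  # only exceedances accumulate
    return excess[in_daylight].sum()


# Toy example: one day of hourly ozone values
hours = np.arange(24)
ozone = 30.0 + 25.0 * np.exp(-((hours - 14) ** 2) / 18.0)   # afternoon peak ~55 ppb
print(cumulative_exposure_index(ozone, hours))
```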
40 CFR 80.46 - Measurement of reformulated gasoline fuel parameters.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)−0.347 RVP kPa = (0.956 * X)−2.39 (d) Distillation. Distillation parameters must be determined using...)(3) of this section. (2) Beginning January 1, 2004, the sulfur content of butane must be determined... paragraph (a)(2) of this section: (i) ASTM standard method D 4468-85 (Reapproved 2000), “Standard Test...
An analytic formula for the supercluster mass function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seunghwan; Lee, Jounghun, E-mail: slim@astro.umass.edu, E-mail: jounghun@astro.snu.ac.kr
2014-03-01
We present an analytic formula for the supercluster mass function, which is constructed by modifying the extended Zel'dovich model for the halo mass function. The formula has two characteristic parameters whose best-fit values are determined by fitting to the numerical results from N-body simulations for the standard ΛCDM cosmology. The parameters are found to be independent of redshifts and robust against variation of the key cosmological parameters. Under the assumption that the same formula for the supercluster mass function is valid for non-standard cosmological models, we show that the relative abundance of the rich superclusters should be a powerful indicator of any deviation of the real universe from the prediction of the standard ΛCDM model.
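The extended Zel'dovich formula is not reproduced in the abstract, so the sketch below only illustrates the described calibration step, fitting two free parameters of a placeholder mass-function shape to binned simulation abundances; the functional form and the synthetic data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)


def mass_function_model(log_mass, amplitude, slope):
    """Hypothetical two-parameter shape standing in for the paper's analytic formula."""
    return amplitude * np.exp(-slope * (log_mass - 15.0))


# Placeholder "simulation" data: bin centres in log10(M/Msun) and measured abundances
log_mass = np.linspace(15.0, 16.5, 10)
abundance = 1e-6 * np.exp(-3.0 * (log_mass - 15.0)) * (1.0 + 0.05 * rng.normal(size=10))

(amp_fit, slope_fit), _ = curve_fit(mass_function_model, log_mass, abundance, p0=(1e-6, 3.0))
print("best-fit parameters:", amp_fit, slope_fit)
```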