Noninformative prior in the quantum statistical model of pure states
NASA Astrophysics Data System (ADS)
Tanaka, Fuyuhiko
2012-06-01
In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any particular choice. We adopt a game-theoretic approach that is applicable in classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that noninformative priors exist for a general class of pure-states models. Theoretically, this provides one way of representing ignorance about a given quantum system with partial information. Practically, our method supplies a default distribution on the model for applying Bayesian techniques to quantum-state tomography with small samples.
A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring.
Takahashi, Kunihiko; Kulldorff, Martin; Tango, Toshiro; Yih, Katherine
2008-04-11
Early detection of disease outbreaks enables public health officials to implement disease control and prevention measures at the earliest possible time. A time periodic geographical disease surveillance system based on a cylindrical space-time scan statistic has been used extensively for disease surveillance along with the SaTScan software. In the purely spatial setting, many different methods have been proposed to detect spatial disease clusters. In particular, some spatial scan statistics are aimed at detecting irregularly shaped clusters which may not be detected by the circular spatial scan statistic. Based on the flexible purely spatial scan statistic, we propose a flexibly shaped space-time scan statistic for early detection of disease outbreaks. The performance of the proposed space-time scan statistic is compared with that of the cylindrical scan statistic using benchmark data. In order to compare their performances, we have developed a space-time power distribution by extending the purely spatial bivariate power distribution. Daily syndromic surveillance data in Massachusetts, USA, are used to illustrate the proposed test statistic. The flexible space-time scan statistic is well suited for detecting and monitoring disease outbreaks in irregularly shaped areas.
A critical look at prospective surveillance using a scan statistic.
Correa, Thais R; Assunção, Renato M; Costa, Marcelo A
2015-03-30
The scan statistic is a very popular surveillance technique for purely spatial, purely temporal, and spatial-temporal disease data. It was extended to the prospective surveillance case, and it has been applied quite extensively in this situation. We show that, when the usual signal rules, such as those implemented in the SaTScan™ (Boston, MA, USA) software, are used, the scan statistic method is not appropriate for the prospective case. The reason is that it does not adjust properly for the sequential and repeated tests carried out during the surveillance. We demonstrate that the nominal significance level α is not meaningful and that there is no relationship between α and the recurrence interval or the average run length (ARL). In some cases, the ARL may be equal to ∞, which makes the method ineffective. This lack of control of the type-I error probability and of the ARL leads us to strongly oppose the use of the scan statistic with the usual signal rules in the prospective context. Copyright © 2014 John Wiley & Sons, Ltd.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
The possible modifications of the HISSE model for pure LANDSAT agricultural data
NASA Technical Reports Server (NTRS)
Peters, C.
1981-01-01
A method for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field) is discussed. Theoretical arguments are given which show that any significant refinement of the model beyond this proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.
A hybrid sensing approach for pure and adulterated honey classification.
Subari, Norazian; Mohamad Saleh, Junita; Md Shakaff, Ali Yeon; Zakaria, Ammar
2012-10-17
This paper presents a comparison between data from single modality and fusion methods to classify Tualang honey as pure or adulterated using Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) statistical classification approaches. Ten different brands of certified pure Tualang honey were obtained throughout peninsular Malaysia and Sumatera, Indonesia. Various concentrations of two types of sugar solution (beet and cane sugar) were used in this investigation to create honey samples of 20%, 40%, 60% and 80% adulteration concentrations. Honey data extracted from an electronic nose (e-nose) and Fourier Transform Infrared Spectroscopy (FTIR) were gathered, analyzed and compared based on fusion methods. Visual observation of classification plots revealed that the PCA approach was able to distinguish pure and adulterated honey samples better than the LDA technique. Overall, the validated classification results based on FTIR data (88.0%) gave higher classification accuracy than e-nose data (76.5%) using the LDA technique. Honey classification based on normalized low-level and intermediate-level FTIR and e-nose fusion data scored classification accuracies of 92.2% and 88.7%, respectively, using the Stepwise LDA method. The results suggested that pure and adulterated honey samples were better classified using FTIR and e-nose fusion data than single modality data.
Geng, Li; Qiao, Guang-yan; Gu, Kai-ka
2016-04-01
To investigate the effect of fluoride on the electrochemical corrosion of dental pure titanium before and after adhesion of Streptococcus mutans, dental pure titanium specimens were tested with an electrochemical measurement system, including electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization curve (PD) methods, in artificial saliva with 0 g/L and 1.0 g/L sodium fluoride, before and after being dipped into culture medium with Streptococcus mutans for 24 h. The corrosion parameters obtained from the electrochemical tests, including the polarization resistance (R(ct)), corrosion potential (E(corr)), pitting breakdown potential (E(b)), and the difference between E(corr) and E(b) representing the "pseudo-passivation" (ΔE), were used to evaluate the corrosion resistance of dental pure titanium. The data were analyzed by 2×2 factorial statistical analysis to examine the effects of sodium fluoride and adhesion of Streptococcus mutans, using the SPSS 12.0 software package. The results showed that the corrosion parameters R(ct), E(corr), E(b), and ΔE of pure titanium differed significantly between before and after adhesion of Streptococcus mutans in the same solution (P<0.05), and between artificial saliva with 0 g/L and 1.0 g/L sodium fluoride (P<0.05). Dental pure titanium was prone to corrosion in artificial saliva with sodium fluoride, and the corrosion resistance of pure titanium decreased distinctly after immersion in culture medium with Streptococcus mutans.
Isak, I; Patel, M; Riddell, M; West, M; Bowers, T; Wijeyekoon, S; Lloyd, J
2016-08-01
Fourier transform infrared (FTIR) spectroscopy was used in this study for the rapid quantification of polyhydroxyalkanoates (PHA) in mixed and pure culture bacterial biomass. Three different statistical analysis methods (regression, partial least squares (PLS), and nonlinear) were applied to the FTIR data, and the results were plotted against the PHA values measured with the reference gas chromatography technique. All methods predicted PHA content in mixed culture biomass with comparable efficiency, as indicated by similar residual values. The PHA content in these cultures ranged from low to medium (0-44 wt% of dried biomass content). However, for the analysis of the combined mixed and pure culture biomass, with PHA concentrations ranging from low to high (0-93% of dried biomass content), the PLS method was the most efficient. This paper reports, for the first time, the use of a single calibration model, constructed with a combination of mixed and pure cultures covering a wide PHA range, for predicting PHA content in biomass. Currently, no universal method exists for processing FTIR data for polyhydroxyalkanoate (PHA) quantification. This study compares three different methods of analysing FTIR data for quantification of PHAs in biomass. A new data-processing approach was proposed and the results were compared against existing literature methods. Most publications report PHA quantification over a medium range in pure culture; in our study, we encompassed both mixed and pure culture biomass containing a broader range of PHA in the calibration curve. The resulting prediction model is useful for rapid quantification of a wider range of PHA content in biomass. © 2016 The Society for Applied Microbiology.
Consistency of extreme flood estimation approaches
NASA Astrophysics Data System (ADS)
Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf
2017-04-01
Estimates of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study provides useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of the methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.
2017-09-28
Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.
Brasil, Christiane Regina Soares; Delbem, Alexandre Claudio Botazzo; da Silva, Fernando Luís Barroso
2013-07-30
This article focuses on the development of an approach for ab initio protein structure prediction (PSP) without using any earlier knowledge from similar protein structures, such as fragment-based statistics or inference of secondary structures. Such an approach is called purely ab initio prediction. The article shows that well-designed multiobjective evolutionary algorithms can predict relevant protein structures in a purely ab initio way. One challenge for purely ab initio PSP is the prediction of structures with β-sheets. To work with such proteins, this research has also developed procedures to efficiently estimate hydrogen bond and solvation contribution energies. Considering van der Waals, electrostatic, hydrogen bond, and solvation contribution energies, PSP is a problem with four energetic terms to be minimized. Each interaction energy term can be considered an objective of an optimization method. Combinatorial problems with four objectives have been considered too complex for the available multiobjective optimization (MOO) methods. The proposed approach, called "multiobjective evolutionary algorithm with many tables" (MEAMT), can efficiently deal with four objectives through their combination, performing a more adequate sampling of the objective space. Therefore, this method can better map the promising regions in this space, predicting structures in a purely ab initio way. In other words, MEAMT is an efficient optimization method for MOO, which simultaneously explores the search space and the objective space. MEAMT can predict structures with one or two domains with RMSDs comparable to values obtained by recently developed ab initio methods (GAPFCG, I-PAES, and Quark) that use different levels of earlier knowledge. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by the GEVD and GPD work only asymptotically in most cases. This is particularly the case for the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions, and consequently why these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why the GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
Modelling 1-minute directional observations of the global irradiance.
NASA Astrophysics Data System (ADS)
Thejll, Peter; Pagh Nielsen, Kristian; Andersen, Elsa; Furbo, Simon
2016-04-01
Direct and diffuse irradiances from the sky have been collected at 1-minute intervals for about a year at the experimental station of the Technical University of Denmark for the IEA project "Solar Resource Assessment and Forecasting". These data were gathered by pyrheliometers tracking the Sun, as well as by apertured pyranometers gathering 1/8th and 1/16th of the light from the sky in 45-degree azimuthal ranges pointed around the compass. The data are gathered in order to develop detailed models of the potentially available solar energy and its variations at high temporal resolution, and thereby to gain a more detailed understanding of the solar resource. This is important for a better understanding of the sub-grid-scale cloud variation that cannot be resolved by climate and weather models. It is also important for optimizing the operation of active solar energy systems, such as photovoltaic plants and thermal solar collector arrays, and for passive solar energy and lighting of buildings. We present regression-based modelling of the observed data and focus here on the statistical properties of the model fits. Using models based, on the one hand, on what is found in the literature and on physical expectations and, on the other hand, on purely statistical models, we find solutions that can explain up to 90% of the variance in global radiation. The models leaning on physical insights include terms for the direct solar radiation, a term for the circum-solar radiation, a diffuse term, and a term for the horizon brightening/darkening. The purely statistical model is found using data- and formula-validation approaches that pick model expressions from a general catalogue of possible formulae. The method allows nesting of expressions, and the results found are dependent on, and heavily constrained by, the cross-validation carried out on statistically independent testing and training data sets. Slightly better fits, in terms of variance explained, are found using the purely statistical fitting/searching approach. We describe the methods applied and the results found, and discuss the different potentials of the physics-based and statistics-only model searches.
NASA Astrophysics Data System (ADS)
Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio
2013-05-01
Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
Corrosion Analysis of an Experimental Noble Alloy on Commercially Pure Titanium Dental Implants
Bortagaray, Manuel Alberto; Ibañez, Claudio Arturo Antonio; Ibañez, Maria Constanza; Ibañez, Juan Carlos
2016-01-01
Objective: To determine whether the Noble Bond® Argen® alloy is electrochemically suitable for the manufacturing of prosthetic superstructures over commercially pure titanium (c.p. Ti) implants. The electrolytic corrosion effects on three types of materials used in prosthetic suprastructures coupled with titanium implants were also analysed: Noble Bond® (Argen®), Argelite 76sf+® (Argen®), and commercially pure titanium. Materials and Methods: 15 samples were studied, each consisting of one abutment and one c.p. titanium implant. They were divided into three groups: control group: five c.p. Ti abutments (B&W®); test group 1: five Noble Bond® (Argen®) cast abutments; and test group 2: five Argelite 76sf+® (Argen®) abutments. In order to observe the corrosion effects, the surface topography was imaged using a confocal microscope. Three metric parameters (Sa: arithmetical mean height of the surface; Sp: maximum height of peaks; Sv: maximum height of valleys) were measured at three different areas: abutment neck, implant neck, and implant body. The samples were immersed in artificial saliva for 3 months, after which the procedure was repeated. The metric parameters were compared by statistical analysis. Results: The analysis of Sa at the level of the implant neck, abutment neck, and implant body showed no statistically significant differences when combining c.p. Ti implants with the three studied alloys. Sp and Sv likewise showed no statistically significant differences between the three alloys. Conclusion: The effects of electrogalvanic corrosion on each of the materials used in contact with c.p. Ti showed no statistically significant differences. PMID:27733875
Measured, modeled, and causal conceptions of fitness
Abrams, Marshall
2012-01-01
This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804
NASA Astrophysics Data System (ADS)
Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.
2016-06-01
In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithms, were developed for the simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least-squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for the determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL-1 for CXZ, ACF and PAR, in order, by both PCCA and MCR, while the PLS model was built for the three compounds each in the range of 2-10 μg mL-1. The results obtained from the proposed methods were statistically compared with a reported one. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. They are found suitable for the determination of the studied drugs in bulk powder and tablets.
A Space–Time Permutation Scan Statistic for Disease Outbreak Detection
Kulldorff, Martin; Heffernan, Richard; Hartman, Jessica; Assunção, Renato; Mostashari, Farzad
2005-01-01
Background The ability to detect disease outbreaks early is important in order to minimize morbidity and mortality through timely implementation of disease prevention and control measures. Many national, state, and local health departments are launching disease surveillance systems with daily analyses of hospital emergency department visits, ambulance dispatch calls, or pharmacy sales for which population-at-risk information is unavailable or irrelevant. Methods and Findings We propose a prospective space–time permutation scan statistic for the early detection of disease outbreaks that uses only case numbers, with no need for population-at-risk data. It makes minimal assumptions about the time, geographical location, or size of the outbreak, and it adjusts for natural purely spatial and purely temporal variation. The new method was evaluated using daily analyses of hospital emergency department visits in New York City. Four of the five strongest signals were likely local precursors to citywide outbreaks due to rotavirus, norovirus, and influenza. The number of false signals was at most modest. Conclusion If such results hold up over longer study times and in other locations, the space–time permutation scan statistic will be an important tool for local and national health departments that are setting up early disease detection surveillance systems. PMID:15719066
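As a concrete illustration of the permutation model described above, the sketch below (Python, with an assumed array layout; not the authors' SaTScan implementation) computes the expected case count inside one space-time cylinder from the marginal zone and day totals, together with the Poisson generalized log-likelihood ratio commonly used to rank cylinders.

```python
import numpy as np

def permutation_scan_glr(cases, zones_in_cylinder, days_in_cylinder):
    """Expected count and log-GLR for one space-time cylinder under a
    permutation model using only case counts. `cases` is a
    (n_zones, n_days) array; no population-at-risk data are needed."""
    C = cases.sum()                 # total number of cases
    cz = cases.sum(axis=1)          # cases per zone, summed over days
    cd = cases.sum(axis=0)          # cases per day, summed over zones
    # Expected count inside the cylinder if space and time are independent.
    mu = cz[zones_in_cylinder].sum() * cd[days_in_cylinder].sum() / C
    c = cases[np.ix_(zones_in_cylinder, days_in_cylinder)].sum()
    if c <= mu:                     # only excesses can signal an outbreak
        return mu, 0.0
    # Poisson generalized likelihood ratio, in log form.
    return mu, c * np.log(c / mu) + (C - c) * np.log((C - c) / (C - mu))
```

Significance would then be assessed by Monte Carlo, shuffling the day labels of the cases and recording the maximum statistic over all cylinders in each shuffle.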
NASA Astrophysics Data System (ADS)
Obraztsov, S. M.; Konobeev, Yu. V.; Birzhevoy, G. A.; Rachkov, V. I.
2006-12-01
The dependence of the mechanical properties of ferritic/martensitic (F/M) steels on irradiation temperature is of interest because these steels are used as structural materials for fast reactors, fusion reactors, and accelerator-driven systems. Experimental data demonstrating temperature peaks in the physical and mechanical properties of neutron-irradiated pure iron, nickel, vanadium, and austenitic stainless steels are available in the literature. The lack of such information for F/M steels forces one to apply computational mathematical-statistical modeling methods. The bootstrap procedure is one such method, allowing one to obtain the necessary statistical characteristics using only a sample of limited size. In the present work this procedure is used to model the frequency distribution histograms of ultimate strength temperature peaks in pure iron and the Russian F/M steels EP-450 and EP-823. Results of fitting sums of Lorentz or Gauss functions to the calculated distributions are presented. It is concluded that there are two temperature peaks (at 360 and 390 °C) of the ultimate strength in EP-450 steel and a single peak at 390 °C in EP-823.
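A minimal sketch of the bootstrap step described above (Python; the variable names and the peak-location rule are assumptions, since the abstract does not give the code): resample the measured (temperature, strength) pairs with replacement and histogram the temperature at which the maximum ultimate strength occurs in each resample.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_peak_histogram(temps, strength, n_boot=10000, bins=20):
    """Build the frequency distribution of the peak temperature by
    resampling the limited (temperature, strength) sample with
    replacement and recording the temperature of maximum strength."""
    n = len(temps)
    peaks = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample with replacement
        peaks[b] = temps[idx][np.argmax(strength[idx])]
    return np.histogram(peaks, bins=bins)
```

Sums of Gauss or Lorentz functions can then be fitted to the resulting histogram, for instance with scipy.optimize.curve_fit.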
Topological charge and cooling scales in pure SU(2) lattice gauge theory
NASA Astrophysics Data System (ADS)
Berg, Bernd A.; Clarke, David A.
2018-03-01
Using Monte Carlo simulations with overrelaxation, we have equilibrated lattices up to β = 2.928, size 60^4, for pure SU(2) lattice gauge theory with the Wilson action. We calculate topological charges with the standard cooling method and find that they become more reliable with increasing β values and lattice sizes. Continuum limit estimates of the topological susceptibility χ are obtained, of which we favor χ^{1/4}/T_c = 0.643(12), where T_c is the SU(2) deconfinement temperature. Differences between cooling length scales in different topological sectors turn out to be too small to be detectable within our statistical errors.
ERIC Educational Resources Information Center
Sun, Shuyan; Pan, Wei
2014-01-01
As applications of multilevel modelling in educational research increase, researchers realize that multilevel data collected in many educational settings are often not purely nested. The most common multilevel non-nested data structure is one that involves student mobility in longitudinal studies. This article provides a methodological review of…
Calculations of the surface tensions of liquid metals
NASA Technical Reports Server (NTRS)
Stroud, D. G.
1981-01-01
The understanding of the surface tension of liquid metals and alloys from as close to first principles as possible is discussed. The two ingredients combined in these calculations are the electron theory of metals and the classical theory of liquids, as worked out within the framework of statistical mechanics. The result is a new theory of surface tensions and surface density profiles based purely on knowledge of the bulk properties of the coexisting liquid and vapor phases. The method is found to work well for the pure liquid metals on which it was tested; the work is being extended to mixtures of liquid metals, interfaces between immiscible liquid metals, and the temperature derivative of the surface tension.
Verma, Amit; Bhani, Deepa; Tomar, Vinay; Bachhiwal, Rekha; Yadav, Shersingh
2016-06-01
Catheter-associated urinary tract infections (CAUTI) are one of the most common causes of nosocomial infections. Many bacterial species show biofilm production, which provides them a survival benefit by protecting them from environmental stresses and decreasing their susceptibility to antimicrobial agents. The two most common types of catheters used in our setup are the pure silicone catheter and the silicone-coated latex catheter. The advantage of the pure silicone catheter for long-term catheterization is well established, but there is still controversy about any advantage of the silicone catheter regarding bacterial colonization rates and the biofilm production property of the colonizing bacteria. The aim of our study was to compare bacterial colonization and the biofilm formation property of the colonizing bacteria in patients with indwelling pure silicone and silicone-coated latex catheters. This prospective observational study was conducted in the Urology Department of our institute. Patients who needed catheterization for more than 5 days during the period July 2015 to January 2016 and had sterile precatheterisation urine were included in the study. Patients were divided into 2 groups of 50 patients each: Group A with the pure silicone catheter and Group B with the silicone-coated latex catheter. Urine culture was done on the 6th day of indwelling urinary catheter drainage. If growth was detected, the bacterium was tested for biofilm production by the tissue culture plate method. Statistical analyses were performed using the Statistical Package for the Social Sciences Version 22 (SPSS-22). After 5 days of indwelling catheterization, the pure silicone catheter had significantly less bacterial colonization than the silicone-coated latex catheter (p-value=0.03), and the biofilm-forming property of colonizing bacteria was also significantly less frequent with the pure silicone catheter than with the silicone-coated latex catheter (p-value=0.02). There were no significant differences in the colonizing bacteria between the 2 groups; in both groups the most common bacterium was Escherichia coli. The pure silicone catheter is thus advantageous over the silicone-coated latex catheter in terms of the incidence of bacterial colonization as well as biofilm formation, and hence in the management of CAUTI.
Quantum thermalization through entanglement in an isolated many-body system.
Kaufman, Adam M; Tai, M Eric; Lukin, Alexander; Rispoli, Matthew; Schittko, Robert; Preiss, Philipp M; Greiner, Markus
2016-08-19
Statistical mechanics relies on the maximization of entropy in a system at thermal equilibrium. However, an isolated quantum many-body system initialized in a pure state remains pure during Schrödinger evolution, and in this sense it has static, zero entropy. We experimentally studied the emergence of statistical mechanics in a quantum state and observed the fundamental role of quantum entanglement in facilitating this emergence. Microscopy of an evolving quantum system indicates that the full quantum state remains pure, whereas thermalization occurs on a local scale. We directly measured entanglement entropy, which assumes the role of the thermal entropy in thermalization. The entanglement creates local entropy that validates the use of statistical physics for local observables. Our measurements are consistent with the eigenstate thermalization hypothesis. Copyright © 2016, American Association for the Advancement of Science.
Cielecka-Piontek, Judyta
2013-07-01
A simple and selective derivative spectrophotometric method was developed for the quantitative determination of faropenem in pure form and in pharmaceutical dosage forms. The method is based on the zero-crossing effect of first-derivative spectrophotometry (λ = 324 nm), which eliminates the overlapping effect caused by the excipients present in the pharmaceutical preparation, as well as by degradation products formed during hydrolysis, oxidation, photolysis, and thermolysis. The method was linear in the concentration range 2.5-300 μg/mL (r = 0.9989) at λ = 341 nm; the limits of detection and quantitation were 0.16 and 0.46 μg/mL, respectively. The method had good precision (relative standard deviation from 0.68 to 2.13%). Recovery of faropenem ranged from 97.9 to 101.3%. The first-order rate constants of the degradation of faropenem in pure form and in pharmaceutical dosage forms were determined by using first-derivative spectrophotometry. A statistical comparison of the validation results and the observed rate constants for faropenem degradation with those obtained with the high-performance liquid chromatography method demonstrated that the two were compatible.
Significance of noisy signals in periodograms
NASA Astrophysics Data System (ADS)
Süveges, Maria
2015-08-01
The detection of tiny periodic signals in noisy and irregularly sampled time series is a challenging task. Once a small peak is found in the periodogram, the next step is to see how probable it is that pure noise produced a peak so extreme, that is to say, to compute its False Alarm Probability (FAP). This useful measure quantifies the statistical plausibility of the found signal among the noise. However, its derivation from statistical principles is very hard due to the specificities of astronomical periodograms, such as oversampling and the ensuing strong correlation among its values at different frequencies. I will present a method to compute the FAP based on extreme-value statistics (Süveges 2014), and compare it to two other methods, proposed by Baluev (2008) and by Paltani (2004) and Schwarzenberg-Czerny (2012), on signals with various shapes and at different signal-to-noise ratios.
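For readers wanting a baseline to experiment with, the snippet below (Python, using astropy's Lomb-Scargle implementation on synthetic data) computes the FAP of the highest periodogram peak with the analytic Baluev (2008) approximation and with a brute-force bootstrap; the extreme-value method of Süveges (2014) is not part of astropy and is not shown here.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Illustrative FAP computation for an irregularly sampled noisy series.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 300))            # irregular sampling times
y = 0.05 * np.sin(2 * np.pi * t / 3.7) + rng.normal(0, 1.0, t.size)

ls = LombScargle(t, y)
freq, power = ls.autopower()
# Analytic FAP following Baluev (2008), and a bootstrap estimate in which
# pure-noise periodograms are simulated by resampling the observations.
fap_baluev = ls.false_alarm_probability(power.max(), method="baluev")
fap_boot = ls.false_alarm_probability(power.max(), method="bootstrap")
print(fap_baluev, fap_boot)
```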
Amer, Sawsan M; Abbas, Samah S; Shehata, Mostafa A; Ali, Nahed M
2008-01-01
A simple and reliable high-performance liquid chromatographic method was developed for the simultaneous determination of a mixture of phenylephrine hydrochloride (PHENYL), guaifenesin (GUAIF), and chlorpheniramine maleate (CHLO), either in pure form or in the presence of methylparaben and propylparaben, in a commercial cough syrup dosage form. Separation was achieved on a C8 column using 0.005 M heptane sulfonic acid sodium salt (pH 3.4 +/- 0.1) and acetonitrile as a mobile phase by gradient elution at different flow rates, and detection was performed spectrophotometrically at 210 nm. Linear relationships in the ranges of 30-180, 120-1800, and 10-60 microg/mL were obtained for PHENYL, GUAIF, and CHLO, respectively. The results were statistically analyzed and compared with those obtained by applying the British Pharmacopoeia (2002) method, showing that the proposed method is precise, accurate, and can be easily applied for the determination of the drugs under investigation in pure form and in cough syrup formulations.
Nonlinear scalar forcing based on a reaction analogy
NASA Astrophysics Data System (ADS)
Daniel, Don; Livescu, Daniel
2017-11-01
We present a novel reaction analogy (RA) based forcing method for generating stationary passive scalar fields in incompressible turbulence. The new method can produce more general scalar PDFs (e.g. double-delta) than current methods, while ensuring that scalar fields remain bounded, unlike existing forcing methodologies that can potentially violate naturally occurring bounds. Such features are useful for generating initial fields in non-premixed combustion or for studying non-Gaussian scalar turbulence. The RA method mathematically models hypothetical chemical reactions that convert reactants in a mixed state back into their pure unmixed components. Various types of chemical reactions are formulated and the corresponding mathematical expressions derived. For large values of the scalar dissipation rate, the method produces statistically steady double-delta scalar PDFs. Gaussian scalar statistics are recovered for small values of the scalar dissipation rate. In contrast, classical forcing methods consistently produce unimodal Gaussian scalar fields. The ability of the new method to produce fully developed scalar fields is discussed using 256^3, 512^3, and 1024^3 periodic box simulations.
NASA Astrophysics Data System (ADS)
Hegazy, Maha Abdel Monem; Fayez, Yasmin Mohammed
2015-04-01
Two different methods manipulating spectrophotometric data have been developed, validated and compared. One is capable of removing the signal of any interfering components at the selected wavelength of the component of interest (univariate); the other includes more variables and extracts maximum information to determine the component of interest in the presence of other components (multivariate). The applied methods are smart, simple, accurate, sensitive, precise and capable of determining the spectrally overlapped antihypertensives hydrochlorothiazide (HCT), irbesartan (IRB) and candesartan (CAN). Mean centering of ratio spectra (MCR) and the concentration residual augmented classical least-squares method (CRACLS) were developed and their efficiencies were compared. CRACLS is a simple method that is capable of extracting the pure spectral profiles of each component in a mixture. The correlation between the estimated and pure spectra was calculated and found to be 0.9998, 0.9987 and 0.9992 for HCT, IRB and CAN, respectively. The methods successfully determined the three components in bulk powder, laboratory-prepared mixtures, and combined dosage forms. The results obtained were compared statistically with each other and with those of the official methods.
NASA Astrophysics Data System (ADS)
Ashour, Safwan; Bayram, Roula
2015-04-01
A new, accurate, sensitive and reliable kinetic spectrophotometric method for the assay of moxifloxacin hydrochloride (MOXF) in pure form and pharmaceutical formulations has been developed. The method involves the oxidative coupling reaction of MOXF with 3-methyl-2-benzothiazolinone hydrazone hydrochloride monohydrate (MBTH) in the presence of Ce(IV) in an acidic medium to form a colored product with λmax at 623 and 660 nm. The reaction is followed spectrophotometrically by measuring the increase in absorbance at 623 nm as a function of time. The initial rate and fixed time methods were adopted for constructing the calibration curves. The linearity range was found to be 1.89-40.0 μg mL-1 for both the initial rate and fixed time methods. The limits of detection for the initial rate and fixed time methods are 0.644 and 0.043 μg mL-1, respectively. The molar absorptivity for the method was found to be 0.89 × 10^4 L mol-1 cm-1. Statistical treatment of the experimental results indicates that the methods are precise and accurate. The proposed method has been applied successfully to the estimation of moxifloxacin hydrochloride in tablet dosage form with no interference from the excipients. The results are compared with those of the official method.
Colorimetric microdetermination of captopril in pure form and in pharmaceutical formulations
NASA Astrophysics Data System (ADS)
Shama, Sayed Ahmed; El-Sayed Amin, Alla; Omara, Hany
2006-11-01
A simple, rapid, accurate, precise and sensitive colorimetric method for the determination of captopril (CAP) in bulk samples and in dosage forms is described. The method is based on oxidation of the drug by potassium permanganate in acidic medium and determination of the unreacted oxidant by measuring the decrease in absorbance of five different dyes: methylene blue (MB), acid blue 74 (AB), acid red 73 (AR), amaranth dye (AM) and acid orange 7 (AO), at a suitable λmax (660, 610, 510, 520, and 485 nm, respectively). Regression analysis of Beer's plots showed good correlation in the concentration ranges 0.4-12.5, 0.3-10, 0.5-11, 0.4-8.3 and 0.5-9.3 μg mL-1, respectively. The apparent molar absorptivity, Sandell sensitivity, and detection and quantitation limits were calculated. For more accurate results, the Ringbom optimum concentration ranges were 0.5-12, 0.5-9.6, 0.6-10.5, 0.5-8.0 and 0.7-9.0 μg mL-1, respectively. The validity of the proposed method was tested by analyzing pure and dosage forms containing CAP, whether alone or in combination with hydrochlorothiazide. Statistical analysis of the results reflects that the proposed procedures are precise, accurate and easily applicable for the determination of CAP in pure form and in pharmaceutical preparations. Also, the stability constant was determined and the free energy change was calculated potentiometrically.
Statistical analysis and digital processing of the Mössbauer spectra
NASA Astrophysics Data System (ADS)
Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri
2010-02-01
This work is focused on the use of statistical methods and the development of filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in measured spectra are used in many scientific areas. The use of a purely statistical approach to the filtration of accumulated Mössbauer spectra is described. In Mössbauer spectroscopy, the noise can be considered a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photon counting with electronic noise (from γ-ray detection and discrimination units) and of the velocity system quality, which can be characterized by velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to identify the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The theoretical correlation coefficient level corresponding to the spectrum resolution is estimated. The correlation coefficient test is based on a comparison of the theoretical and experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of the method is subject to binding conditions.
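The core of such a periodogram filter can be illustrated in a few lines (Python; a simplified stand-in for the authors' procedure, with a fixed quantile threshold in place of their feedback-controlled Spearman correlation test):

```python
import numpy as np

def periodogram_filter(counts, keep_fraction=0.02):
    """Keep only the strongest Fourier components of the accumulated
    spectrum and zero out the rest, suppressing Poisson counting noise
    while preserving the resonance lines."""
    F = np.fft.rfft(counts)
    power = np.abs(F) ** 2
    threshold = np.quantile(power, 1.0 - keep_fraction)
    F[power < threshold] = 0.0       # discard insignificant components
    return np.fft.irfft(F, n=len(counts))
```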
NASA Astrophysics Data System (ADS)
Dong, J.; Liu, W.; Han, W.; Lei, T.; Xia, J.; Yuan, W.
2017-12-01
Winter wheat is a staple food crop for most of the world's population, and the area and spatial distribution of winter wheat are key elements in estimating crop production and ensuring food security. However, winter wheat planting areas contain substantial spatial heterogeneity with mixed pixels for coarse- and moderate-resolution satellite data, leading to significant errors in crop acreage estimation. This study has developed a phenology-based approach using moderate-resolution satellite data to estimate sub-pixel planting fractions of winter wheat. Based on unmanned aerial vehicle (UAV) observations, the unique characteristics of winter wheat with high vegetation index values at the heading stage (May) and low values at the harvest stage (June) were investigated. The differences in vegetation index between heading and harvest stages increased with the planting fraction of winter wheat, and therefore the planting fractions were estimated by comparing the NDVI differences of a given pixel with those of predetermined pure winter wheat and non-winter wheat pixels. This approach was evaluated using aerial images and agricultural statistical data in an intensive agricultural region, Shandong Province in North China. The method explained 60% and 85% of the spatial variation in county- and municipal-level statistical data, respectively. More importantly, the predetermined pure winter wheat and non-winter wheat pixels can be automatically identified using MODIS data according to their NDVI differences, which strengthens the potential to use this method at regional and global scales without any field observations as references.
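The sub-pixel estimate itself reduces to a linear unmixing of the heading-to-harvest NDVI difference between two end members. A hedged sketch follows (Python; function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def wheat_fraction(ndvi_heading, ndvi_harvest, d_pure, d_nonwheat):
    """Phenology-based unmixing: the NDVI drop between heading (May) and
    harvest (June) is assumed to scale linearly between the non-wheat and
    pure winter-wheat end members, whose NDVI differences are `d_nonwheat`
    and `d_pure`, respectively."""
    diff = ndvi_heading - ndvi_harvest
    frac = (diff - d_nonwheat) / (d_pure - d_nonwheat)
    return np.clip(frac, 0.0, 1.0)   # enforce physical bounds on a fraction
```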
Abdulrahman, Sameer A. M.; Basavaiah, Kanakapura
2011-01-01
Two simple and selective spectrophotometric methods have been proposed for the determination of gabapentin (GBP) in pure form and in capsules. Both methods are based on the proton transfer from the Lewis acid such as 2,4,6-trinitrophenol (picric acid; PA) or 2,4-dinitrophenol (2,4-DNP) to the primary amino group of GBP which works as Lewis base and formation of yellow ion-pair complexes. The ion-pair complexes formed show absorption maximum at 415 and 420 nm for PA and 2,4-DNP, respectively. Under the optimized experimental conditions, Beer's law is obeyed over the concentration ranges of 1.25–15.0 and 2.0–18.0 μg mL−1 GBP for PA and 2,4-DNP methods, respectively. The molar absorptivity, Sandell's sensitivity, detection and, quantification limits for both methods are also reported. The proposed methods were applied successfully to the determination of GBP in pure form and commercial capsules. Statistical comparison of the results was performed using Student's t-test and F-ratio at 95% confidence level, and there was no significant difference between the reference and proposed methods with regard to accuracy and precision. Further, the validity of the proposed methods was confirmed by recovery studies via standard addition technique. PMID:21760787
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaurov, Alexander A., E-mail: kaurov@uchicago.edu
The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. It then allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, while reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.
Monitoring of an antigen manufacturing process.
Zavatti, Vanessa; Budman, Hector; Legge, Raymond; Tamer, Melih
2016-06-01
Fluorescence spectroscopy in combination with multivariate statistical methods was employed as a tool for monitoring the manufacturing process of pertactin (PRN), one of the virulence factors of Bordetella pertussis utilized in whooping cough vaccines. Fluorophores such as amino acids and co-enzymes were detected throughout the process. The fluorescence data collected at different stages of the fermentation and purification process were treated employing principal component analysis (PCA). Through PCA, it was feasible to identify sources of variability in PRN production. Partial least squares (PLS) regression was then employed to correlate the fluorescence spectra obtained from pure PRN samples with the final protein content of these samples measured by a Kjeldahl test. Given that a statistically significant correlation was found between fluorescence and PRN levels, this approach could be further used as a method to predict the final protein content.
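A minimal sketch of this two-step chemometric pipeline (Python with scikit-learn; the array shapes and component counts are assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

def monitor_process(spectra, protein):
    """spectra: (n_samples, n_wavelengths) fluorescence intensities;
    protein: Kjeldahl protein content of the pure-PRN samples.
    Returns PCA scores (to map sources of variability) and a fitted
    PLS model (to predict protein content from spectra)."""
    scores = PCA(n_components=3).fit_transform(spectra)
    pls = PLSRegression(n_components=3).fit(spectra, protein)
    return scores, pls

# A fitted `pls` model predicts the protein content of new batches via
# pls.predict(new_spectra).
```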
Study of photonuclear muon interactions at Baksan underground scintillation telescope
NASA Technical Reports Server (NTRS)
Bakatanov, V. N.; Chudakov, A. E.; Dadykin, V. L.; Novoseltsev, Y. F.; Achkasov, V. M.; Semenov, A. M.; Stenkin, Y. V.
1985-01-01
The method of recording pion-muon-electron decays was used to distinguish between purely electron-photon and hadronic cascades induced by high-energy muons underground. At an energy of approximately 1 TeV, the ratio of the number of hadronic to electromagnetic cascades was found to be 0.11 ± 0.03, in agreement with expectation. However, at an energy of approximately 4 TeV, a sharp increase of this ratio was indicated, though not statistically significant (0.52 ± 0.13).
NASA Astrophysics Data System (ADS)
Salem, A. A.; Mossa, H. A.; Barsoum, B. N.
2005-11-01
Rapid, specific and simple methods for determining the levofloxacin and rifampicin antibiotic drugs in pharmaceutical and human urine samples were developed. The methods are based on 1H NMR spectroscopy using maleic acid as an internal standard and DMSO-d6 as the NMR solvent. Integration of the NMR signals at 8.9 and 8.2 ppm was used for calculating the concentrations per unit dose of levofloxacin and rifampicin, respectively, with the maleic acid signal at 6.2 ppm as the reference. Recoveries of (97.0-99.4) ± 0.5% and (98.3-99.7) ± 1.08% were obtained for pure levofloxacin and rifampicin, respectively; corresponding recoveries of 98.5-100.3% and 96.8-100.0% were obtained in pharmaceutical capsules and urine samples. Relative standard deviation (R.S.D.) values ≤2.7% were obtained for the analyzed drugs in pure, pharmaceutical and urine samples. The statistical Student's t-test gave t-values ≤2.87, indicating an insignificant difference between the real and experimental values at the 95% confidence level. An F-test revealed insignificant differences in precision between the developed NMR methods and the fluorimetric and HPLC methods for analyzing levofloxacin and rifampicin.
Statistical theory of chromatography: new outlooks for affinity chromatography.
Denizot, F C; Delaage, M A
1975-01-01
We have developed further the statistical approach to chromatography initiated by Giddings and Eyring, and applied it to affinity chromatography. By means of a convenient expression for the moments, the convergence towards the Laplace-Gauss distribution has been established. The Gaussian character is not preserved if other causes of dispersion are taken into account, but expressions for the moments can be obtained in a generalized form. A simple procedure is deduced for expressing the fundamental constants of the model in terms of purely experimental quantities. Thus, affinity chromatography can be used to determine rate constants of association and dissociation in a range usually considered the domain of stopped-flow methods. PMID:1061072
Research on the Applicable Method of Valuation of Pure Electric Used vehicles
NASA Astrophysics Data System (ADS)
Cai, yun; Tan, zhengping; Wang, yidong; Mao, pan
2018-03-01
With the rapid growth in the ownership of pure electric vehicles, research on the valuation of used electric vehicles has become key to the development of the pure electric used-vehicle market. This paper analyzes the application of three value assessment methods, the current market price method, the capitalized earnings method, and the replacement cost method, to pure electric used vehicles, and concludes that the replacement cost method is more suitable for pure electric used cars. The paper also explores parametric corrections to the components of replacement cost, targeted at the characteristics of pure electric vehicles. Through an analysis of the applicability of the parameters for physical depreciation, functional depreciation, and economic depreciation, the revised replacement cost method can be used for the valuation of pure electric used vehicles for private use.
Brooks, V J; De Wolfe, T J; Paulus, T J; Xu, J; Cai, J; Keuler, N S; Godbee, R G; Peek, S F; McGuirk, S M; Darien, B J
2012-01-01
We have previously reported that Morinda citrifolia (noni) puree modulates neonatal calves' developmental maturation of the innate and adaptive immune system. In this study, the effect of noni puree on respiratory and gastrointestinal (GI) health in preweaned dairy calves on a farm with endemic salmonellosis was examined. Two clinical trials were conducted, each evaluating one processing technique of noni puree: trials 1 and 2 tested noni versions A and B, respectively. Puree analysis and trial methods were identical between trials, with the calf as the experimental unit. Calves were assigned to 1 of 3 treatment groups in each trial and received 0, 15 or 30 mL of noni supplement every 12 hr for the first 3 weeks of life. Health scores, weaning age, weight gain from admission to weaning, and the proportion weaned by 6 weeks were used as clinical endpoints for statistical analysis. In trial 1, calves supplemented with 15 mL of noni puree version A every 12 hr had a higher probability of being weaned by 6 weeks of age than control calves (P = 0.04). In trial 2, calves receiving 30 mL of version B every 12 hr had a 54.5% reduction in total medical treatments by 42 days of age when compared to controls (P = 0.02). There was a trend toward reduced respiratory (61%) and GI (52%) medical treatments per calf when compared to controls (P = 0.06 and 0.08, respectively). There were no differences in weight gain or mortality for any treatment group in either trial.
Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S
2008-01-01
Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously made by many other methods. A new feature of our method is that the statistics from one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal user input that is reusable makes this a powerful technique for the segmentation of medical images.
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
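As a rough illustration of the sampling scheme, a random-walk Metropolis sampler suffices: the chain below targets a toy Gaussian posterior standing in for the tomographic likelihood (the actual reconstruction parameters would be density-matrix or Wigner-function coefficients), and the marginal means, uncertainties and correlations are read directly off the chain. All model choices here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for quadrature measurements of a squeezed state.
data = rng.normal(0.3, 1.2, size=500)

def log_posterior(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Flat priors; a Gaussian likelihood stands in for the tomography model.
    return -data.size * log_sigma - np.sum((data - mu) ** 2) / (2 * sigma ** 2)

def metropolis(log_post, theta0, steps=20000, scale=0.05):
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = np.empty((steps, theta.size))
    for i in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept rule
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_posterior, [0.0, 0.0])[5000:]    # drop burn-in
# Marginals, uncertainties and correlations come straight from the chain.
print(chain.mean(axis=0), chain.std(axis=0))
print(np.corrcoef(chain.T))
```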
Mitigating the impact of the DESI fiber assignment on galaxy clustering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burden, Angela; Padmanabhan, Nikhil; Cahn, Robert N.
2017-03-01
We present a simple strategy to mitigate the impact of an incomplete spectroscopic redshift galaxy sample as a result of fiber assignment and survey tiling. The method has been designed for the Dark Energy Spectroscopic Instrument (DESI) galaxy survey but may have applications beyond this. We propose a modification to the usual correlation function that nulls the almost purely angular modes affected by survey incompleteness due to fiber assignment. Predictions of this modified statistic can be calculated given a model of the two point correlation function. The new statistic can be computed with a slight modification to the data catalogues input to the standard correlation function code and does not incur any additional computational time. Finally we show that the spherically averaged baryon acoustic oscillation signal is not biased by the new statistic.
Running coupling constant from lattice studies of gluon and ghost propagators
NASA Astrophysics Data System (ADS)
Cucchieri, A.; Mendes, T.
2004-12-01
We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical error and hyper-cubic effects are very small. Our present result for Λ_MSbar is 200^{+60}_{-40} MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
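A minimal sketch of the synthetic-likelihood recipe follows, using a noisy Ricker-type map with Poisson observation (in the spirit of the paper's ecological examples) as the simulator: the raw series is reduced to phase-insensitive summaries, repeated simulation gives their mean and covariance at a candidate parameter, and a Gaussian log-likelihood of the observed summaries scores the fit. The specific summaries and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=200):
    """Stand-in simulator: a noisy Ricker-type map with Poisson observation."""
    r, sig = theta
    x, y = 1.0, np.empty(n)
    for t in range(n):
        x = x * np.exp(r * (1.0 - x) + sig * rng.standard_normal())
        y[t] = rng.poisson(10.0 * x)
    return y

def summaries(y):
    """Phase-insensitive statistics: moments plus low-lag autocovariances."""
    d = y - y.mean()
    acv = [np.mean(d[:-k] * d[k:]) for k in (1, 2, 3)]
    return np.array([y.mean(), y.var(), ((d / y.std()) ** 3).mean(), *acv])

def synthetic_loglik(theta, s_obs, n_rep=200):
    S = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-8 * np.eye(S.shape[1])
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

s_obs = summaries(simulate((3.8, 0.3)))       # pretend these are the data
print(synthetic_loglik((3.8, 0.3), s_obs))    # fit at the generating values
print(synthetic_loglik((2.0, 0.3), s_obs))    # a clearly worse model
```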
Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses
NASA Technical Reports Server (NTRS)
Wijers, Ralph A. M. J.; Lubin, Lori M.
1994-01-01
We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H_0, that the distribution is a pure power law: F(x) varies as x^(-alpha). We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law where the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H_0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>C_max) - log C_max distribution.
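The published bending statistic B is not reproduced here, but the key property, a null distribution that depends only on the sample size, can be sketched with a hypothetical analog: compare maximum-likelihood slopes fitted to the lower and upper halves of the sorted sample. Because the Pareto family is closed under scale changes and power transforms, the ratio statistic below is pivotal under H_0, so its null distribution can be tabulated once per sample size by simulation. Everything here is an illustrative stand-in, not the published statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

def alpha_mle(x, xmin):
    """Maximum-likelihood index for a pure power law F(x) ~ x^(-alpha)."""
    return x.size / np.log(x / xmin).sum()

def bend_stat(x):
    """Hypothetical analog of a bending statistic: relative difference of
    slopes fitted to the lower and upper halves of the sorted sample."""
    x = np.sort(x)
    lo, hi = x[: x.size // 2], x[x.size // 2:]
    return alpha_mle(hi, hi[0]) / alpha_mle(lo, lo[0]) - 1.0

def pvalue(x, n_null=2000):
    """Two-sided Monte Carlo p-value. The statistic is pivotal under H0,
    so its null distribution depends only on the sample size."""
    b_obs = bend_stat(x)
    b_null = np.array([bend_stat(rng.pareto(1.0, x.size) + 1.0)
                       for _ in range(n_null)])
    frac = np.mean(b_null <= b_obs)
    return 2.0 * min(frac, 1.0 - frac)

pure = rng.pareto(1.5, 500) + 1.0        # H0 true: pure power law
bent = rng.lognormal(0.0, 1.5, 500)      # local slope varies slowly with x
print(pvalue(pure), pvalue(bent))
```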
NASA Astrophysics Data System (ADS)
Müller, M. F.; Thompson, S. E.
2016-02-01
The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
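For reference, the Nash-Sutcliffe score used above to benchmark the predicted FDCs is a one-liner; the flow values below are invented for illustration.

```python
import numpy as np

def nash_sutcliffe(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 matches the skill
    of simply predicting the observed mean."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Example: a predicted flow duration curve against the observed one, both
# evaluated at the same exceedance probabilities (values illustrative).
obs = np.array([120.0, 60.0, 35.0, 20.0, 11.0, 5.0])
sim = np.array([110.0, 64.0, 33.0, 21.0, 12.0, 4.5])
print(nash_sutcliffe(obs, sim))   # > 0.80 would meet the paper's benchmark
```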
Sanchez, Lilia Maria; Lalonde, Lucie; Trop, Isabelle; David, Julie; Mesurolle, Benoît
2017-01-01
Objective: To assess the impact on the final outcome at surgery of flat epithelial atypia (FEA) when found concomitantly with lobular neoplasia (LN) in biopsy specimens compared with pure biopsy-proven FEA. Methods: The approval of the institutional review board of the CHUM (Centre Hospitalier Universitaire de Montréal) was obtained. A retrospective review of our database between 2009 and 2013 identified 81 females (mean age 54 years, range 38–90 years) with 81 biopsy-proven FEA lesions. These were pure or associated with LN only in 59/81 (73%) and 22/81 (27%) cases, respectively. Overall, 57/81 (70%) patients underwent surgery and 24/81 (30%) patients underwent mammographic surveillance with a mean follow-up of 36 months. Results: FEA presented most often as microcalcifications, in 68/81 (84%) patients, which were mostly amorphous, in 49/68 (72%). After excluding radio-pathologically discordant cases, pure FEA proved to be malignant at surgery in 1/41 (2%; 95% confidence interval 0.06–12.9). There was no statistically significant difference in the upgrade to malignancy whether FEA lesions were pure or associated with LN at biopsy (p = 0.4245); however, when paired in biopsy specimens, these lesions were more frequently associated with atypical ductal hyperplasia (ADH) at surgery than pure FEA was (p = 0.012). Conclusion: Our results show a 2% upgrade rate to malignancy for pure FEA lesions. When FEA is found in association with LN at biopsy, surgical excision more frequently yields ADH than for pure FEA, thus warranting close surveillance or even surgical excision. Advances in knowledge: The association of LN with FEA at biopsy was more frequently associated with ADH at surgery than pure FEA was. If a biopsy-proven FEA lesion is deemed concordant with the imaging finding, when paired with LN at biopsy, careful surveillance or even surgical excision is suggested. PMID:28118035
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature-established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus to estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent agreement. This was attributed to the high quality of the measurements, which reduced the width of the correctly identified, noise-affected pure air distribution with respect to the turbulent mixing distribution. The ignitability of the atmospheric jet is determined using the flammability factor calculated from both kernel density estimated (KDE) PDFs and PDFs generated using the newly proposed model. Agreement between contours from both approaches is excellent. Ignitability of the under-expanded jet is also calculated using KDE PDFs. Contours are compared with those calculated by applying the atmospheric model to the under-expanded jet. Once again, agreement is excellent. This work demonstrates that self-similar scalar mixing statistics and ignitability of atmospheric jets can be accurately described by the proposed model. This description can be applied with confidence to under-expanded jets, which are more representative of leak and fuel injection scenarios.
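Under the proposed two-part model, the flammability factor reduces to the intermittency times the probability mass of the turbulent (beta) component between the flammability limits. The sketch below assumes the noise-affected pure-air part lies entirely below the lower limit and, for brevity, uses the standard two-parameter beta on [0, 1] rather than the paper's four-parameter form; all numbers are illustrative.

```python
from scipy.stats import beta

def flammability_factor(gamma, a, b, lfl=0.04, ufl=0.75):
    """Flammability factor under the two-part model: intermittency gamma
    weights the turbulent beta component; the noise-affected pure-air part
    is assumed to sit entirely below the lower flammability limit."""
    return gamma * (beta.cdf(ufl, a, b) - beta.cdf(lfl, a, b))

# Illustrative values; gamma and the beta shape parameters would be
# estimated at each measurement location from the noise-thresholded data.
print(flammability_factor(gamma=0.6, a=1.8, b=6.0))
```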
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
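The bootstrap step for the hydrological parameter can be sketched generically: resample the calibration series with replacement, re-estimate the parameter, and read a confidence interval off the resampled estimates. The "calibration" below is a placeholder statistic, not SHYREG's actual parameter.

```python
import numpy as np

rng = np.random.default_rng(4)

def calibrate(flows):
    """Stand-in calibration: a quantile-based scale factor; SHYREG's
    hydrological parameter is estimated differently."""
    return np.quantile(flows, 0.9) / np.median(flows)

def bootstrap_ci(flows, n_boot=2000, level=0.90):
    flows = np.asarray(flows, float)
    est = np.array([calibrate(rng.choice(flows, flows.size, replace=True))
                    for _ in range(n_boot)])
    lo, hi = np.quantile(est, [(1 - level) / 2, (1 + level) / 2])
    return calibrate(flows), (lo, hi)

flows = rng.gamma(shape=2.0, scale=15.0, size=40)   # 40 years of annual maxima
print(bootstrap_ci(flows))
```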
NASA Technical Reports Server (NTRS)
Ellis, David L.
2007-01-01
Room temperature tensile testing of Commercially Pure (CP) Titanium Grade 2 was conducted for as-received commercially produced sheet and following thermal exposure at 550 and 650 K for times up to 5,000 h. No significant changes in microstructure or failure mechanism were observed. A statistical analysis of the data was performed. Small statistical differences were found, but all properties were well above the minimum values for CP Ti Grade 2 as defined by ASTM standards and would likely fall within the normal variation of the material.
Herget, Meike; Scheibinger, Mirko; Guo, Zhaohua; Jan, Taha A; Adams, Christopher M; Cheng, Alan G; Heller, Stefan
2013-01-01
Mechanosensitive hair cells and supporting cells comprise the sensory epithelia of the inner ear. The paucity of both cell types has hampered molecular and cell biological studies, which often require large quantities of purified cells. Here, we report a strategy allowing the enrichment of relatively pure populations of vestibular hair cells and non-sensory cells including supporting cells. We utilized specific uptake of fluorescent styryl dyes for labeling of hair cells. Enzymatic isolation and flow cytometry were used to generate pure populations of sensory hair cells and non-sensory cells. We applied mass spectrometry to perform a qualitative high-resolution analysis of the proteomic makeup of both the hair cell and non-sensory cell populations. Our conservative analysis identified more than 600 proteins with a false discovery rate of <3% at the protein level and <1% at the peptide level. Analysis of proteins exclusively detected in either population revealed 64 proteins that were specific to hair cells and 103 proteins that were only detectable in non-sensory cells. Statistical analyses extended these groups by 53 proteins that are strongly upregulated in hair cells versus non-sensory cells and, vice versa, by 68 proteins. Our results demonstrate that enzymatic dissociation of styryl dye-labeled sensory hair cells and non-sensory cells is a valid method to generate sufficiently pure cell populations for flow cytometry and subsequent molecular analyses.
NASA Astrophysics Data System (ADS)
Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb
2014-01-01
The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived by using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E). This parameter represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method was used based on an adequate algorithm. The Langmuir model was adopted as a local adsorption isotherm. This model is developed by using the grand canonical ensemble, which allows defining the physico-chemical parameters involved in the adsorption process. The AED function is estimated by a normal Gaussian function. This method is applied to the adsorption isotherms of nitrogen, methane and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the gas molecules behaviour during the adsorption process and gives new physical interpretations at microscopic levels.
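The inversion of the adsorption integral equation can be sketched by discretizing the energy grid so that the Langmuir local isotherm becomes a kernel matrix, then solving for a non-negative N(E). The paper's exact algorithm is not specified here; non-negative least squares is one standard choice, and all constants and grids below are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

R, T = 8.314, 298.0                       # J/(mol K), K
p = np.logspace(-3, 2, 80)                # pressure grid (arbitrary units)
E = np.linspace(5e3, 4e4, 40)             # site-energy grid, J/mol
dE = E[1] - E[0]

# Local Langmuir isotherm as the kernel: K(E) = K0 * exp(E / RT).
K0 = 1e-6
Kp = K0 * np.exp(E / (R * T)) * p[:, None]
kernel = Kp / (1.0 + Kp)                  # shape (len(p), len(E))

# Synthetic "measured" isotherm from a Gaussian AED, plus a little noise.
rng = np.random.default_rng(5)
N_true = np.exp(-0.5 * ((E - 2e4) / 3e3) ** 2)
N_true /= N_true.sum() * dE               # normalize to unit coverage
q_meas = kernel @ (N_true * dE) + 1e-3 * rng.standard_normal(p.size)

# Invert the discretized integral equation with non-negativity enforced.
N_est, _ = nnls(kernel * dE, q_meas)
print(E[np.argmax(N_est)])                # roughly near the 20 kJ/mol peak
```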
Trends in Citations to Books on Epidemiological and Statistical Methods in the Biomedical Literature
Porta, Miquel; Vandenbroucke, Jan P.; Ioannidis, John P. A.; Sanz, Sergio; Fernandez, Esteve; Bhopal, Raj; Morabia, Alfredo; Victora, Cesar; Lopez, Tomàs
2013-01-01
Background There are no analyses of citations to books on epidemiological and statistical methods in the biomedical literature. Such analyses may shed light on how concepts and methods changed while biomedical research evolved. Our aim was to analyze the number and time trends of citations received from biomedical articles by books on epidemiological and statistical methods, and related disciplines. Methods and Findings The data source was the Web of Science. The study books were published between 1957 and 2010. The first year of publication of the citing articles was 1945. We identified 125 books that received at least 25 citations. Books first published in 1980–1989 had the highest total and median number of citations per year. Nine of the 10 most cited texts focused on statistical methods. Hosmer & Lemeshow's Applied logistic regression received the highest number of citations and highest average annual rate. It was followed by books by Fleiss, Armitage, et al., Rothman, et al., and Kalbfleisch and Prentice. Fifth in citations per year was Sackett, et al., Evidence-based medicine. The rise of multivariate methods, clinical epidemiology, or nutritional epidemiology was reflected in the citation trends. Educational textbooks, practice-oriented books, books on epidemiological substantive knowledge, and on theory and health policies were much less cited. None of the 25 top-cited books had the theoretical or sociopolitical scope of works by Cochrane, McKeown, Rose, or Morris. Conclusions Books were mainly cited to reference methods. Books first published in the 1980s continue to be most influential. Older books on theory and policies were rooted in societal and general medical concerns, while the most modern books are almost purely on methods. PMID:23667447
Pancorbo, Dario; Vazquez, Carlos; Fletcher, Mary Ann
2008-11-01
Previously, a novel formulation of vitamin C-lipid metabolites (PureWay-C) was shown to be more rapidly taken up by human T-lymphocytes and to more rapidly stimulate neurite outgrowth, fibroblast adhesion and inhibition of xenobiotic-induced T-cell hyperactivation. Here, PureWay-C serum levels were measured in healthy volunteers after oral supplementation. Plasma C-reactive protein and oxidized low density lipoprotein (LDL) levels were also measured. Healthy volunteers maintained a low vitamin C diet for 14 days and, following an overnight fast, received a single oral dose of 1000 mg of vitamin C as either ascorbic acid (AA), calcium ascorbate (CaA), vitamin C-lipid metabolites (PureWay-C), or calcium ascorbate-calcium threonate-dehydroascorbate (Ester-C). Blood samples were collected immediately prior to the oral dose administration and at various times post ingestion. Twenty-four-hour urine collections were saved for oxalate and uric acid assays. PureWay-C supplementation led to the highest absolute serum vitamin C levels when compared to AA, CaA and Ester-C. PureWay-C provided a statistically significantly greater serum level than calcium ascorbate at 1, 2, 4, and 6 hours post oral supplementation, whereas Ester-C showed a smaller but still statistically significant increase at only 1 and 4 hours. Oral supplementation with PureWay-C also led to a greater reduction in plasma C-reactive protein and oxidized LDL levels compared to the other vitamin C formulations. PureWay-C is more rapidly absorbed and leads to higher serum vitamin C levels and greater reduction of plasma levels of inflammatory and oxidative stress markers than other forms of vitamin C, including Ester-C.
Symbolic Processing Combined with Model-Based Reasoning
NASA Technical Reports Server (NTRS)
James, Mark
2009-01-01
A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.
Method of production of pure hydrogen near room temperature from aluminum-based hydride materials
Pecharsky, Vitalij K.; Balema, Viktor P.
2004-08-10
The present invention provides a cost-effective method of producing pure hydrogen gas from hydride-based solid materials. The hydride-based solid material is mechanically processed in the presence of a catalyst to obtain pure gaseous hydrogen. Unlike previous methods, hydrogen may be obtained from the solid material without heating, and without the addition of a solvent during processing. The described method of hydrogen production is useful for energy conversion and production technologies that consume pure gaseous hydrogen as a fuel.
NASA Technical Reports Server (NTRS)
Ellis, David L.
2012-01-01
Elevated-temperature tensile testing of commercially pure titanium (CP Ti) Grade 2 was conducted for as-received commercially produced sheet and following thermal exposure at 550 and 650 K (531 and 711 °F) for times up to 5000 h. The tensile testing revealed some statistical differences between the 11 thermal treatments, but most thermal treatments were statistically equivalent. Previous data from room temperature tensile testing were combined with the new data to allow regression and development of mathematical models relating tensile properties to temperature and thermal exposure. The results indicate that thermal exposure temperature has a very small effect, whereas the thermal exposure duration has no statistically significant effect on the tensile properties. These results indicate that CP Ti Grade 2 will be thermally stable and suitable for long-duration space missions.
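The regression step can be sketched as an ordinary least-squares fit of a tensile property against exposure temperature and duration; the design points and strength values below are placeholders, not the report's data.

```python
import numpy as np

# Placeholder design: exposure temperature (K), duration (h) and ultimate
# tensile strength (MPa); these are invented, not the report's data.
T_exp = np.array([295, 550, 550, 550, 650, 650, 650], float)
t_exp = np.array([0, 1000, 2500, 5000, 1000, 2500, 5000], float)
uts = np.array([480, 478, 476, 477, 474, 473, 471], float)

X = np.column_stack([np.ones_like(T_exp), T_exp, t_exp])
coef, *_ = np.linalg.lstsq(X, uts, rcond=None)
resid = uts - X @ coef
print(coef)                # intercept, temperature and duration coefficients
print(resid.std(ddof=3))   # residual scatter feeds t-tests on each term
```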
Tiossi, Rodrigo; Rodrigues, Renata Cristina Silveira; de Mattos, Maria da Glória Chiarello; Ribeiro, Ricardo Faria
2008-01-01
This study compared the vertical misfit of 3-unit implant-supported nickel-chromium (Ni-Cr) alloy, cobalt-chromium (Co-Cr) alloy and commercially pure titanium (cpTi) frameworks after casting as 1 piece, after sectioning and laser welding, and after simulated porcelain firings. The results on the tightened side showed no statistically significant differences. On the opposite side, statistically significant differences were found for Co-Cr alloy (118.64 microm [SD: 91.48] to 39.90 microm [SD: 27.13]) and cpTi (118.56 microm [51.35] to 27.87 microm [12.71]) when comparing 1-piece to laser-welded frameworks. With both sides tightened, only Co-Cr alloy showed statistically significant differences after laser welding. Ni-Cr alloy showed the lowest misfit values, though the differences were not statistically significant. Simulated porcelain firings revealed no significant differences.
Recent advances in statistical energy analysis
NASA Technical Reports Server (NTRS)
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary feature of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
Evolution of statistical properties for a nonlinearly propagating sinusoid.
Shepherd, Micah R; Gee, Kent L; Hanford, Amanda D
2011-07-01
The nonlinear propagation of a pure sinusoid is considered using time domain statistics. The probability density function, standard deviation, skewness, kurtosis, and crest factor are computed for both the amplitude and amplitude time derivatives as a function of distance. The amplitude statistics vary only in the postshock realm, while the amplitude derivative statistics vary rapidly in the preshock realm. The statistical analysis also suggests that the sawtooth onset distance can be considered to be earlier than previously realized. © 2011 Acoustical Society of America
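The amplitude and amplitude-derivative statistics reported here are straightforward to compute from a sampled waveform; the sketch below uses a steepened sinusoid as a stand-in for the nonlinearly propagated signal, with illustrative settings.

```python
import numpy as np
from scipy.stats import skew, kurtosis

fs = 50_000.0                              # sample rate, Hz (illustrative)
t = np.arange(0.0, 0.1, 1.0 / fs)
p = np.sin(2 * np.pi * 1000.0 * t) ** 3    # stand-in steepened waveform

def amplitude_stats(x, fs):
    dx = np.gradient(x) * fs               # amplitude time derivative
    out = {}
    for name, s in (("amplitude", x), ("derivative", dx)):
        out[name] = dict(std=s.std(),
                         skewness=skew(s),
                         kurtosis=kurtosis(s, fisher=False),
                         crest=np.abs(s).max() / s.std())   # peak over rms
    return out

for name, stats in amplitude_stats(p, fs).items():
    print(name, {k: round(v, 3) for k, v in stats.items()})
```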
Nanohole optical tweezers in heterogeneous mixture analysis
NASA Astrophysics Data System (ADS)
Hacohen, Noa; Ip, Candice J. X.; Laxminarayana, Gurunatha K.; DeWolf, Timothy S.; Gordon, Reuven
2017-08-01
Nanohole optical trapping is a tool that has been shown to analyze proteins at the single-molecule level using pure samples. The next step is to detect and study single molecules in dirty samples. We demonstrate that, using our double-nanohole optical tweezer configuration, single particles in an egg white solution can be classified when trapped. Different sized molecules produce different signal variations in their trapped state, allowing the proteins to be statistically characterized. Root-mean-squared variation and trap stiffness are used on the trapped signals to distinguish between the different proteins. This method of isolating and identifying single molecules in heterogeneous samples has great potential to become a reliable tool within the biomedical and scientific communities.
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
The possible modifications of the HISSE model for pure LANDSAT agricultural data
NASA Technical Reports Server (NTRS)
Peters, C.
1982-01-01
An idea, due to A. Feiveson, is presented for relaxing the assumption of class-conditional independence of LANDSAT spectral measurements within the same patch (field). Theoretical arguments are given which show that any significant refinement of the model beyond Feiveson's proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function that exhibits exponential decay with respect to spatial separation.
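The jointly Gaussian patch model with an exponentially decaying spatial covariance can be written down directly; the variance, decay length and patch size below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Pixel coordinates within one patch (field) on the LANDSAT grid.
coords = np.array([(i, j) for i in range(5) for j in range(5)], float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

sigma2, ell = 1.0, 2.0                 # variance and decay length (assumed)
C = sigma2 * np.exp(-d / ell)          # exponential-decay covariance

# Jointly Gaussian pure-pixel values from one patch, one spectral channel.
sample = rng.multivariate_normal(np.zeros(len(coords)), C)
print(sample.round(2))
```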
Duong, N; Torre, P; Springer, G; Cox, C; Plankey, MW
2017-01-01
Objective Research has established that human immunodeficiency virus (HIV) causes hearing loss. Studies have yet to evaluate the impact on quality of life (QOL). This project evaluates the effect of hearing loss on QOL by HIV status. Methods The study participants were from the Multicenter AIDS Cohort Study (MACS) and the Women's Interagency HIV study (WIHS). A total of 248 men and 127 women participated. Pure-tone air conduction thresholds were collected for each ear at frequencies from 250 through 8000 Hz. Pure-tone averages (PTAs) for each ear were calculated as the mean of air conduction thresholds in low frequencies (i.e., 250, 500, 1000 and 2000 Hz) and high frequencies (i.e., 3000, 4000, 6000 and 8000 Hz). QOL data were gathered with the Short Form 36 Health Survey and Medical Outcome Study (MOS)-HIV instrument in the MACS and WIHS, respectively. A median regression analysis was performed to test the association of PTAs with QOL by HIV status. Results There was no significant association between hearing loss and QOL scores at low and high pure tone averages in HIV positive and negative individuals. HIV status, HIV biomarkers and treatment did not change the lack of association of low and high pure tone averages with poorer QOL. Conclusion Although we did not find a statistically significant association of hearing loss with QOL by HIV status, testing for hearing loss with aging and recommending treatment may offset any presumed later life decline in QOL. PMID:28217403
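The pure-tone averages used in the regression are simple means over the two frequency bands; a minimal sketch follows (thresholds invented).

```python
import numpy as np

def pure_tone_averages(thresholds):
    """thresholds: dict mapping frequency (Hz) to the air conduction
    threshold (dB HL) for one ear."""
    low = np.mean([thresholds[f] for f in (250, 500, 1000, 2000)])
    high = np.mean([thresholds[f] for f in (3000, 4000, 6000, 8000)])
    return low, high

ear = {250: 10, 500: 10, 1000: 15, 2000: 20,
       3000: 30, 4000: 40, 6000: 45, 8000: 50}
print(pure_tone_averages(ear))   # (13.75, 41.25) dB HL
```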
Accurate mass measurement: terminology and treatment of data.
Brenton, A Gareth; Godfrey, A Ruth
2010-11-01
High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.
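One of the basic calculations the article standardizes is the mass measurement error in parts per million and its replicate statistics; a minimal sketch with invented masses follows.

```python
import numpy as np
from scipy.stats import t

def ppm_error(measured, theoretical):
    """Mass measurement error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

# Replicate accurate-mass measurements of one ion (Da); values invented.
theoretical = 556.2771
measured = np.array([556.2768, 556.2775, 556.2770, 556.2779, 556.2766])

errs = ppm_error(measured, theoretical)
mean, sd = errs.mean(), errs.std(ddof=1)
# 95% confidence interval on the mean error (t distribution, n - 1 dof).
half = t.ppf(0.975, errs.size - 1) * sd / np.sqrt(errs.size)
print(f"mean = {mean:.2f} ppm, sd = {sd:.2f} ppm, 95% CI = +/- {half:.2f} ppm")
```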
A statistical study of decaying kink oscillations detected using SDO/AIA
NASA Astrophysics Data System (ADS)
Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.
2016-01-01
Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s-1. The main body of the data corresponds to kink speeds in the range Ck = (800-3300) km s-1. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
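The quoted kink speed follows from the linear scaling of period with loop length: for the fundamental kink mode P = 2L/Ck, so a zero-intercept least-squares fit of P against L gives Ck. The numbers below are illustrative, not the catalogue values.

```python
import numpy as np

# Loop lengths and kink periods (invented, not the catalogue data).
L = np.array([150, 220, 300, 380, 450], float) * 1e3   # km
P = np.array([230, 330, 450, 570, 680], float)         # s

# Fundamental kink mode: P = (2 / Ck) * L, a line through the origin.
slope = np.sum(L * P) / np.sum(L ** 2)    # least squares, zero intercept
Ck = 2.0 / slope
print(f"kink speed ~ {Ck:.0f} km/s")
```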
Kakinuma, R; Ashizawa, K; Kobayashi, T; Fukushima, A; Hayashi, H; Kondo, T; Machida, M; Matsusako, M; Minami, K; Oikado, K; Okuda, M; Takamatsu, S; Sugawara, M; Gomi, S; Muramatsu, Y; Hanai, K; Muramatsu, Y; Kaneko, M; Tsuchiya, R; Moriyama, N
2012-01-01
Objectives The objective of this study was to compare the sensitivity of detection of lung nodules on low-dose screening CT images between radiologists and technologists. Methods 11 radiologists and 10 technologists read the low-dose screening CT images of 78 subjects. On images with a slice thickness of 5 mm, there were 60 lung nodules that were ≥5 mm in diameter: 26 nodules with pure ground-glass opacity (GGO), 7 nodules with mixed ground-glass opacity (GGO with a solid component) and 27 solid nodules. On images with a slice thickness of 2 mm, 69 lung nodules were ≥5 mm in diameter: 35 pure GGOs, 7 mixed GGOs and 27 solid nodules. The 21 observers read screening CT images of 5-mm slice thickness at first; then, 6 months later, they read screening CT images of 2-mm slice thickness from the 78 subjects. Results The differences in the mean sensitivities of detection of the pure GGOs, mixed GGOs and solid nodules between radiologists and technologists were not statistically significant, except for the case of solid nodules; the p-values of the differences for pure GGOs, mixed GGOs and solid nodules on the CT images with 5-mm slice thickness were 0.095, 0.461 and 0.005, respectively, and the corresponding p-values on CT images of 2-mm slice thickness were 0.971, 0.722 and 0.0037, respectively. Conclusion Well-trained technologists may contribute to the detection of pure and mixed GGOs ≥5 mm in diameter on low-dose screening CT images. PMID:22919013
Kim, Yeon-Hee; Koak, Jai-Young; Chang, Ik-Tae; Wennerberg, Ann; Heo, Seong-Joo
2003-01-01
One major factor in the success and biocompatibility of an implant is its surface properties. The purposes of this study were to analyze the surface characteristics of implants after blasting and thermal oxidation and to evaluate the bone response around these implants with histomorphometric analysis. Threaded implants (3.75 mm in diameter, 8.0 mm in length) were manufactured by machining commercially pure titanium (grade 2). A total of 48 implants were evaluated with histomorphometric methods and included in the statistical analyses. Two different groups of samples were prepared according to the following procedures: Group 1 samples were blasted with 50-microm aluminum oxide (Al2O3) particles, and group 2 samples were blasted with 50-microm Al2O3, then thermally oxidized at 800 degrees C for 2 hours in a pure oxygen atmosphere. A noncontacting optical profilometer was used to measure the surface topography. The surface composition of the implants and the oxide thickness were investigated with Rutherford backscattering spectrometry. The different preparations produced implant surfaces with essentially similar chemical composition, but with different oxide thickness and roughness. The morphologic evaluation of the bone formation revealed that: (1) the percentage of bone-to-implant contact of the oxidized implants (33.3%) after 4 weeks was greater than that of the blasted group (23.1%); (2) the percentages of bone-to-implant contact after 12 weeks were not statistically significantly different between the groups; and (3) the percentages of bone area inside the thread after 4 weeks and 12 weeks were not statistically significantly different between groups. This investigation demonstrated the possibility that different surface treatments, such as blasting and oxidation, have an effect on the ingrowth of bone into the thread. However, the clinical implications of surface treatments on implants, and the exact mechanisms by which the surface properties of the implant affect the process of osseointegration, remain subjects for further study.
Mabood, Fazal; Abbas, Ghulam; Jabeen, Farah; Naureen, Zakira; Al-Harrasi, Ahmed; Hamaed, Ahmad M; Hussain, Javid; Al-Nabhani, Mahmood; Al Shukaili, Maryam S; Khan, Alamgir; Manzoor, Suryyia
2018-03-01
Cows' butterfat may be adulterated with animal fat materials like tallow, which causes increased serum cholesterol and triglycerides levels upon consumption. There is no reliable technique to detect and quantify tallow adulteration in butter samples in a feasible way. In this study a highly sensitive near-infrared (NIR) spectroscopy method combined with chemometric methods was developed to detect as well as quantify the level of tallow adulterant in clarified butter samples. For this investigation the pure clarified butter samples were intentionally adulterated with tallow at the following percentage levels: 1%, 3%, 5%, 7%, 9%, 11%, 13%, 15%, 17% and 20% (wt/wt). Altogether 99 clarified butter samples were used, including nine pure samples (un-adulterated clarified butter) and 90 clarified butter samples adulterated with tallow. Each sample was analysed by using NIR spectroscopy in the reflection mode in the range 10,000-4000 cm-1, at 2 cm-1 resolution and using the transflectance sample accessory, which provided a total path length of 0.5 mm. Chemometric models including principal components analysis (PCA), partial least-squares discriminant analysis (PLSDA), and partial least-squares regression (PLSR) were applied for statistical treatment of the obtained NIR spectral data. The PLSDA model was employed to differentiate pure butter samples from those adulterated with tallow. The employed model was then externally cross-validated by using a test set which included 30% of the total butter samples. The excellent performance of the model was proved by the low RMSEP value of 1.537% and the high correlation factor of 0.95. This newly developed method is robust, non-destructive, highly sensitive, and economical, with very minor sample preparation and good ability to quantify less than 1.5% of tallow adulteration in clarified butter samples.
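The PLS step can be sketched with scikit-learn: a PLS regression on (synthetic) spectra predicts the tallow level, and the RMSEP on a 30% held-out test set plays the role of the external validation reported above; PLS-DA amounts to the same machinery regressed on class labels. The spectra below are simulated stand-ins, not real NIR data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Stand-in spectra: 99 samples x 500 channels; y is the tallow level in
# percent (0 for the nine pure butters), matching the adulteration design.
levels = np.repeat([0, 1, 3, 5, 7, 9, 11, 13, 15, 17, 20], 9).astype(float)
X = (np.outer(levels, np.sin(np.linspace(0.0, 3.0, 500)))  # adulterant signal
     + 0.5 * rng.standard_normal((99, 500)))               # instrument noise

X_tr, X_te, y_tr, y_te = train_test_split(X, levels, test_size=0.3,
                                          random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))
print(f"RMSEP = {rmsep:.3f}%, r = {np.corrcoef(y_te, y_hat)[0, 1]:.3f}")
```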
Sampling methods to the statistical control of the production of blood components.
Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo
2017-12-01
The control of blood component specifications is a requirement generalized in Europe by the European Commission Directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The reliability of this control depends on the sampling. However, a correct sampling methodology does not seem to be systematically applied. Commonly, the sampling is intended solely to comply with the 1% specification for the produced blood components. Nevertheless, from a purely statistical viewpoint, this model is arguably not a consistent sampling technique. This could be a severe limitation in detecting abnormal patterns and in assuring that the production has a non-significant probability of producing nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments, contributing to the robustness of sampling and of the related statistical process control decisions for the purpose for which they are suggested. Copyright © 2017 Elsevier Ltd. All rights reserved.
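Sampling based on the proportion of a finite population can be sketched with Cochran's formula plus the finite-population correction (one standard reading of that method; the article's exact formulation is not reproduced here). The defaults below target the 1% specification, and the lot sizes are illustrative.

```python
import math

def sample_size(N, p=0.01, e=0.01, z=1.96):
    """Sample size for estimating a proportion in a finite lot of N units
    (Cochran's formula with finite-population correction). Defaults target
    the 1% nonconformity specification with a 1% margin at 95% confidence."""
    n0 = z ** 2 * p * (1 - p) / e ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

for N in (100, 500, 2000, 10000):    # monthly production of a component
    print(N, sample_size(N))
```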
Hu, Ting; Chen, Yuanzhu; Kiralis, Jeff W; Collins, Ryan L; Wejse, Christian; Sirugo, Giorgio; Williams, Scott M; Moore, Jason H
2013-01-01
Background Epistasis has been historically used to describe the phenomenon that the effect of a given gene on a phenotype can be dependent on one or more other genes, and is an essential element for understanding the association between genetic and phenotypic variations. Quantifying epistasis of orders higher than two is very challenging due to both the computational complexity of enumerating all possible combinations in genome-wide data and the lack of efficient and effective methodologies. Objectives In this study, we propose a fast, non-parametric, and model-free measure for three-way epistasis. Methods Such a measure is based on information gain, and is able to separate all lower order effects from pure three-way epistasis. Results Our method was verified on synthetic data and applied to real data from a candidate-gene study of tuberculosis in a West African population. In the tuberculosis data, we found a statistically significant pure three-way epistatic interaction effect that was stronger than any lower-order associations. Conclusion Our study provides a methodological basis for detecting and characterizing high-order gene-gene interactions in genetic association studies. PMID:23396514
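The information-gain decomposition can be sketched directly from entropies of the observed counts: pure three-way gain is the total association minus all main effects and pairwise gains. The sketch below follows that decomposition (notation assumed, not copied from the paper) and recovers about 1 bit on a synthetic three-way XOR, where no single variable or pair is informative.

```python
import numpy as np
from itertools import combinations

def H(*vs):
    """Joint Shannon entropy (bits) of discrete variables given as arrays."""
    counts = np.unique(np.column_stack(vs), axis=0, return_counts=True)[1]
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def I(y, *xs):
    """Mutual information I(X1,...,Xk ; Y)."""
    return H(*xs) + H(y) - H(*xs, y)

def pure_three_way(y, x1, x2, x3):
    """Information gain attributable only to the three-way interaction:
    total association minus main effects and pairwise gains."""
    xs = (x1, x2, x3)
    mains = sum(I(y, x) for x in xs)
    pairs = sum(I(y, a, b) - I(y, a) - I(y, b)
                for a, b in combinations(xs, 2))
    return I(y, *xs) - pairs - mains

# Synthetic three-way XOR: no single variable or pair is informative alone.
rng = np.random.default_rng(8)
x1, x2, x3 = rng.integers(0, 2, size=(3, 5000))
y = x1 ^ x2 ^ x3
print(pure_three_way(y, x1, x2, x3))   # close to 1 bit of pure 3-way signal
```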
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2013-01-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K). PMID:25685493
P values are only an index to evidence: 20th- vs. 21st-century statistical science.
Burnham, K P; Anderson, D R
2014-03-01
Early statistical methods focused on pre-data probability statements (i.e., data as random variables) such as P values; these are not really inferences, nor are P values evidential. Statistical science clung to these principles throughout much of the 20th century as a wide variety of methods were developed for special cases. Looking back, it is clear that the underlying paradigm (i.e., testing and P values) was weak. As Kuhn (1970) suggests, new paradigms have taken the place of earlier ones: this is a goal of good science. New methods have been developed and older methods extended, and these allow proper measures of strength of evidence and multimodel inference. It is time to move forward with sound theory and practice for the difficult practical problems that lie ahead. Given data, the useful foundation shifts to post-data probability statements such as model probabilities (Akaike weights) or related quantities such as odds ratios and likelihood intervals. These new methods allow formal inference from multiple models in the a priori set. These quantities are properly evidential. The past century was aimed at finding the "best" model and making inferences from it. The goal in the 21st century is to base inference on all the models weighted by their model probabilities (model averaging). Estimates of precision can include model selection uncertainty, leading to variances conditional on the model set. The 21st century will be about the quantification of information, proper measures of evidence, and multimodel inference. Nelder (1999:261) concludes, "The most important task before us in developing statistical science is to demolish the P-value culture, which has taken root to a frightening extent in many areas of both pure and applied science and technology".
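Akaike weights, the model probabilities advocated here, take only a few lines: rescale each model's AIC by the minimum, exponentiate, and normalize. The AIC values below are invented for illustration.

```python
import numpy as np

def akaike_weights(aic):
    """Model probabilities (Akaike weights) from a set of AIC values."""
    delta = np.asarray(aic, float) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# AIC values for four candidate models (illustrative).
aic = [1002.3, 1000.0, 1004.8, 1010.1]
w = akaike_weights(aic)
print(w.round(3))                # evidence for each model
print((w[1] / w[0]).round(2))    # evidence ratio, best vs runner-up
# Model-averaged inference weights each model's estimate by w.
```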
Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture
NASA Astrophysics Data System (ADS)
Potoff, Jeffrey J.; Panagiotopoulos, Athanassios Z.
1998-12-01
Monte Carlo simulations in the grand canonical ensemble were used to obtain liquid-vapor coexistence curves and critical points of the pure fluid and a binary mixture of Lennard-Jones particles. Critical parameters were obtained from mixed-field finite-size scaling analysis and subcritical coexistence data from histogram reweighting methods. The critical parameters of the untruncated Lennard-Jones potential were obtained as Tc*=1.3120±0.0007, ρc*=0.316±0.001 and pc*=0.1279±0.0006. Our results for the critical temperature and pressure are not in agreement with the recent study of Caillol [J. Chem. Phys. 109, 4885 (1998)] on a four-dimensional hypersphere. Mixture parameters were ɛ1=2ɛ2 and σ1=σ2, with Lorentz-Berthelot combining rules for the unlike-pair interactions. We determined the critical point at T*=1.0 and pressure-composition diagrams at three temperatures. Our results have much smaller statistical uncertainties relative to comparable Gibbs ensemble simulations.
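Histogram reweighting, used here to map subcritical coexistence data between state points, follows from the grand canonical measure P(N, E) ∝ Ω exp(βμN − βE); a sketch on a toy histogram follows, with all state-point values illustrative.

```python
import numpy as np

def reweight(hist, N_grid, E_grid, beta0, mu0, beta1, mu1):
    """Single-histogram reweighting of a grand canonical (N, E) histogram
    collected at (beta0, mu0) to a nearby point (beta1, mu1), using
    P'(N, E) ~ P(N, E) * exp[(beta1*mu1 - beta0*mu0)*N - (beta1 - beta0)*E]."""
    N, E = np.meshgrid(N_grid, E_grid, indexing="ij")
    logw = (beta1 * mu1 - beta0 * mu0) * N - (beta1 - beta0) * E
    logw -= logw.max()                      # guard against overflow
    P = hist * np.exp(logw)
    return P / P.sum()

# Toy histogram over particle number and energy (stand-in for GCMC output).
N_grid = np.arange(0, 60)
E_grid = np.linspace(-300.0, 0.0, 80)
N, E = np.meshgrid(N_grid, E_grid, indexing="ij")
hist = np.exp(-((N - 30) / 6.0) ** 2 - ((E + 150) / 30.0) ** 2)

P_new = reweight(hist, N_grid, E_grid, beta0=0.9, mu0=-3.0,
                 beta1=0.95, mu1=-3.1)
print(P_new.sum(), (N * P_new).sum())   # normalization and shifted <N>
```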
Manual for the Jet Event and Background Simulation Library (JEBSimLib)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, Matthias; Soltz, Ron; Angerami, Aaron
Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial-state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events is used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.
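The quench-and-background model lends itself to a toy sketch: jet particle momenta are scaled down by a constant fraction, and the thermal background draws momentum magnitudes from a Boltzmann spectrum dN/dp ∝ p² e^(−p/T), i.e. a shape-3 gamma distribution. This is a surrogate for illustration only; JEBSimLib itself uses PYTHIA and TRENTO, and all numbers below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def quench_jet(p, fraction):
    """Reduce every jet particle's momentum by a constant, user-defined
    fraction -- the quench model described above."""
    return p * (1.0 - fraction)

def thermal_background(multiplicity, T=0.3):
    """Toy thermal background: momentum magnitudes drawn from a Boltzmann
    spectrum dN/dp ~ p^2 exp(-p/T), i.e. a shape-3 gamma distribution
    (T in GeV). A surrogate for the TRENTO-fed thermal model."""
    return rng.gamma(shape=3.0, scale=T, size=multiplicity)

jet = rng.exponential(scale=8.0, size=15) + 2.0   # hard jet particles (GeV)
event = np.concatenate([quench_jet(jet, fraction=0.25),
                        thermal_background(multiplicity=400)])
print(event.size, round(event.sum(), 1))   # multiplicity, summed momentum
```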
NASA Astrophysics Data System (ADS)
Sudhakar, Beeravelli; Krishna, Mylangam Chaitanya; Murthy, Kolapalli Venkata Ramana
2016-01-01
The aim of the present study was to formulate and evaluate ritonavir-loaded stealth liposomes using a 3² factorial design, intended for parenteral delivery. Liposomes were prepared by the ethanol injection method according to the 3² factorial design and characterized for various physicochemical parameters such as drug content, size, zeta potential, entrapment efficiency and in vitro drug release. The optimization process was carried out using desirability and overlay plots. The selected formulation was subjected to PEGylation using 10 % PEG-10000 solution. Stealth liposomes were characterized for the above-mentioned parameters along with surface morphology, Fourier transform infrared spectroscopy, differential scanning calorimetry, stability and in vivo pharmacokinetic studies in rats. Stealth liposomes showed better results compared to conventional liposomes due to the effect of PEG-10000. The in vivo studies revealed that stealth liposomes showed better residence time compared to conventional liposomes and pure drug solution. The conventional liposomes and pure drug showed dose-dependent pharmacokinetics, whereas stealth liposomes showed a long circulation half-life compared to conventional liposomes and pure ritonavir solution. Statistical analysis by one-way ANOVA showed a significant difference (p < 0.05). The results of the present study revealed that stealth liposomes are a promising tool in antiretroviral therapy.
2013-01-01
Background As a result of changes in climatic conditions and greater resistance to insecticides, many regions across the globe, including Colombia, have been facing a resurgence of vector-borne diseases, and dengue fever in particular. Timely information on both (1) the spatial distribution of the disease, and (2) prevailing vulnerabilities of the population is needed to adequately plan targeted preventive intervention. We propose a methodology for the spatial assessment of current socioeconomic vulnerabilities to dengue fever in Cali, a tropical urban environment of Colombia. Methods Based on a set of socioeconomic and demographic indicators derived from census data and ancillary geospatial datasets, we develop a spatial approach for both expert-based and purely statistical modeling of current vulnerability levels across 340 neighborhoods of the city using a Geographic Information System (GIS). The results of both approaches are comparatively evaluated by means of spatial statistics. A web-based approach is proposed to facilitate the visualization and the dissemination of the output vulnerability index to the community. Results The statistical and the expert-based modeling approaches exhibit a high concordance, both globally and spatially. The expert-based approach indicates a slightly higher vulnerability mean (0.53) and vulnerability median (0.56) across all neighborhoods, compared to the purely statistical approach (mean = 0.48; median = 0.49). Both approaches reveal that high values of vulnerability tend to cluster in the eastern, north-eastern, and western parts of the city. These are poor neighborhoods with high percentages of young (i.e., < 15 years) and illiterate residents, as well as a high proportion of individuals being either unemployed or doing housework. Conclusions Both modeling approaches reveal similar outputs, indicating that in the absence of local expertise, statistical approaches could be used, with caution. By decomposing identified vulnerability “hotspots” into their underlying factors, our approach provides valuable information on both (1) the location of neighborhoods, and (2) vulnerability factors that should be given priority in the context of targeted intervention strategies. The results support decision makers in allocating resources in a manner that may reduce existing susceptibilities and strengthen resilience, and thus help to reduce the burden of vector-borne diseases. PMID:23945265
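A minimal sketch of how such a composite index can be computed, assuming min-max-normalized indicators combined by a weighted average (equal weights standing in for the purely statistical variant, expert-elicited weights for the other); the indicator values and function name below are invented for illustration.

```python
import numpy as np

def vulnerability_index(indicators, weights=None):
    """Composite vulnerability per neighbourhood: min-max normalize each
    indicator column to [0, 1], then take a weighted average. Equal
    weights mimic a purely statistical composite; expert weights from
    stakeholder elicitation can be substituted."""
    X = np.asarray(indicators, dtype=float)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    if weights is None:
        weights = np.full(X.shape[1], 1.0 / X.shape[1])
    return X @ weights

# toy rows = neighbourhoods; cols = % young, % illiterate, % unemployed
data = [[40, 12, 9], [22, 4, 5], [35, 15, 14]]
print(vulnerability_index(data))
```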
NASA Astrophysics Data System (ADS)
Bianchi, Eugenio; Haggard, Hal M.; Rovelli, Carlo
2017-08-01
We show that in Oeckl's boundary formalism the boundary vectors that do not have a tensor form represent, in a precise sense, statistical states. Therefore the formalism incorporates quantum statistical mechanics naturally. We formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, suggesting that local gravitational processes are naturally statistical without a sharp quantal versus probabilistic distinction.
Fontana, F; Rapone, C; Bregola, G; Aversa, R; de Meo, A; Signorini, G; Sergio, M; Ferrarini, A; Lanzellotto, R; Medoro, G; Giorgini, G; Manaresi, N; Berti, A
2017-07-01
The latest genotyping technologies make it possible to obtain a reliable genetic profile for offender identification even from extremely minute biological evidence. The ultimate challenge occurs when genetic profiles need to be retrieved from a mixture, which is composed of biological material from two or more individuals. In this case, DNA profiling will often result in a complex genetic profile, which is then subject to statistical analysis. In principle, when several individuals contribute to a mixture with different biological fluids, their individual genetic profiles can be obtained by separating the distinct cell types (e.g. epithelial cells, blood cells, sperm) prior to genotyping. Different approaches have been investigated for this purpose, such as fluorescence-activated cell sorting (FACS) or laser capture microdissection (LCM), but currently none of these methods can guarantee the complete separation of the different types of cells present in a mixture. In other fields of application, such as oncology, DEPArray™ technology, an image-based, microfluidic digital sorter, has been widely proven to enable the separation of pure cells with single-cell precision. This study investigates the applicability of DEPArray™ technology to forensic sample analysis, focusing on the resolution of the forensic mixture problem. For the first time, we report here the development of an application-specific DEPArray™ workflow enabling the detection and recovery of pure homogeneous cell pools from simulated blood/saliva and semen/saliva mixtures, providing a full genetic match with the genetic profiles of the corresponding donors. In addition, we assess the performance of standard forensic methods for DNA quantitation and genotyping on low-count, DEPArray™-isolated cells, showing that pure, almost complete profiles can be obtained from as few as ten haploid cells. Finally, we explore the applicability to real casework samples, demonstrating that the described approach provides complete separation of cells with outstanding precision. In all examined cases, DEPArray™ technology proves to be a groundbreaking technology for the resolution of forensic biological mixtures, through the precise isolation of pure cells for an incontrovertible attribution of the obtained genetic profiles. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Button, D. K.; Schut, Frits; Quang, Pham; Martin, Ravonna; Robertson, Betsy R.
1993-01-01
Dilution culture, a method for growing the typical small bacteria from natural aquatic assemblages, has been developed. Each of 11 experimental trials of the technique was successful. Populations are measured, diluted to a small and known number of cells, inoculated into unamended sterilized seawater, and examined three times for the presence of 10⁴ or more cells per ml over a 9-week interval. Mean viability for assemblage members is obtained from the frequency of growth, and many of the cultures produced are pure. Statistical formulations for determining viability and the frequency of pure culture production are derived. Formulations for associated errors are derived as well. Computer simulations of experiments agreed with computed values within the expected error, which verified the formulations. These led to strategies for optimizing viability determinations and pure culture production. Viabilities were usually between 2 and 60% and decreased with >5 mg of amino acids per liter as carbon. In view of difficulties in growing marine oligobacteria, these high values are noteworthy. Significant differences in population characteristics during growth, observed by high-resolution flow cytometry, suggested substantial population diversity. Growth of total populations, as well as of cytometry-resolved subpopulations, was sometimes truncated at levels near 10⁴ cells per ml, showing that viable cells could escape detection. Viability is therefore defined as the ability to grow to that population; true viabilities could be even higher. Doubling times, based on whole populations as well as individual subpopulations, were in the 1-day to 1-week range. Data were examined for changes in viability with dilution suggesting cell-cell interactions, but none could be confirmed. The frequency of pure culture production can be adjusted by inoculum size if the viability is known. The apparently pure cultures produced retained the size and apparent DNA content characteristic of the bulk of the organisms in the parent seawater. Three cultures are now available, two of which have been carried for 3 years. The method is thus seen as a useful step toward improving our understanding of typical aquatic organisms. PMID:16348896
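The statistical formulations referred to above follow from Poisson statistics; here is a minimal sketch, assuming each inoculated cell grows independently with probability v (function names are ours, and the numbers are illustrative).

```python
import numpy as np

def viability_from_dilution(n_inoculated, frac_grown):
    """With n cells inoculated per tube and per-cell viability v, the
    number of viable cells per tube is ~ Poisson(n*v), so the chance a
    tube shows growth is 1 - exp(-n*v). Invert the observed fraction of
    positive tubes to estimate v."""
    return -np.log(1.0 - frac_grown) / n_inoculated

def prob_pure_culture(n_inoculated, viability):
    """Probability that a positive tube grew from exactly one viable
    cell, i.e. yields a pure culture: P(N = 1 | N >= 1), N ~ Poisson."""
    lam = n_inoculated * viability
    return lam * np.exp(-lam) / (1.0 - np.exp(-lam))

v = viability_from_dilution(n_inoculated=5, frac_grown=0.4)
print(f"estimated viability: {v:.2%}")
print(f"chance a positive tube is pure: {prob_pure_culture(5, v):.2%}")
```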
Rehabilitation of pure alexia: A review
Starrfelt, Randi; Ólafsdóttir, Rannveig Rós; Arendt, Ida-Marie
2013-01-01
Acquired reading problems caused by brain injury (alexia) are common, either as part of an aphasic syndrome or as an isolated symptom. In pure alexia, reading is impaired while other language functions, including writing, are spared. Being in many ways a simple syndrome, pure alexia might seem an easy target for rehabilitation efforts. We review the literature on rehabilitation of pure alexia from 1990 to the present, and find that patients differ widely on several dimensions, such as alexia severity and associated deficits. Many patients reported to have pure alexia in the reviewed studies have associated deficits such as agraphia or aphasia and thus do not strictly conform to the diagnosis. Few studies report clear and generalisable effects of training, none report control data, and in many cases the reported findings are not supported by statistics. We can, however, tentatively conclude that Multiple Oral Re-reading techniques may have some effect in mild pure alexia where diminished reading speed is the main problem, while Tactile-Kinesthetic training may improve letter identification in more severe cases of alexia. There is, however, still a great need for well-designed and controlled studies of rehabilitation of pure alexia. PMID:23808895
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Method of preparing pure fluorine gas
Asprey, Larned B.
1976-01-01
A simple, inexpensive system for purifying and storing pure fluorine is described. The method utilizes alkali metal-nickel fluorides to absorb tank fluorine by forming nickel complex salts and leaving the gaseous impurities which are pumped away. The complex nickel fluoride is then heated to evolve back pure gaseous fluorine.
Welvaert, Marijke; Caley, Peter
2016-01-01
Citizen science and crowdsourcing have been emerging as methods to collect data for surveillance and/or monitoring activities. They could be gathered under the overarching term citizen surveillance. The discipline, however, still struggles to be widely accepted in the scientific community, mainly because these activities are not embedded in a quantitative framework. This results in an ongoing discussion on how to analyze and make useful inference from these data. When considering the data collection process, we illustrate how citizen surveillance can be classified according to the nature of the underlying observation process, measured in two dimensions: the degree of observer reporting intention and the control in observer detection effort. By classifying the observation process in these dimensions we distinguish between crowdsourcing, unstructured citizen science and structured citizen science. This classification helps determine the data processing and statistical treatment of these data for making inference. Using our framework, it is apparent that published studies are overwhelmingly associated with structured citizen science, and there are well developed statistical methods for the resulting data. In contrast, methods for making useful inference from purely crowd-sourced data remain under development, with the challenges of accounting for the unknown observation process considerable. Our quantitative framework for citizen surveillance calls for an integration of citizen science and crowdsourcing and provides a way forward to solve the statistical challenges inherent to citizen-sourced data.
Jenkins, Martin
2016-01-01
Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084
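The premise that dichotomization loses power is easy to illustrate by simulation; the sketch below uses an invented effect size and responder cutoff (not the trial data above) to compare a t-test on the continuous score against a chi-square test on responder counts.

```python
import numpy as np
from scipy import stats

# invented effect size and responder cutoff, for illustration only
rng = np.random.default_rng(2)
n, delta, cutoff, n_sims = 100, 0.4, 0.0, 2000
hits_cont = hits_bin = 0
for _ in range(n_sims):
    ctrl = rng.normal(0.0, 1.0, n)
    trt = rng.normal(delta, 1.0, n)
    hits_cont += stats.ttest_ind(trt, ctrl).pvalue < 0.05
    table = [[(trt > cutoff).sum(), (trt <= cutoff).sum()],
             [(ctrl > cutoff).sum(), (ctrl <= cutoff).sum()]]
    chi2, p, dof, _ = stats.chi2_contingency(table)
    hits_bin += p < 0.05
print(f"power, continuous endpoint:   {hits_cont / n_sims:.2f}")
print(f"power, dichotomized endpoint: {hits_bin / n_sims:.2f}")
```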
Vinay, K. B.; Revanasiddappa, H. D.; Raghu, M. S.; Abdulrahman, Sameer. A. M.; Rajendraprasad, N.
2012-01-01
Two simple, selective, and rapid spectrophotometric methods are described for the determination of mycophenolate mofetil (MPM) in pure form and in tablets. Both methods are based on the charge-transfer complexation reaction of MPM with p-chloranilic acid (p-CA) or 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) in dioxane-acetonitrile medium, resulting in a coloured product measurable at 520 nm (p-CA) or 580 nm (DDQ). Beer's law is obeyed over the concentration ranges of 40–400 and 12–120 μg mL⁻¹ MPM for p-CA and DDQ, respectively, with correlation coefficients (r) of 0.9995 and 0.9947. The apparent molar absorptivity values are calculated to be 1.06 × 10³ and 3.87 × 10³ L mol⁻¹ cm⁻¹, respectively, and the corresponding Sandell's sensitivities are 0.4106 and 0.1119 μg cm⁻². The limits of detection (LOD) and quantification (LOQ) are also reported for both methods. The described methods were successfully applied to the determination of MPM in tablets. Statistical comparison of the results with those of the reference method showed excellent agreement. No interference was observed from the common excipients present in tablets. Both methods were validated statistically for accuracy and precision. The accuracy and reliability of the methods were further ascertained by recovery studies via a standard addition procedure. PMID:22567572
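A minimal sketch of the calibration arithmetic underlying such methods, with invented absorbance readings; the LOD and LOQ here use the common 3.3s/slope and 10s/slope formulas (s = residual standard deviation), which may differ from the exact procedure in the paper.

```python
import numpy as np

# illustrative Beer's law calibration: absorbance vs. concentration
conc = np.array([40, 120, 200, 280, 360, 400], dtype=float)  # ug/mL
absb = np.array([0.11, 0.33, 0.54, 0.77, 0.98, 1.09])        # at 520 nm
b, a = np.polyfit(conc, absb, 1)          # slope b, intercept a
resid = absb - (b * conc + a)
s = resid.std(ddof=2)                     # residual standard deviation
print(f"slope b = {b:.4e} AU per ug/mL, intercept a = {a:.4f}")
print(f"LOD = {3.3 * s / b:.1f} ug/mL, LOQ = {10 * s / b:.1f} ug/mL")
```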
40 CFR 799.6755 - TSCA partition coefficient (n-octanol/water), shake flask method.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Qualifying statements. This method applies only to pure, water soluble substances which do not dissociate or... applies to a pure substance dispersed between two pure solvents. If several different solutes occur in one... applied. The values presented in table 1 of this section are not necessarily representative of the results...
Salem, Alaa A; Mossa, Hussein A
2012-01-15
A selective, rapid and accurate quantitative proton nuclear magnetic resonance (qHNMR) method for the determination of levofloxacin, metronidazole benzoate and sulfamethoxazole in aqueous solutions was developed and validated. The method was successfully applied to the determination of the drugs and their admixtures in pharmaceutical, urine and plasma samples. Maleic acid and sodium malate were used as internal standards. The effect of temperature on spectral measurements was evaluated. Linear dynamic ranges of 0.50-68.00, 0.13-11.30 and 0.24-21.00 mg per 0.60 mL solution were obtained for levofloxacin, metronidazole benzoate and sulfamethoxazole, respectively. Average recoveries in the range of 96.00-104.20% ± (0.17-2.91) were obtained for the drugs in pure, pharmaceutical, plasma and urine samples. Inter- and intra-day analyses gave average recoveries in the ranges of 96.10-98.40% ± (1.68-2.81) and 96.00-104.20% ± (0.17-2.91), respectively. Instrumental detection limits ≤0.03 mg per 0.6 mL were obtained for the three drugs. The developed method demonstrated high performance characteristics for analyzing the investigated drugs and their admixtures. Student's t-test at the 95% confidence level revealed insignificant bias between the real and measured contents of the investigated drugs in pure, pharmaceutical, urine and plasma samples and their admixtures. Application of the statistical F-test revealed insignificant differences in precision between the developed method and arbitrarily selected reference methods. Copyright © 2011 Elsevier B.V. All rights reserved.
Dentinal tubule occluding capability of nano-hydroxyapatite; The in-vitro evaluation.
Baglar, Serdar; Erdem, Umit; Dogan, Mustafa; Turkoz, Mustafa
2018-04-29
In this in-vitro study, the effectiveness of experimental pure nano-hydroxyapatite (nano-HAp) and 1%, 2%, and 3% F⁻-doped nano-HAp on dentine tubule occlusion was investigated, and the cytotoxicity of the materials used in the experiment was evaluated. The nano-HAp types were synthesized by the precipitation method. Forty dentin specimens were randomly divided into five groups: 1, no treatment (control); 2, specimens treated with 10% pure nano-HAp; and 3, 4, 5, specimens treated with 1%, 2%, and 3% F⁻-doped 10% nano-HAp, respectively. To evaluate the effectiveness of the materials used, pH, FTIR, and scanning electron microscopy evaluations were performed before and after degradation in simulated body fluid. To determine the cytotoxicity of the materials, the MTT assay was performed. Statistical evaluations were performed with F and t tests. All of the nano-HAp materials used in this study built up an effective covering layer on the dentin surfaces, even with plugs in the tubules. This layer also showed resistance to degradation. None of the evaluated nano-HAp types showed toxicity. Fluoride doping had a positive effect on physical and chemical stability up to a critical value of 1% F⁻. All the evaluated nano-HAp types may be effectively used in dentin hypersensitivity treatment. The formed nano-HAp layers appeared resistant to hydrolytic removal. The pure and 1% F⁻-doped nano-HAp showed the highest biocompatibility; thus, pure and 1% F⁻-doped materials may be used as active ingredients in dentin hypersensitivity agents. © 2018 Wiley Periodicals, Inc.
Lu, Alex Y; Turban, Jack L; Damisah, Eyiyemisi C; Li, Jie; Alomari, Ahmed K; Eid, Tore; Vortmeyer, Alexander O; Chiang, Veronica L
2017-08-01
OBJECTIVE Following an initial response of brain metastases to Gamma Knife radiosurgery, regrowth of the enhancing lesion as detected on MRI may represent either radiation necrosis (a treatment-related inflammatory change) or recurrent tumor. Differentiation of radiation necrosis from tumor is vital for management decision making but remains difficult by imaging alone. In this study, gas chromatography with time-of-flight mass spectrometry (GC-TOF) was used to identify differential metabolite profiles of the 2 tissue types obtained by surgical biopsy to find potential targets for noninvasive imaging. METHODS Specimens of pure radiation necrosis and pure tumor obtained from patient brain biopsies were flash-frozen and validated histologically. These formalin-free tissue samples were then analyzed using GC-TOF. The metabolite profiles of radiation necrosis and tumor samples were compared using multivariate and univariate statistical analysis. Statistical significance was defined as p ≤ 0.05. RESULTS For the metabolic profiling, GC-TOF was performed on 7 samples of radiation necrosis and 7 samples of tumor. Of the 141 metabolites identified, 17 (12.1%) were found to be statistically significantly different between comparison groups. Of these metabolites, 6 were increased in tumor, and 11 were increased in radiation necrosis. An unsupervised hierarchical clustering analysis found that tumor had elevated levels of metabolites associated with energy metabolism, whereas radiation necrosis had elevated levels of metabolites that were fatty acids and antioxidants/cofactors. CONCLUSIONS To the authors' knowledge, this is the first tissue-based metabolomics study of radiation necrosis and tumor. Radiation necrosis and recurrent tumor following Gamma Knife radiosurgery for brain metastases have unique metabolite profiles that may be targeted in the future to develop noninvasive metabolic imaging techniques.
[Study on the change of optical zone after femtosecond laser assisted laser in situ keratomileusis].
Li, H; Chen, M; Tian, L; Li, D W; Peng, Y S; Zhang, F F
2018-01-11
Objective: To explore the change of the optical zone after femtosecond laser assisted laser in situ keratomileusis (FS-LASIK), so as to provide a reference for the measurement and design of the clinical optical zone. Methods: This retrospective case series study covers 41 eyes of 24 patients (7 males and 17 females, aged from 18 to 42 years) with myopia and myopic astigmatism who received FS-LASIK surgery at the Corneal Refractive Department of Qingdao Eye Hospital and completed over 6 months of clinical follow-up. The Pentacam system (with the application of 6 corneal topographic map modes: the pure axial curvature topographic map, the pure tangential curvature topographic map, the axial curvature difference topographic map, the tangential curvature difference topographic map, the postoperative front elevation map and the corneal thickness difference topographic map), combined with transparent concentric software (a system independently developed by Qingdao Eye Hospital), was used to measure the optical zone at 1, 3 and 6 months postoperatively. The optical zone diameters measured at the different follow-up times were analyzed with repeated measures analysis of variance, and the actual measured values and the theoretical design values of the optical zone were compared with independent-samples t-tests. The Spearman correlation coefficient (r(s)) was applied to evaluate the relationship between the postoperative optical zone measurement values and potential influencing factors. Results: The optical zone diameters measured by the pure axial curvature topographic map at 1, 3 and 6 months after FS-LASIK were (6.55±0.50) mm, (6.50±0.53) mm and (6.48±0.53) mm, respectively; the differences were not statistically significant (F = 1.60, P = 0.21). The optical zone diameters measured by the pure tangential curvature topographic map at 1, 3 and 6 months were (5.44±0.46) mm, (5.46±0.52) mm and (5.44±0.50) mm, respectively; the differences were not statistically significant (F = 0.17, P = 0.85). The optical zone diameters measured by the postoperative front elevation map at 1, 3 and 6 months were (5.06±0.28) mm, (5.12±0.32) mm and (5.17±0.28) mm, respectively; the differences between the values at 3 and 6 months postoperatively were not statistically significant (F = 6.14, P = 0.15). The optical zone diameters measured by the axial curvature difference topographic map at 1, 3 and 6 months were (6.51±0.37) mm, (6.45±0.41) mm and (6.41±0.40) mm, respectively; the differences between the values at 3 and 6 months postoperatively were not statistically significant (F = 7.25, P = 0.05). The optical zone diameters measured by the tangential curvature difference topographic map at 1, 3 and 6 months were (5.21±0.23) mm, (5.16±0.19) mm and (5.17±0.20) mm, respectively; the differences between the values at 1 and 3 months postoperatively were statistically significant (F = 1.75, P = 0.04). The optical zone diameters measured by the corneal thickness difference topographic map at 1, 3 and 6 months were (6.53±0.40) mm, (6.39±0.43) mm and (6.41±0.47) mm, respectively; the differences between the values at 1 and 3 months postoperatively were statistically significant (F = 1.67, P = 0.032).
The actual measured optical zone values from the 6 different modes of the Pentacam system were less than the theoretical design value (7.75 mm), and the differences were statistically significant (t = -15.42, -29.39, -59.27, -21.47, -81.69, -18.22; P < 0.01). Conclusions: The optical zone measurement values tend to be stable at 3 months after FS-LASIK. The actual measured values from all 6 modes of the Pentacam system were less than the theoretical design values. The results from the pure tangential curvature topographic map, the tangential curvature difference topographic map and the postoperative front elevation map showed greater variation with clear borders, which is beneficial for eccentricity research. The results from the pure axial curvature topographic map, the axial curvature difference topographic map and the corneal thickness difference topographic map were close to the theoretically designed values. Furthermore, the axial curvature difference topographic map showed a clearer border and less variation, and thus may be more favorable for measuring the optical zone in clinical application. (Chin J Ophthalmol, 2018, 54: 39-47).
Fael, Hanan; Sakur, Amir Al-Haj
2015-11-01
A novel, simple and specific spectrofluorimetric method was developed and validated for the determination of perindopril erbumine (PDE). The method is based on the fluorescence quenching of Rhodamine B upon the addition of perindopril erbumine. The quenched fluorescence was monitored at 578 nm after excitation at 500 nm. Reaction conditions such as the solvent, reagent concentration and reaction time were optimized. Under the optimum conditions, the fluorescence quenching was linear over a concentration range of 1.0-6.0 μg/mL. The proposed method was fully validated and successfully applied to the analysis of perindopril erbumine in pure form and in tablets. Statistical comparison of the results obtained by the developed and reference methods revealed no significant differences between the methods in terms of accuracy and precision. The method was shown to be highly specific in the presence of indapamide, a diuretic that is commonly combined with perindopril erbumine. The mechanism of Rhodamine B quenching is also discussed.
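Quenching data of this kind are conventionally analyzed with a Stern-Volmer plot, F0/F = 1 + Ksv[Q]; a sketch with invented intensities follows (the paper's own calibration relates the quenched fluorescence directly to concentration, so this is an adjacent standard technique, not a reproduction of their procedure).

```python
import numpy as np

# invented Stern-Volmer data: F0 = intensity without quencher (drug),
# F = intensity at quencher concentration Q
F0 = 1000.0
Q = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])       # ug/mL
F = np.array([870., 765., 690., 625., 570., 525.])
Ksv, intercept = np.polyfit(Q, F0 / F, 1)          # F0/F = 1 + Ksv*Q
print(f"Stern-Volmer constant Ksv = {Ksv:.3f} mL/ug (intercept {intercept:.2f})")
```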
Training models of anatomic shape variability
Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang
2008-01-01
Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919
Research on the Value Evaluation of Used Pure Electric Car Based on the Replacement Cost Method
NASA Astrophysics Data System (ADS)
Tan, zhengping; Cai, yun; Wang, yidong; Mao, pan
2018-03-01
In this paper, the value of used pure electric cars is evaluated by the replacement cost method, filling a gap in the value evaluation of electric vehicles. Starting from the basic principle of the replacement cost method, combined with the actual cost structure of pure electric cars, a calculation method for the depreciation rate of used electric cars is put forward; the AHP method is used to construct the weight matrix of the comprehensive adjustment coefficient of related factors, improving the value evaluation system for used cars.
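A minimal sketch of the AHP step named above: weights are taken as the principal eigenvector of a pairwise-comparison matrix, with Saaty's consistency ratio as a sanity check. The comparison matrix below (battery health vs. mileage vs. calendar age) is hypothetical, not from the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise-comparison matrix
    as its principal eigenvector, plus the consistency ratio CR; the
    usual acceptability threshold is CR < 0.1."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)               # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)   # random-index table
    return w, ci / ri

# hypothetical depreciation factors for a used electric car
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```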
Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.
Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M
2016-07-14
Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.
Statistical approach to tunneling time in attosecond experiments
NASA Astrophysics Data System (ADS)
Demir, Durmuş; Güner, Tuğrul
2017-11-01
Tunneling, the transport of particles through classically forbidden regions, is a pure quantum phenomenon. It governs numerous phenomena ranging from single-molecule electronics to donor-acceptor transition reactions. The main problem is the absence of a universal method to compute the tunneling time. This problem has been attacked in various ways in the literature. In the present work, we show that a statistical approach to the problem, motivated by the imaginary nature of time in the forbidden regions, leads to a novel tunneling time formula which is real and subluminal (in contrast to various known time definitions implying superluminal tunneling). In addition, we show explicitly that the entropic time formula is in good agreement with the tunneling time measurements in laser-driven He ionization. Moreover, it sets an accurate range for long-range electron transfer reactions. The entropic time formula is general enough to extend to photon and phonon tunneling phenomena.
Statistical gamma-ray decay studies at iThemba LABS
NASA Astrophysics Data System (ADS)
Wiedeking, M.; Bernstein, L. A.; Bleuel, D. L.; Brits, C. P.; Sowazi, K.; Görgen, A.; Goldblum, B. L.; Guttormsen, M.; Kheswa, B. V.; Larsen, A. C.; Majola, S. N. T.; Malatji, K. L.; Negi, D.; Nogwanya, T.; Siem, S.; Zikhali, B. R.
2017-09-01
A program to study the γ-ray decay from the region of high level density has been established at iThemba LABS, where a high-resolution gamma-ray detector array is used in conjunction with silicon particle telescopes. Results from two recent projects are presented: 1) The 74Ge(α,α'γ) reaction was used to investigate the Pygmy Dipole Resonance. The results were compared to (γ,γ') data and indicate that the dipole states split into mixed-isospin and relatively pure isovector excitations. 2) Data from the 95Mo(d,p) reaction were used to develop a novel method for the determination of spins of low-lying discrete levels utilizing statistical γ-ray decay in the vicinity of the neutron separation energy. These results provide insight into the competition between (γ,n) and (γ,γ') reactions and highlight the need to correct for angular momentum barrier effects.
Identifiability of PBPK Models with Applications to ...
Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and the reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and a lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology.
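The flavor of structural non-identifiability discussed above can be seen in a one-compartment toy model, where bioavailability F and volume V enter the output only through F/V; the sketch below is an illustration of the general issue, not a PBPK model of DMA(V).

```python
import numpy as np

def conc(t, F, V, k, dose=100.0):
    """One-compartment model: C(t) = F*dose/V * exp(-k*t). Only the
    ratio F/V enters the output, so F and V are not separately
    identifiable from concentration data alone."""
    return F * dose / V * np.exp(-k * t)

t = np.linspace(0.0, 24.0, 50)
c1 = conc(t, F=0.8, V=40.0, k=0.2)
c2 = conc(t, F=0.4, V=20.0, k=0.2)   # F and V halved together
print("max |difference|:", np.max(np.abs(c1 - c2)))   # exactly 0
```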
Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl
2016-09-01
Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris® automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated to CFU/sample (R² = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to those of the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT, 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded the criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.
Laminate armor and related methods
Chu, Henry S; Lillo, Thomas M; Zagula, Thomas M
2013-02-26
Laminate armor and methods of manufacturing laminate armor. Specifically, laminate armor plates comprising a commercially pure titanium layer and a titanium alloy layer bonded to the commercially pure titanium outer layer are disclosed, wherein an average thickness of the titanium alloy inner layer is about four times an average thickness of the commercially pure titanium outer layer. In use, the titanium alloy layer is positioned facing an area to be protected. Additionally, roll-bonding methods for manufacturing laminate armor plates are disclosed.
Algebraic Algorithm Design and Local Search
1996-12-01
method for performing algorithm design that is more purely algebraic than that of KIDS. This method is then applied to local search. Local search is a...synthesis. Our approach was to follow KIDS in spirit, but to adopt a pure algebraic formalism, supported by Kestrel’s SPECWARE environment (79), that...design was developed that is more purely algebraic than that of KIDS. This method was then applied to local search. A general theory of local search was
An approach to quality and performance control in a computer-assisted clinical chemistry laboratory.
Undrill, P E; Frazer, S C
1979-01-01
A locally developed, computer-based clinical chemistry laboratory system has been in operation since 1970. This utilises a Digital Equipment Co Ltd PDP 12 and an interconnected PDP 8/F computer. Details are presented of the performance and quality control techniques incorporated into the system. Laboratory performance is assessed through analysis of results from fixed-level control sera as well as from cumulative sum methods. At a simple level the presentation may be considered purely indicative, while at a more sophisticated level statistical concepts have been introduced to aid the laboratory controller in decision-making processes. PMID:438340
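A tabular CUSUM of the kind mentioned above can be sketched in a few lines; the target, allowance k and decision limit h below are invented values, and in practice both are set as multiples of the assay standard deviation.

```python
import numpy as np

def cusum_signals(values, target, k, h):
    """Tabular CUSUM for control-sera results: accumulate deviations
    beyond the allowance k and signal when either one-sided sum
    exceeds the decision limit h."""
    hi = lo = 0.0
    signals = []
    for i, x in enumerate(values):
        hi = max(0.0, hi + (x - target - k))
        lo = max(0.0, lo + (target - x - k))
        if hi > h or lo > h:
            signals.append(i)
            hi = lo = 0.0            # restart the sums after a signal
    return signals

# invented control-serum run with an upward shift after sample 30
rng = np.random.default_rng(7)
run = np.concatenate([rng.normal(100, 2, 30), rng.normal(103, 2, 20)])
print("signals at sample indices:", cusum_signals(run, target=100, k=1.0, h=8.0))
```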
Alp, Alpaslan; Us, Dürdal; Hasçelik, Gülşen
2004-01-01
Rapid quantitative molecular methods are very important for the diagnosis of human immunodeficiency virus (HIV) infections, assessment of prognosis and follow-up. The purpose of this study was to compare and evaluate the performances of the conventional manual extraction method and the automated MagNA Pure system for the nucleic acid isolation step, which is the first and most important step in the molecular diagnosis of HIV infections. Plasma samples of 35 patients, in which anti-HIV antibodies were found positive by microparticle enzyme immunoassay and confirmed by immunoblotting, were included in the study. The nucleic acids obtained simultaneously by the manual isolation kit (Cobas Amplicor, HIV-1 Monitor Test, version 1.5, Roche Diagnostics) and the automated system (MagNA Pure LC Total Nucleic Acid Isolation Kit, Roche Diagnostics) were amplified and detected with the Cobas Amplicor (Roche Diagnostics) instrument. Twenty-three of 35 samples (65.7%) were found positive and 9 (25.7%) negative by both methods. The agreement between the methods was 91.4% for qualitative results. Viral RNA copies detected by the manual and MagNA Pure isolation methods were 76-7,590,000 (mean: 487,143) and 113-20,300,000 (mean: 2,174,097) copies/ml, respectively. When both the overall and the individual results were evaluated, the numbers of RNA copies obtained with the automated system were higher than those with the manual method (p<0.05). Three samples with low numbers of nucleic acids by MagNA Pure (113, 773 and 857 copies/ml, respectively) yielded negative results with the manual method. In conclusion, the automated MagNA Pure system was found to be a reliable, rapid and practical method for the isolation of HIV-RNA.
Zidan, Dalia W; Hassan, Wafaa S; Elmasry, Manal S; Shalaby, Abdalla A
2018-06-01
A simple and rapid micellar high-performance liquid chromatographic method with UV detection (HPLC-UV) was developed and validated for the simultaneous determination of sofosbuvir (SOF) and daclatasvir (DAC) in their dosage forms, human urine and human plasma. These drugs are co-administered for the treatment of hepatitis C virus (HCV) infection. HCV is the cause of hepatitis C and some cancers, such as liver cancer (hepatocellular carcinoma) and lymphomas, in humans. Separation and quantitation were carried out on an Onyx™ C8 monolithic analytical column (100 × 4.6 mm i.d.) maintained at 25 °C. The mobile phase consisted of 0.1 M sodium dodecyl sulfate (SDS) solution containing 20% (V/V) n-propanol and 0.3% (V/V) triethylamine, with the pH adjusted to 6.5 using 0.02 M phosphoric acid. The retention times of SOF and DAC were 4.8 min and 6.5 min, respectively. Measurements were made at a flow rate of 0.5 mL/min with an injection volume of 20 μL and ultraviolet (UV) detection at 226 nm. Linearity of SOF and DAC was obtained over concentration ranges of 50-400 and 40-400 ng/mL, respectively, in pure form, 60-300 and 50-300 ng/mL, respectively, in human plasma, and 50-400 and 40-400 ng/mL, respectively, in human urine, with correlation coefficients >0.999. The proposed method demonstrated excellent intra- and inter-day precision and accuracy. The suggested method was applied to the determination of the drugs in pure form, dosage forms, real human plasma and real human urine, and to the drug-dissolution testing of their tablets. The obtained results were statistically compared with those of a reported method, and no significant differences were found. Copyright © 2018 Elsevier B.V. All rights reserved.
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and by attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
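A sketch of the joint two-parameter estimation for a tapered Pareto, assuming the standard parameterization with survival S(x) = (t/x)^beta * exp((t - x)/theta) (threshold t, scaling exponent beta, corner size theta), which may differ in detail from the authors' formulation; the synthetic catalog exploits the fact that this law is the distribution of the minimum of a pure Pareto and a shifted exponential.

```python
import numpy as np
from scipy.optimize import minimize

def tapered_pareto_nll(params, x, t):
    """Negative log-likelihood for a tapered Pareto with threshold t,
    survival S(x) = (t/x)**beta * exp((t - x)/theta) and density
    f(x) = (beta/x + 1/theta) * (t/x)**beta * exp((t - x)/theta)."""
    beta, theta = params
    if beta <= 0.0 or theta <= 0.0:
        return np.inf
    logf = (np.log(beta / x + 1.0 / theta)
            + beta * np.log(t / x) + (t - x) / theta)
    return -logf.sum()

# synthetic catalog: minimum of a pure Pareto and a shifted exponential
rng = np.random.default_rng(3)
t = 1.0
x = np.minimum(t * (1.0 + rng.pareto(1.0, 500)),
               t + rng.exponential(50.0, 500))
fit = minimize(tapered_pareto_nll, x0=[1.0, 10.0], args=(x, t),
               method="Nelder-Mead")
print("beta, theta =", np.round(fit.x, 3))
```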
NASA Astrophysics Data System (ADS)
Xu, Ronghua; Wong, Wing-Keung; Chen, Guanrong; Huang, Shuo
2017-02-01
In this paper, we analyze the relationship among stock networks by focusing on the statistically reliable connectivity between financial time series, which accurately reflects the underlying pure stock structure. To do so, we firstly filter out the effect of market index on the correlations between paired stocks, and then take a t-test based P-threshold approach to lessening the complexity of the stock network based on the P values. We demonstrate the superiority of its performance in understanding network complexity by examining the Hong Kong stock market. By comparing with other filtering methods, we find that the P-threshold approach extracts purely and significantly correlated stock pairs, which reflect the well-defined hierarchical structure of the market. In analyzing the dynamic stock networks with fixed-size moving windows, our results show that three global financial crises, covered by the long-range time series, can be distinguishingly indicated from the network topological and evolutionary perspectives. In addition, we find that the assortativity coefficient can manifest the financial crises and therefore can serve as a good indicator of the financial market development.
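A minimal sketch of the filtering idea, assuming a one-factor regression removes the market mode and a two-sided t-test supplies the P values; the returns are simulated and the function name is ours, not from the paper.

```python
import numpy as np
from scipy import stats

def p_threshold_network(R, market, alpha=0.01):
    """Regress each stock's returns on the market index, correlate the
    residuals ('pure' correlations with the market mode removed), and
    keep only pairs significant at level alpha under a two-sided t-test
    with n - 2 degrees of freedom. Returns a boolean adjacency matrix."""
    n_obs, n_stocks = R.shape
    beta = (R.T @ market) / (market @ market)       # one-factor loadings
    resid = R - np.outer(market, beta)
    C = np.corrcoef(resid, rowvar=False)
    tstat = C * np.sqrt((n_obs - 2) / (1.0 - C**2 + 1e-12))
    pvals = 2.0 * stats.t.sf(np.abs(tstat), df=n_obs - 2)
    return (pvals < alpha) & ~np.eye(n_stocks, dtype=bool)

# toy returns: 30 stocks, 250 days, strong common market mode
rng = np.random.default_rng(4)
market = rng.normal(size=250)
R = 0.8 * market[:, None] + rng.normal(size=(250, 30))
print("edges kept:", p_threshold_network(R, market).sum() // 2)
```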
Sikirzhytski, Vitali; Sikirzhytskaya, Aliaksandra; Lednev, Igor K
2012-10-10
Conventional confirmatory biochemical tests used in the forensic analysis of body fluid traces found at a crime scene are destructive and not universal. Recently, we reported on the application of near-infrared (NIR) Raman microspectroscopy for non-destructive confirmatory identification of pure blood, saliva, semen, vaginal fluid and sweat. Here we expand the method to include dry mixtures of semen and blood. A classification algorithm was developed for differentiating pure body fluids and their mixtures. The classification methodology is based on an effective combination of Support Vector Machine (SVM) regression (data selection) and SVM Discriminant Analysis of preprocessed experimental Raman spectra collected using an automatic mapping of the sample. This extensive cross-validation of the obtained results demonstrated that the detection limit of the minor contributor is as low as a few percent. The developed methodology can be further expanded to any binary mixture of complex solutions, including but not limited to mixtures of other body fluids. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
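Schematically, the discriminant step can be prototyped with a standard SVM pipeline; the spectra below are random surrogates (not Raman data), and the SVM-regression data-selection stage of the actual method is omitted for brevity.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# surrogate "spectra": rows are samples, columns are wavenumber channels
rng = np.random.default_rng(5)
X_blood = rng.normal(0.0, 1.0, (40, 500))
X_semen = rng.normal(0.5, 1.0, (40, 500))
X_mix = 0.5 * (X_blood + X_semen) + rng.normal(0.0, 0.2, (40, 500))
X = np.vstack([X_blood, X_semen, X_mix])
y = np.repeat(["blood", "semen", "mixture"], 40)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```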
Uniting Statistical and Individual-Based Approaches for Animal Movement Modelling
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to practically impossible field work to access internal states and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method, which combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster for parameterization than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM model was validated using both k-fold cross-validation and emergent patterns validation and was tested for two different scenarios, with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems. PMID:24979047
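A minimal sketch of the Step Selection Function parameterization mentioned above: a conditional-logit likelihood compares each observed step (row 0 of a stratum) against matched random "available" steps. The data, the single covariate, and the function name are all simulated/ours, not the caribou analysis itself.

```python
import numpy as np
from scipy.optimize import minimize

def ssf_nll(beta, strata):
    """Conditional-logit negative log-likelihood for a Step Selection
    Function: each stratum pairs one observed step (row 0) with matched
    random available steps; selection strength is exp(X @ beta)."""
    nll = 0.0
    for X in strata:
        s = X @ beta
        m = s.max()
        nll -= s[0] - (m + np.log(np.exp(s - m).sum()))  # log-sum-exp
    return nll

# simulate 200 strata with one habitat covariate and 10 available steps
rng = np.random.default_rng(6)
beta_true = np.array([1.5])
strata = []
for _ in range(200):
    X = rng.normal(size=(11, 1))
    w = np.exp(X @ beta_true).ravel()
    chosen = rng.choice(11, p=w / w.sum())
    X[[0, chosen]] = X[[chosen, 0]]        # put the selected step in row 0
    strata.append(X)

fit = minimize(ssf_nll, x0=np.zeros(1), args=(strata,))
print("estimated selection coefficient:", fit.x)
```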
Garrido-López, Alvaro; Esquiu, Vanesa; Tena, María Teresa
2006-08-18
A pressurized fluid extraction (PFE) and gas chromatography-flame ionization detection (GC-FID) method is proposed to determine slip agents in polyethylene (PE) films. The study of the PFE variables was performed using a fractional factorial design (FFD) for screening and a central composite design (CCD) for optimizing the main variables obtained from the Pareto charts. The variables studied include temperature, static time, percentage of cyclohexane and the number of extraction cycles. The final conditions selected were pure isopropanol (two extraction cycles) at 105 °C for 16 min. The recovery of spiked oleamide and erucamide was around 100%. The repeatability of the method, expressed as relative standard deviation, was 9.6% for oleamide and 8% for erucamide. Finally, the method was applied to determine oleamide and erucamide in several polyethylene films, and the results were statistically equal to those obtained by pyrolysis and gas-phase chemiluminescence (CL).
Disposable screen-printed sensors for determination of duloxetine hydrochloride
2012-01-01
A screen-printed disposable electrode system for the determination of duloxetine hydrochloride (DL) was developed using screen-printing technology. The homemade printing formulation was characterized and optimized with respect to the effects of the modifier and plasticizers. The fabricated bi-electrode potentiometric strip, containing both working and reference electrodes, was used as a duloxetine hydrochloride sensor. The proposed sensors worked satisfactorily in the concentration range from 1.0 × 10⁻⁶ to 1.0 × 10⁻² mol L⁻¹, with a detection limit reaching 5.0 × 10⁻⁷ mol L⁻¹ and an adequate shelf life of 6 months. The method is accurate, precise and economical. The proposed method has been applied successfully to the analysis of the drug in pure form and in its dosage forms. In this method, there is no interference from any common pharmaceutical additives and diluents. Results of the analysis were validated statistically by recovery studies. PMID:22264225
NASA Astrophysics Data System (ADS)
Mączka, M.; Hermanowicz, K.; Pietraszko, A.; Yordanova, A.; Koseva, I.
2014-01-01
Pure and Cr³⁺-doped nanosized Al₂₋ₓScₓ(WO₄)₃ solid solutions were prepared by a co-precipitation method, and Al₂₋ₓScₓ(WO₄)₃ single crystals were grown by a high-temperature flux method. The obtained samples were characterized by X-ray, Raman, IR, absorption and luminescence methods. Single-crystal X-ray diffraction showed that AlSc(WO₄)₃ is orthorhombic at room temperature with space group Pnca and that the trivalent cations are statistically distributed. Raman and IR studies showed that the Al₂₋ₓScₓ(WO₄)₃ solid solutions exhibit "two-mode" behavior. They also showed that the vibrational properties of the nanosized samples are only weakly modified in comparison with the bulk materials. The luminescence and absorption spectra revealed that chromium ions occupy two sites, of weak and strong crystal field strength.
NASA Technical Reports Server (NTRS)
Farmer, F. H.; Jarrett, O., Jr.; Brown, C. A., Jr.
1983-01-01
The concentration and composition of phytoplankton populations are measured by an optical method which can be used either in situ or remotely. This method is based upon the in vivo light absorption characteristics of phytoplankton. To provide a data base for testing assumptions relative to the proposed method, visible absorbance spectra of pure cultures of 20 marine phytoplankton were obtained under laboratory conditions. Descriptive and analytical statistics were computed for the absorbance spectra and were used to make comparisons between members of major taxonomic groups and between groups. Spectral variation between the members of the major taxonomic groups was observed to be considerably less than the spectral variation between these groups. In several cases the differences between the mean absorbance spectra of major taxonomic groups are significant enough to be detected with passive remote sensing techniques.
NASA Astrophysics Data System (ADS)
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated how tumor-targeted nanoparticles influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area reaches such a critical state that it can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, will be remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which we can accurately probe the photon trajectories within the simulation area.
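The photon-transport part of such a simulation reduces, in a deliberately crude 1-D version, to exponential free paths, an absorption/scattering coin flip, and Henyey-Greenstein sampling of the scattering cosine with asymmetry factor g; all optical coefficients below are invented, and a production code would track full 3-D directions.

```python
import numpy as np

def sample_hg_cos(g, rng):
    """Cosine of the scattering angle drawn from the Henyey-Greenstein
    phase function with asymmetry factor g."""
    if abs(g) < 1e-6:
        return rng.uniform(-1.0, 1.0)
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def absorbed_fraction(mu_a, mu_s, g, depth, n_photons=20_000, seed=0):
    """Toy 1-D slab Monte Carlo: free paths are exponential with total
    attenuation mu_t = mu_a + mu_s; each interaction is an absorption
    (local heat deposition) with probability mu_a/mu_t, otherwise a
    scattering event that tilts the direction cosine."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    absorbed = 0
    for _ in range(n_photons):
        z, cz = 0.0, 1.0                   # depth coordinate, direction
        for _ in range(1000):              # safety bound on interactions
            z += cz * rng.exponential(1.0 / mu_t)
            if z < 0.0 or z > depth:       # photon escapes the slab
                break
            if rng.random() < mu_a / mu_t:
                absorbed += 1              # absorbed -> local heating
                break
            cz *= sample_hg_cos(g, rng)    # scattered (1-D projection)
    return absorbed / n_photons

print("absorbed fraction:", absorbed_fraction(mu_a=0.5, mu_s=10.0, g=0.9, depth=1.0))
```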
Advanced quantitative measurement methodology in physics education research
NASA Astrophysics Data System (ADS)
Wang, Jing
The ultimate goal of physics education research (PER) is to develop a theoretical framework to understand and improve the learning process. In this journey of discovery, assessment serves as our headlamp and alpenstock. It sometimes detects signals in student mental structures, and sometimes reveals the difference between expert understanding and novice understanding. Quantitative assessment is an important area in PER. Developing research-based, effective assessment instruments and making meaningful inferences based on these instruments have always been important goals of the PER community. Quantitative studies are often conducted to provide bases for test development and result interpretation. Statistics are frequently used in quantitative studies. The selection of statistical methods, and the interpretation of the results obtained by these methods, should be connected to the educational background. In this connecting process, issues of educational models are often raised. Many widely used statistical methods make no assumptions about the mental structure of subjects, nor do they provide explanations tailored to the educational audience. There are also methods that consider the mental structure and are tailored to provide strong connections between statistics and education. These methods often involve model assumptions and parameter estimation, and are mathematically complicated. The dissertation provides a practical view of some advanced quantitative assessment methods. The common feature of these methods is that they all make educational/psychological model assumptions beyond the minimum mathematical model. The purpose of the study is to provide a comparison between these advanced methods and the purely mathematical methods. The comparison is based on the performance of the two types of methods in physics education settings. In particular, the comparison uses both physics content assessments and scientific ability assessments. The dissertation includes three parts. The first part involves the comparison between item response theory (IRT) and classical test theory (CTT). The two theories both provide test item statistics for educational inferences and decisions. Both theories are applied to Force Concept Inventory data obtained from students enrolled at The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory. The IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation is on measures of association for binary data. In quantitative assessment, binary data are often encountered because of their simplicity. The currently popular measures of association fail under some extremely unbalanced conditions, yet the occurrence of these conditions is not rare in educational data. Two popular association measures, Pearson's correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure, with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages, and special attention to the test and data conditions is necessary. The last part of the dissertation focuses on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed. Typical misunderstandings and misuses of EFA are explored. EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment of scientific reasoning skills. The reasoning ability structures for U.S. and Chinese students at different educational levels are given by the analysis. A final discussion of the advanced quantitative assessment methodology and the purely mathematical methodology is presented at the end.
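As a concrete point of contrast with IRT's model-based item parameters, the CTT item statistics discussed in the first part can be computed directly from a binary score matrix. A minimal sketch (names and layout are assumed for illustration):

    import numpy as np

    def ctt_item_stats(responses):
        """Classical test theory item statistics for a 0/1 response matrix.
        responses: (n_students, n_items), 1 = correct.
        Returns item difficulty (proportion correct) and discrimination
        (corrected point-biserial: correlation with the rest-of-test score)."""
        responses = np.asarray(responses, dtype=float)
        n_students, n_items = responses.shape
        difficulty = responses.mean(axis=0)
        total = responses.sum(axis=1)
        discrimination = np.array([
            np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
            for j in range(n_items)
        ])
        return difficulty, discrimination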
Khan, Muhammad Naeem; Jan, Muhammad Rasul; Shah, Jasmin; Lee, Sang Hak
2014-05-01
A simple and sensitive chemiluminescence (CL) method was developed for the determination of citalopram in pharmaceutical preparations and human plasma. The method is based on the enhancement of the weak CL signal of the luminol-H2O2 system. It was found that the CL signal arising from the reaction between alkaline luminol and H2O2 was greatly increased by the addition of silver nanoparticles in the presence of citalopram. Prepared silver nanoparticles (AgNPs) were characterized by UV-visible spectroscopy and transmission electron microscopy (TEM). Various experimental parameters affecting CL intensity were studied and optimized for the determination of citalopram. Under optimized experimental conditions, CL intensity was found to be proportional to the concentration of citalopram in the range 40-2500 ng/mL, with a correlation coefficient of 0.9997. The limit of detection (LOD) and limit of quantification (LOQ) of the devised method were 3.78 and 12.62 ng/mL, respectively. Furthermore, the developed method was found to have excellent reproducibility with a relative standard deviation (RSD) of 3.65% (n = 7). Potential interference by common excipients was also studied. The method was validated statistically using recovery studies and was successfully applied to the determination of citalopram in the pure form, in pharmaceutical preparations and in spiked human plasma samples. Percentage recoveries were found to range from 97.71 to 101.99% for the pure form, from 97.84 to 102.78% for pharmaceutical preparations and from 95.65 to 100.35% for spiked human plasma. Copyright © 2013 John Wiley & Sons, Ltd.
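For readers unfamiliar with how such limits are obtained, a common recipe (the abstract does not state the exact formula used, so this is an assumption) takes LOD = 3.3 s/m and LOQ = 10 s/m, with m the calibration slope and s the residual standard deviation of the calibration line. A minimal sketch with made-up calibration points:

    import numpy as np

    def lod_loq(conc, signal):
        """LOD and LOQ from a linear calibration: LOD = 3.3*s/m, LOQ = 10*s/m."""
        conc, signal = np.asarray(conc, float), np.asarray(signal, float)
        m, b = np.polyfit(conc, signal, 1)               # slope, intercept
        resid = signal - (m * conc + b)
        s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual std. dev.
        return 3.3 * s / m, 10.0 * s / m

    # Hypothetical calibration points (ng/mL vs CL intensity), illustration only
    lod, loq = lod_loq([40, 100, 500, 1000, 2500], [12, 30, 149, 301, 748])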
The cosmological analysis of X-ray cluster surveys - I. A new method for interpreting number counts
NASA Astrophysics Data System (ADS)
Clerc, N.; Pierre, M.; Pacaud, F.; Sadibekova, T.
2012-07-01
We present a new method aimed at simplifying the cosmological analysis of X-ray cluster surveys. It is based on purely instrumental observable quantities considered in a two-dimensional X-ray colour-magnitude diagram (hardness ratio versus count rate). The basic principle is that, even in rather shallow surveys, substantial information on cluster redshift and temperature is present in the raw X-ray data and can be statistically extracted; in parallel, such diagrams can be readily predicted from an ab initio cosmological modelling. We illustrate the methodology for the case of a 100-deg² XMM survey having a sensitivity of ~10⁻¹⁴ erg s⁻¹ cm⁻² and fit, at the same time, the survey selection function, the cluster evolutionary scaling relations and the cosmology; our sole assumption - driven by the limited size of the sample considered in the case study - is that the local cluster scaling relations are known. We devote special attention to the realistic modelling of the count-rate measurement uncertainties and evaluate the potential of the method via a Fisher analysis. In the absence of individual cluster redshifts, the count rate and hardness ratio (CR-HR) method appears to be much more efficient than the traditional approach based on cluster counts (i.e. dn/dz, requiring redshifts). In the case where redshifts are available, our method performs similarly to the traditional mass function (dn/dM/dz) for the purely cosmological parameters, but better constrains the parameters defining the cluster scaling relations and their evolution. A further practical advantage of the CR-HR method is its simplicity: this fully top-down approach completely bypasses the tedious steps of deriving cluster masses from X-ray temperature measurements.
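For context, a Fisher analysis of the kind mentioned above forecasts parameter uncertainties from the sensitivity of the binned observables to the parameters. A generic numerical sketch (the model function, step sizes and Gaussian-error assumption are the editor's, not the paper's implementation):

    import numpy as np

    def fisher_forecast(model, theta0, sigma, eps=1e-4):
        """Gaussian Fisher matrix for binned observables.
        model(theta) -> predicted counts per (count rate, hardness ratio) bin
        theta0       -> fiducial parameter vector
        sigma        -> per-bin measurement uncertainties
        F_ij = sum_b (dmu_b/dtheta_i)(dmu_b/dtheta_j) / sigma_b^2"""
        theta0 = np.asarray(theta0, float)
        derivs = []
        for i in range(len(theta0)):
            dt = np.zeros_like(theta0)
            dt[i] = eps * max(abs(theta0[i]), 1.0)       # two-sided derivative
            derivs.append((model(theta0 + dt) - model(theta0 - dt)) / (2 * dt[i]))
        D = np.array(derivs)                             # (n_params, n_bins)
        F = (D / sigma**2) @ D.T
        return F, np.linalg.inv(F)    # inverse Fisher = forecast covariance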
NASA Astrophysics Data System (ADS)
Marschall, R.; Mottola, S.; Su, C. C.; Liao, Y.; Rubin, M.; Wu, J. S.; Thomas, N.; Altwegg, K.; Sierks, H.; Ip, W.-H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Lai, I. L.; Skorov, Y.; Jorda, L.; Preusker, F.; Scholten, F.; Vincent, J.-B.; Osiris Team; Rosina Team
2017-09-01
Context. This paper describes the modelling of gas and dust data acquired in the period August to October 2014 from the European Space Agency's Rosetta spacecraft when it was in close proximity to the nucleus of comet 67P/Churyumov-Gerasimenko. Aims: With our 3D gas and dust comae models, this work attempts to test the hypothesis that cliff activity on comet 67P/Churyumov-Gerasimenko can solely account for the local gas density data observed by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) and the dust brightnesses seen by the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) in the considered time span. Methods: The model uses a previously developed shape model of the nucleus. From this, the water sublimation rates and gas temperatures at the surface are computed. The gas expansion is modelled with a 3D Direct Simulation Monte Carlo algorithm. A dust drag algorithm is then used to compute dust volume number densities in the coma, which are then converted to brightnesses using Mie theory and a line-of-sight integration. Furthermore, we have studied the impact of topographic re-radiation on the models. Results: We show that gas activity from only cliff areas produces a fit to the ROSINA/COPS data that is as statistically good as a purely insolation-driven model. In contrast, pure cliff activity does not reproduce the dust brightness observed by OSIRIS and can thus be ruled out. On the other hand, gas activity from the Hapi region in addition to cliff activity produces a statistically better fit to the ROSINA/COPS data than purely insolation-driven outgassing and also fits the OSIRIS observations rather well. We found that topographic re-radiation does not contribute significantly to the sublimation behaviour of H2O but plays an important role in how the gas flux interacts with the irregular shape of the nucleus. Conclusions: We demonstrate that fits to the observations are non-unique. We can conclude, however, that gas and dust activity from cliffs and the Hapi region are consistent with the ROSINA/COPS and OSIRIS data sets for the considered time span and are thus a plausible solution. Models with activity from low gravitational slopes alone provide a statistically inferior solution.
3D automatic anatomy recognition based on iterative graph-cut-ASM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.
2010-02-01
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies combining purely image-based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM, which attempted to synergistically combine ASM and GC, was presented at this symposium last year for object delineation in 2D images. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.
Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F
2017-08-01
Comparability of topographical data of implant surfaces in the literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Depending on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches for the topographical evaluation of implant surfaces and further seeking appropriate experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
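For reference, the height-based parameters reported above have simple definitions once the height map has been filtered at the chosen cut-off. A minimal sketch (assumed ISO 25178/16610-style definitions and an editor's function name, not the instruments' software):

    import numpy as np
    from scipy import ndimage

    def areal_roughness(z, pixel_um, cutoff_um=50.0):
        """Sa, Sz, Ssk, Sku from a 2D height map after Gaussian high-pass
        filtering at the cut-off wavelength. z in um; pixel_um = pixel size."""
        alpha = np.sqrt(np.log(2) / np.pi)       # ISO 16610 Gaussian constant
        sigma_px = alpha * cutoff_um / (np.sqrt(2 * np.pi) * pixel_um)
        r = z - ndimage.gaussian_filter(z, sigma_px)     # remove waviness
        r = r - r.mean()
        sq = np.sqrt(np.mean(r**2))                      # RMS height
        return {
            "Sa": np.mean(np.abs(r)),                    # arithmetic mean height
            "Sz": r.max() - r.min(),                     # maximum height range
            "Ssk": np.mean(r**3) / sq**3,                # skewness
            "Sku": np.mean(r**4) / sq**4,                # kurtosis
        }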
Mueller, Sherry A; Anderson, James E; Kim, Byung R; Ball, James C
2009-04-01
Effective bacterial control in cooling-tower systems requires accurate and timely methods to count bacteria. Plate-count methods are difficult to implement on-site, because they are time- and labor-intensive and require sterile techniques. Several field-applicable methods (dipslides, Petrifilm, and adenosine triphosphate [ATP] bioluminescence) were compared with the plate count for two sample matrices: phosphate-buffered saline solution containing a pure culture of Pseudomonas fluorescens, and cooling-tower water containing an undefined mixed bacterial culture. For the pure culture, (1) counts determined on nutrient agar and plate-count agar (PCA) media and expressed as colony-forming units (CFU) per milliliter were equivalent to those on R2A medium (p = 1.0 and p = 1.0, respectively); (2) Petrifilm counts were not significantly different from R2A plate counts (p = 0.99); (3) the dipslide counts were up to 2 log units higher than R2A plate counts, but this discrepancy was not statistically significant (p = 0.06); and (4) a discernible correlation (r² = 0.67) existed between ATP readings and plate counts. For cooling-tower water samples (n = 62), (1) bacterial counts using R2A medium were higher (but not significantly; p = 0.63) than nutrient agar and significantly higher than tryptone-glucose yeast extract (TGE; p = 0.03) and PCA (p < 0.001); (2) Petrifilm counts were significantly lower than nutrient agar or R2A (p = 0.02 and p < 0.001, respectively), but not statistically different from TGE, PCA, and dipslides (p = 0.55, p = 0.69, and p = 0.91, respectively); (3) the dipslide method yielded bacteria counts 1 to 3 log units lower than nutrient agar and R2A (p < 0.001), but was not significantly different from Petrifilm (p = 0.91), PCA (p = 1.00) or TGE (p = 0.07); (4) the differences between dipslides and the other methods became greater with a 6-day incubation time; and (5) the correlation between ATP readings and plate counts varied from system to system, was poor (r² values ranged from < 0.01 to 0.47), and the ATP method was not sufficiently sensitive to measure counts below approximately 10⁴ CFU/mL.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder produced one fewer decoding bit error.
Response Surface Analysis of Experiments with Random Blocks
1988-09-01
partitioned into a lack-of-fit sum of squares, SS_LOF, and a pure error sum of squares, SS_PE. The latter is obtained by pooling the pure error sums of squares ... from the blocks. Tests concerning the polynomial effects can then proceed using SS_PE as the error term in the denominators of the F test statistics. ... the center point in each of the three blocks is equal to SS_PE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SS_LOF ...
Recovering the triple coincidence of non-pure positron emitters in preclinical PET
NASA Astrophysics Data System (ADS)
Lin, Hsin-Hon; Chuang, Keh-Shih; Chen, Szu-Yu; Jan, Meei-Ling
2016-03-01
Non-pure positron emitters, with their long half-lives, allow for the tracing of slow biochemical processes which cannot be adequately examined with the commonly used short-lived positron emitters. Most of these isotopes emit high-energy cascade gamma rays in addition to the positron decay; these gammas can be detected and create a triple coincidence with the annihilation photons. Triple coincidences are discarded in most scanners; however, the majority of them contain true photon pairs that can be recovered. In this study, we propose a strategy for recovering triple coincidence events to raise the sensitivity of PET imaging for non-pure positron emitters. To identify the true line of response (LOR) within a triple coincidence, a framework utilizing geometrical, energy and temporal information is proposed. The geometrical criterion is based on the assumption that the LOR with the largest radial offset among the three sub-pairs of a triple coincidence is least likely to be a true LOR. A confidence time window is then used to test the valid LORs among those within the triple coincidence. Finally, a likelihood-ratio discriminant rule based on the energy probability density distributions of cascade and annihilation gammas is established to identify the true LOR. An Inveon preclinical PET scanner was modeled with the GATE (GEANT4 Application for Tomographic Emission) Monte Carlo software. We evaluated the performance of the proposed method in terms of identification fraction, noise equivalent count rate (NECR), and image quality on various phantoms. With the inclusion of triple coincidence events using the proposed method, the NECR was found to increase from 11% to 26% and from 19% to 29% for I-124 and Br-76, respectively, when 7.4-185 MBq of activity was used. Compared to the images reconstructed using double coincidences alone, this technique increased the SNR by 5.1-7.3% for I-124 and 9.3-10.3% for Br-76 within the activity range of 9.25-74 MBq, without compromising spatial resolution or contrast. We conclude that the proposed method can improve the counting statistics of PET imaging for non-pure positron emitters and is ready to be implemented on current PET systems. Parts of this work were presented at the 2012 Annual Congress of the European Association of Nuclear Medicine.
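The final likelihood-ratio step can be illustrated with a stripped-down sketch that uses only the energy information (the full framework also applies the geometrical and temporal criteria first; the Gaussian energy models below are assumed stand-ins, not the calibrated probability densities):

    import numpy as np
    from itertools import combinations

    def pick_true_lor(energies, pdf_annih, pdf_cascade):
        """Among the three sub-pairs of a triple coincidence, pick the pair
        (i, j) most likely to be the two annihilation photons, with the
        remaining single attributed to the cascade gamma."""
        best, best_ll = None, -np.inf
        for i, j in combinations(range(3), 2):
            k = 3 - i - j                       # index of the left-out single
            ll = (np.log(pdf_annih(energies[i])) + np.log(pdf_annih(energies[j]))
                  + np.log(pdf_cascade(energies[k])))
            if ll > best_ll:
                best, best_ll = (i, j), ll
        return best

    def gauss(mu, s):
        return lambda e: np.exp(-(e - mu)**2 / (2*s**2)) / (s * np.sqrt(2*np.pi))

    # Illustrative energies [keV] and assumed energy models
    pair = pick_true_lor([505.0, 520.0, 640.0], gauss(511, 25), gauss(640, 60))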
NASA Astrophysics Data System (ADS)
Lyons, M.; Siegel, Edward Carl-Ludwig
2011-03-01
Weiss-Page-Holthaus [Physica A, 341, 586 (2004); http://arxiv.org/abs/cond-mat/0403295] number-FACTORIZATION via BEQS BEC vs.(?) Shor-algorithm, strongly supporting Watkins' [www.secamlocal.ex.ac.uk/people/staff/mrwatkin/] intersection of number-theory "pure" maths WITH (statistical) physics, as Siegel [AMS Joint Mtg. (02), Abs. 973-60-124] Benford logarithmic-law algebraic INVERSION to ONLY BEQS with d=0 digit P(d=0) >= oo-gap FULL BEC!!! Siegel Riemann-hypothesis proof via Rayleigh [Phil. Trans. CLXI (1870)] - Polya [Math. Ann. (21)] - [Random-Walks/Electric-Nets., MAA (81)] - Anderson [PRL (58)] localization - Siegel [Symp. Fractals, MRS Fall Mtg. (89), 5 papers!!!] FUZZYICS=CATEGORYICS: [LOCALITY]-morphism/crossover/AUT-MATH-CAT/DIM-CAT/antonym -> (GLOBALITY) functor/synonym; concomitance to noise = Fluct.-Dissip. theorem/functor/synonym/equivalence/proportionality => generalized-susceptibility power-spectrum [FLAT/FUNCTIONLESS/WHITE]-morphism/crossover/AUT-MATH-CAT/DIM-CAT/antonym -> HYPERBOLICITY/Zipf-law INEVITABILITY) intersection with ONLY BEQS BEC).
Thermal and Driven Stochastic Growth of Langmuir Waves in the Solar Wind and Earth's Foreshock
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.; Anderson, R. R.
2000-01-01
Statistical distributions of Langmuir wave fields in the solar wind and at the edge of Earth's foreshock are analyzed and compared with predictions of stochastic growth theory (SGT). SGT quantitatively explains the solar wind, edge, and deep foreshock data as pure thermal waves, driven thermal waves subject to net linear growth and stochastic effects, and waves in a pure SGT state, respectively, plus radiation near the plasma frequency f_p. These changes are interpreted in terms of spatial variations in the beam instability's growth rate and evolution toward a pure SGT state. SGT analyses of field distributions are shown to provide a viable alternative to thermal noise spectroscopy for wave instruments with coarse frequency resolution, and to separate f_p radiation from Langmuir waves.
Assessment of semen quality in pure and crossbred Jersey bulls
Kumar, Umesh; Gawande, Ajay P.; Sahatpure, Sunil K.; Patil, Manoj S.; Lakde, Chetan K.; Bonde, Sachin W.; Borkar, Pradnyankur L.; Poharkar, Ajay J.; Ramteke, Baldeo R.
2015-01-01
Aim: To compare the seminal attributes of neat, pre-freeze (at equilibration), and post-freeze (24 h after freezing) semen in pure and crossbred Jersey bulls. Materials and Methods: A total of 36 ejaculates (3 ejaculates from each bull) were collected from 6 pure Jersey and 6 crossbred Jersey bulls and evaluated for various seminal attributes at the neat, pre-freeze, and post-freeze stages. Results: The mean (±standard error [SE]) values of neat semen characteristics in pure and crossbred Jersey bulls were recorded, including volume (mL), color, consistency, mass activity (scale: 0-5), and sperm concentration (millions/mL). The extended semen was further investigated at the pre-freeze and post-freeze stages, and the mean (±SE) values recorded for neat, pre-freeze, and post-freeze semen were compared between pure and crossbred Jersey bulls: sperm motility (80.55±1.70%, 62.77±1.35%, 46.11±1.43% vs. 80.00±1.80%, 65.00±1.66%, 47.22±1.08%), live sperm count (83.63±1.08%, 71.72±1.09%, 58.67±1.02% vs. 80.00±1.08%, 67.91±1.20%, 51.63±0.97%), total abnormal sperm count (8.38±0.32%, 12.30±0.39%, 16.75±0.42% vs. 9.00±0.45%, 12.19±0.48%, 18.11±0.64%), hypo-osmotic swelling (HOS) reacted spermatozoa (71.88±0.77%, 62.05±0.80%, 47.27±1.05% vs. 72.77±1.02%, 62.11±0.89%, 45.94±1.33%), acrosome integrity (89.05±0.83%, 81.33±0.71%, 71.94±0.86% vs. 86.55±0.57%, 78.66±0.42%, 69.38±0.53%), and DNA integrity (99.88±0.07%, 100, 99.66±0.11% vs. 99.94±0.05%, 100, 99.44±0.18%). The volume, color, consistency, sperm concentration, and initial motility in pure and crossbred Jersey bulls did not differ significantly (p>0.05). The mass activity was significantly (p<0.05) higher in pure Jersey as compared with crossbred Jersey bulls. Live sperm percentage and acrosome integrity were significantly (p<0.01) higher in pure Jersey bulls as compared with crossbred Jersey bulls. However, no statistical difference (p>0.05) was observed in abnormal sperm count, HOS-reacted spermatozoa, or DNA integrity percentage between the breeds. Conclusion: It may be concluded that the quality of pure Jersey bull semen was comparatively better than that of the crossbred Jersey bulls. PMID:27047028
NASA Astrophysics Data System (ADS)
Kapul, A. A.; Zubova, E. I.; Torgaev, S. N.; Drobchik, V. V.
2017-08-01
The research focuses on the design of a pure-tone audiometer. The relevance of the study follows from the high incidence of auditory-analyser disorders among older people and children. First, the article provides information about subjective and objective audiometry methods. Second, we present the block diagram and basic circuit arrangement of the device; we decided to base it on the STM32F407VG microcontroller, with a digital potentiometer serving as the attenuator. Third, we implemented the microcontroller-PC connection; the C programming language is used both for the microcontroller firmware and for the PC interface. Fourth, we built a pure-tone audiometer prototype. In the future, we will implement the objective ASSR method in addition to pure-tone audiometry.
Khan, Muhammad Naeem; Jan, Muhammad Rasul; Shah, Jasmin; Lee, Sang Hak; Kim, Young Ho
2013-01-01
A highly sensitive and simple method for identifying sulpiride in pharmaceutical formulations and biological fluids is presented. The method is based on increased chemiluminescence (CL) intensity of a luminol-H2O2 system in response to the addition of Cr (III) under alkaline conditions. The CL intensity of the luminol-H2O2-Cr (III) system was greatly enhanced by the addition of sulpiride, and the CL intensity was proportional to the concentration of sulpiride in a sample solution. Various parameters affecting the CL intensity were systematically investigated and optimized for the determination of sulpiride in a sample. Under the optimum conditions, the CL intensity was proportional to the concentration of sulpiride in the range of 0.068-4.0 µg/mL, with a good correlation coefficient of 0.997. The limit of detection (LOD) and limit of quantification (LOQ) were found to be 8.50 × 10⁻⁶ µg/mL and 2.83 × 10⁻⁵ µg/mL, respectively. The method presented here produced good reproducibility with a relative standard deviation (RSD) of 2.70% (n = 7). Common excipients and metal ions were studied for their interference effects. The method was validated statistically through recovery studies and successfully applied to the determination of sulpiride in pure form, pharmaceutical preparations and spiked human plasma samples. The percentage recoveries were found to range from 99.10 to 100.05% for the pure form, 98.12 to 100.18% for pharmaceutical preparations and 97.9 to 101.4% for spiked human plasma. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Issa, Y. M.; El-Hawary, W. F.; Youssef, A. F. A.; Senosy, A. R.
2010-04-01
Two simple and highly sensitive spectrophotometric methods were developed for the quantitative determination of the drug sildenafil citrate (SC), Viagra, in pure form and in pharmaceutical formulations, through ion-associate formation reactions (method A) with mono-chromotropic acid azo dyes, chromotrope 2B (I) and chromotrope 2R (II), and ion-pair reactions (method B) with bi-chromotropic acid azo dyes, 3-phenylazo-6-o-carboxyphenylazo-chromotropic acid (III), bis-3,6-(o-hydroxyphenylazo)-chromotropic acid (IV), bis-3,6-(p-N,N-dimethylphenylazo)-chromotropic acid (V) and 3-phenylazo-6-o-hydroxyphenylazo-chromotropic acid (VI). The reaction products, extractable in methylene chloride, were quantitatively measured at 540, 520, 540, 570, 600 and 575 nm using reagents I-VI, respectively. The reaction conditions were studied and optimized. Beer's law plots were linear in the concentration ranges 3.3-87.0, 3.3-96.0, 5.0-115.0, 2.5-125.0, 8.3-166.7 and 0.8-15.0 μg mL⁻¹, with corresponding molar absorptivities of 1.02 × 10⁴, 8.34 × 10³, 6.86 × 10³, 5.42 × 10³, 3.35 × 10³ and 2.32 × 10⁴ L mol⁻¹ cm⁻¹ using reagents I-VI, respectively. The limits of detection and Sandell's sensitivities were calculated. The methods were successfully applied to the analysis of commercial tablets (Vigoran), and the recovery study reveals that there is no interference from the common excipients present in tablets. Statistical comparison of the results was performed with regard to accuracy and precision using Student's t- and F-tests at the 95% confidence level. There is no significant difference between the reported and proposed methods with regard to accuracy and precision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiswenger, Toya N.; Gallagher, Neal B.; Myers, Tanya L.
The identification of minerals, including uranium-bearing minerals, is traditionally a labor-intensive process using x-ray diffraction (XRD), fluorescence, or other solid-phase and wet chemical techniques. While handheld XRD and fluorescence instruments can aid in field identification, handheld infrared reflectance spectrometers can also be used in industrial or field environments, with rapid, non-destructive identification possible via spectral analysis of the solid's reflectance spectrum. We have recently developed standard laboratory measurement methods for the infrared (IR) reflectance of solids and have investigated using these techniques for the identification of uranium-bearing minerals, using XRD methods for ground truth. Due to the rich colors of such species, including distinctive spectroscopic signatures in the infrared, identification is facile and specific, both for samples that are pure or are partially composed of uranium (e.g. boltwoodite, schoepite, tyuyamunite, carnotite, etc.) or non-uranium minerals. The method can be used to detect not only pure and partial minerals, but is quite sensitive to chemical change such as hydration (e.g. schoepite). We have further applied statistical methods, in particular classical least squares (CLS) and multivariate curve resolution (MCR), for discrimination of such uranium minerals and two pure uranium chemicals (U3O8 and UO2) against common background materials (e.g. silica sand, asphalt, calcite, K-feldspar), with good success. Each mineral contains unique infrared spectral features; some of the IR features are similar or common to entire classes of minerals, typically arising from similar chemical moieties or functional groups in the minerals: phosphates, sulfates, carbonates, etc. These characteristic infrared bands generate the unique (or class-specific) bands that distinguish the mineral from the interferents or backgrounds. We have observed several cases where the chemical moieties that provide the spectral discrimination in the longwave IR do so by generating upward-going reststrahlen bands in the reflectance data, but the same minerals have other weaker (overtone) bands, sometimes from the same chemical groups, that are manifest as downward-going transmission-type features in the midwave and shortwave infrared.
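Classical least squares, one of the two statistical methods named above, models a measured spectrum as a linear combination of library endmember spectra. A minimal sketch (the editor's illustration; the study's actual pipeline is not given in this abstract):

    import numpy as np

    def cls_unmix(mixture, library):
        """Fit a measured reflectance spectrum as a linear mixture of
        pure-phase library spectra; returns abundances and the residual.
        mixture: (n_wavelengths,); library: (n_endmembers, n_wavelengths)."""
        A = np.asarray(library, float).T       # (n_wavelengths, n_endmembers)
        coeffs, *_ = np.linalg.lstsq(A, mixture, rcond=None)
        # For physically meaningful (non-negative) abundances,
        # scipy.optimize.nnls(A, mixture) can replace lstsq here.
        return coeffs, mixture - A @ coeffs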
Verma, Dharmendra; Kapadia, Asha; Adler, Douglas G
2007-08-01
Endoscopic biliary sphincterotomy (ES) can cause bleeding, pancreatitis, and perforation. This has, in part, been attributed to the type of electrosurgical current used for ES. No consensus exists on the optimal type of electrosurgical current for ES to maximize safety. To compare the rates of complications in patients undergoing ES via pure current versus mixed current. A systematic review of published, prospective, randomized trials that compared pure current with mixed current for ES. Patients undergoing ES, with random assignment to either current group. Data were standardized for pancreatitis and postsphincterotomy bleeding. There were insufficient data to analyze perforation risk. A random-effects model was used. Bleeding, pancreatitis, and perforation. A total of 804 patients from 4 trials that compared pure current to mixed current were analyzed. The aggregated rate of pancreatitis was 3.8%, 95% confidence interval (CI) 1.0%-6.6%, for the pure-current group versus 7.9%, 95% CI 3.1%-12.7%, for the mixed-current group; the difference was not statistically significant. The rate of bleeding (all severity groups) for the pure-current group was 37.3% (95% CI 27.3%, 47.3%), which was significantly higher than that of the mixed-current group (12.2% [95% CI 4.1%, 20.3%]). Mild bleeding was significantly more frequent with pure current (28.9% [95% CI 16.3, 41.4]) compared with mixed current (9.4% [95% CI 2.1%, 16.8%]). Variables, including endoscopist skill and cannulation difficulty, were difficult to measure. The rate of pancreatitis in patients who underwent ES when using pure current was not significantly different from those when using mixed current. Pure current was associated with more episodes of bleeding, primarily mild bleeding. Data were insufficient to analyze the perforation risk.
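The random-effects pooling used in such meta-analyses is commonly the DerSimonian-Laird estimator. A minimal sketch of a pooled risk ratio with a 95% confidence interval (illustrative; it assumes no zero-event cells, which would require a continuity correction):

    import numpy as np

    def random_effects_rr(events_t, n_t, events_c, n_c):
        """DerSimonian-Laird random-effects pooled risk ratio across trials."""
        a, n1 = np.asarray(events_t, float), np.asarray(n_t, float)
        c, n2 = np.asarray(events_c, float), np.asarray(n_c, float)
        y = np.log((a / n1) / (c / n2))                  # per-trial log RR
        v = 1/a - 1/n1 + 1/c - 1/n2                      # variances of log RR
        w = 1 / v                                        # fixed-effect weights
        q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)   # heterogeneity Q
        tau2 = max(0.0, (q - (len(y) - 1)) /
                   (np.sum(w) - np.sum(w**2) / np.sum(w)))   # between-trial var
        w_star = 1 / (v + tau2)                          # random-effects weights
        mu = np.sum(w_star * y) / np.sum(w_star)
        se = 1 / np.sqrt(np.sum(w_star))
        return np.exp(mu), np.exp(mu - 1.96*se), np.exp(mu + 1.96*se)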
Fracture behaviors under pure shear loading in bulk metallic glasses
NASA Astrophysics Data System (ADS)
Chen, Cen; Gao, Meng; Wang, Chao; Wang, Wei-Hua; Wang, Tzu-Chiang
2016-12-01
Pure shear fracture tests, as a special mechanical means, have been carried out extensively to obtain critical information for traditional metallic crystalline materials and rocks, such as the intrinsic deformation behavior and fracture mechanism. However, for bulk metallic glasses (BMGs), pure shear fracture behaviors have not been investigated systematically, owing to the lack of a suitable test method. Here, we introduce a unique antisymmetrical four-point bend shear test method to realize a uniform pure shear stress field and study the pure shear fracture behaviors of two kinds of BMGs, Zr-based and La-based. All aspects of the fracture behavior, including the pure shear fracture strength, fracture angle and fracture surface morphology, are systematically analyzed and compared with those of conventional compressive and tensile fracture. Our results indicate that both the Zr-based and La-based BMGs follow the same fracture mechanism under pure shear loading, which differs significantly from some previous research results. Our results may offer new insight into the intrinsic deformation and fracture mechanisms of BMGs and other amorphous materials.
[Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].
Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin
2018-04-01
Due to global climate change and frequent human activities in recent years, the purely stochastic component of a hydrological sequence is mixed with one or several variation components, including jump, trend, periodicity and dependency. It is urgently needed to clarify which indices should be used to quantify the degree of their variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to different variants. When the hydrological sequence had jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had periodicity, only the Bartels statistic could detect the variation of the sequence. When the sequence had dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For all four variation types, both the Hurst variability and the Bartels variability increased as the variation range increased. Thus, they can be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. Results showed that the temperature at all stations exhibited an upward trend or jump, indicating that the entire basin has experienced warming in recent years, with the temperature variability in the upper and lower reaches much higher. This case study demonstrates the practicability of the proposed method.
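For readers wishing to reproduce the Hurst side of the analysis, the coefficient is classically estimated by rescaled-range (R/S) analysis, and the departure from the pure-noise value H = 0.5 then serves as the variability measure. A minimal sketch (the paper's exact estimator may differ):

    import numpy as np

    def hurst_rs(x, min_win=8):
        """Hurst coefficient via rescaled-range analysis: the slope of
        log(R/S) against log(window length). H ~ 0.5 for a purely random
        series; jumps, trends and persistence push H above 0.5."""
        x = np.asarray(x, float)
        sizes, rs = [], []
        n = min_win
        while n <= len(x) // 2:
            vals = []
            for i in range(0, len(x) - n + 1, n):        # non-overlapping windows
                c = x[i:i + n]
                z = np.cumsum(c - c.mean())              # cumulative deviations
                s = c.std(ddof=1)
                if s > 0:
                    vals.append((z.max() - z.min()) / s) # rescaled range
            sizes.append(n)
            rs.append(np.mean(vals))
            n *= 2
        return np.polyfit(np.log(sizes), np.log(rs), 1)[0]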
NASA Astrophysics Data System (ADS)
Bouhaj, M.; von Estorff, O.; Peiffer, A.
2017-09-01
In the application of Statistical Energy Analysis (SEA) to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the coupling loss factors (CLFs). Experimental SEA (ESEA) is used in practice by the automotive and aerospace industries to verify and update the model, or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures that allow an estimate to be made of the variance and confidence intervals of the statistical quantities when using the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of the measured power input, the vibration energies and the derived SEA parameters. Particular emphasis is placed on the identification of structural CLFs of complex built-up structures, comparing different methods. By adopting a stochastic energy model (SEM), the ensemble average in ESEA is also addressed. For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble-averaged CLFs. For ESEA tests conducted on an aircraft fuselage section, the SEM approach provides better estimates of the CLFs than classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria for the matrix inversion, allowing assessment of critical SEA subsystems, which might require a more refined statistical description of the excitation and response fields. Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.
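The classical matrix inversion against which the SEM approach is compared, together with a Monte Carlo perturbation of the energy matrix, can be sketched as follows (an editor's illustration: the paper derives specific perturbation expressions, the perturbation level below is assumed, and sign/normalization conventions for reading individual CLFs off the loss-factor matrix vary):

    import numpy as np

    rng = np.random.default_rng(1)

    def esea_clf_mc(P_in, E, omega, rel_std=0.1, n_mc=2000):
        """Invert the SEA power balance P = omega * L * E column by column
        (one excitation case per column of E) and propagate measurement
        scatter on E into the loss-factor matrix L by Monte Carlo.
        P_in: (n,) input powers; E[i, j]: energy of subsystem i when j is
        excited; rel_std: assumed relative standard deviation of E."""
        P = np.diag(np.asarray(P_in, float))
        samples = []
        for _ in range(n_mc):
            E_s = E * (1.0 + rel_std * rng.standard_normal(E.shape))
            samples.append((P @ np.linalg.inv(E_s)) / omega)
        samples = np.array(samples)
        return samples.mean(axis=0), samples.std(axis=0)  # mean and spread of L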
Kapoor, Vishal; Glover, Rebecca; Malviya, Manoj N
2015-12-02
The pure soybean oil based lipid emulsions (S-LE) conventionally used for parenteral nutrition (PN) in preterm infants have high polyunsaturated fatty acid (PUFA) content. The newer lipid emulsions (LE) from alternative lipid sources with reduced PUFA content may improve clinical outcomes in preterm infants. To determine the safety and efficacy of the newer alternative LE compared with the conventional S-LE for PN in preterm infants. We used the standard search strategy of the Cochrane Neonatal Review Group (CNRG) to search the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 7), MEDLINE (1946 to 31 July 2015), EMBASE (1947 to 31 July 2015), CINAHL (1982 to 31 July 2015), Web of Science (31 July 2015), conference proceedings, trial registries (clinicaltrials.gov, controlled-trials.com, WHO's ICTRP), and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials. Randomised or quasi-randomised controlled trials in preterm infants (< 37 weeks), comparing newer alternative LE with S-LE. Data collection and analysis conformed to the methods of the CNRG. We assessed the quality of evidence for important outcomes using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach, in addition to reporting the conventional statistical significance of results. Fifteen studies (N = 979 infants) are included in this review. Alternative LE including medium chain triglycerides/long chain triglycerides (MCT/LCT) LE (3 studies; n = 108), MCT-olive-fish-soy oil-LE (MOFS-LE; 7 studies; n = 469), MCT-fish-soy oil-LE (MFS-LE; 1 study; n = 60), olive-soy oil-LE (OS-LE; 7 studies; n = 406), and borage-soy oil-LE (BS-LE; 1 study; n = 34) were compared with S-LE. The different LE were also considered together to compare 'all fish oil containing-LE' versus S-LE (7 studies; n = 499) and 'all alternative LE' versus S-LE (15 studies; n = 979). Some studies had multiple intervention arms and were included in more than one comparison. No study compared pure fish oil-LE or structured-LE to S-LE. The GRADE quality of evidence (GRADE QoE) ranged from 'low' to 'very low.' Evidence came mostly from small single centre studies, many focusing on biochemical aspects as their primary outcomes, with optimal information size not achieved for the important clinical outcomes in any comparison. In the primary outcomes of the review there was a pooled effect towards decreased bronchopulmonary dysplasia (BPD) in OS-LE vs S-LE (4 studies, n = 261) not reaching statistical significance (typical risk ratio (RR) 0.69, 95% confidence interval (CI) 0.46 to 1.04, I² = 32%; typical risk difference (RD) -0.08, 95% CI -0.17 to 0.00, I² = 76%; GRADE QoE: 'very low'). No difference in BPD was observed in any other comparison. There were no statistically significant differences in the primary outcomes of death, growth rate (g/kg/day) or days to regain birth weight in any comparison. Retinopathy of prematurity (ROP) stage 1-2 was reported to be statistically significantly lower in one single centre study (n = 80) in the MOFS-LE group compared with the S-LE group (1/40 vs 12/40, respectively; RR 0.08, 95% CI 0.01 to 0.61; RD -0.27, 95% CI -0.43 to -0.12; number needed to benefit (NNTB) 4, 95% CI 2 to 8). However, there were no statistically significant differences in the secondary outcome of ROP ≥ stage 3 in any of the individual studies or in any comparison (GRADE QoE: 'low' to 'very low').
No other study reported on ROP stages 1 and 2 separately. There were no statistically significant differences in the secondary outcomes of sepsis, PN associated liver disease (PNALD)/cholestasis, ventilation duration, necrotising enterocolitis (NEC) ≥ stage 2, jaundice requiring treatment, intraventricular haemorrhage grade III-IV, periventricular leukomalacia (PVL), patent ductus arteriosus (PDA), hypertriglyceridaemia, and hyperglycaemia in any comparison. No study reported on neurodevelopmental outcomes or essential fatty acid deficiency. All lipid emulsions in this review appeared to be safe and were well tolerated in preterm infants. Compared with the pure soy oil based LE, use of MOFS-LE was associated with a decrease in the early stages (1-2) of ROP in one study. However, there were no statistically significant differences in clinically important outcomes including death, growth, BPD, sepsis, ROP ≥ stage 3, and PNALD with the use of newer alternative LE versus the conventional pure soy oil based LE (GRADE QoE ranged from 'low' to 'very low'). Currently there is insufficient evidence to recommend any alternative LE over S-LE or vice versa in preterm infants. Larger randomised studies focusing on important clinical outcomes, targeting specific 'at risk' population subgroups (e.g. extreme prematurity, long term PN, etc.), and exploring the effect of different proportions of lipid constituents are required to evaluate the effectiveness of newer lipid emulsions compared with the conventional pure soy based LE in preterm infants.
CMB EB and TB cross-spectrum estimation via pseudospectrum techniques
NASA Astrophysics Data System (ADS)
Grain, J.; Tristram, M.; Stompor, R.
2012-10-01
We discuss methods for estimating the EB and TB spectra of cosmic microwave background anisotropy maps covering a limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach, combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles, leads to the smallest error bars for the EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ~30%. Moreover, for the experimental configurations considered here, the statistical uncertainties (due to sampling variance and instrumental noise) of the pseudo-spectrum estimates are at most a factor of ~1.4 for the TT, EE, and TE spectra, and a factor of ~2 for the BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.
Understanding Uncertainties and Biases in Jet Quenching in High-Energy Nucleus-Nucleus Collisions
NASA Astrophysics Data System (ADS)
Heinz, Matthias
2017-09-01
Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freeze-out, providing a probe of the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library for simulating pure jet events, is used to simulate a model signature with one pure jet (a photon) and one quenched jet, where all quenched-particle momenta are reduced by the same fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial-state model of heavy-ion collisions, fed into a thermal model from which particle types are sampled and a 3-dimensional Boltzmann distribution from which particle momenta are sampled. Data from the simulated events are used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and later on full events including the background. This model will allow for a quantitative determination of the biases induced by various methods of background subtraction. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Diky, Vladimir; Chirico, Robert D; Muzny, Chris D; Kazakov, Andrei F; Kroenlein, Kenneth; Magee, Joseph W; Abdulagatov, Ilmutdin; Frenkel, Michael
2013-12-23
ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported in this journal. The present article describes the background and implementation for new additions in latest release of TDE. Advances are in the areas of program architecture and quality improvement for automatic property evaluations, particularly for pure compounds. It is shown that selection of appropriate program architecture supports improvement of the quality of the on-demand property evaluations through application of a readily extensible collection of constraints. The basis and implementation for other enhancements to TDE are described briefly. Other enhancements include the following: (1) implementation of model-validity enforcement for specific equations that can provide unphysical results if unconstrained, (2) newly refined group-contribution parameters for estimation of enthalpies of formation for pure compounds containing carbon, hydrogen, and oxygen, (3) implementation of an enhanced group-contribution method (NIST-Modified UNIFAC) in TDE for improved estimation of phase-equilibrium properties for binary mixtures, (4) tools for mutual validation of ideal-gas properties derived through statistical calculations and those derived independently through combination of experimental thermodynamic results, (5) improvements in program reliability and function that stem directly from the recent redesign of the TRC-SOURCE Data Archival System for experimental property values, and (6) implementation of the Peng-Robinson equation of state for binary mixtures, which allows for critical evaluation of mixtures involving supercritical components. Planned future developments are summarized.
Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund
2016-02-18
In Multivariate Statistical Process Control, when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by, or might be the cause of, the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions, where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to the previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
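One simple way to realize the proposed bootstrap CLs (a sketch of the idea, not the authors' exact resampling scheme) is to resample the Normal Operating Conditions training runs, refit the PCA model on each bootstrap sample, and take empirical quantiles of the per-variable residual contributions:

    import numpy as np

    def contribution_limits(X_train, n_comp=2, n_boot=1000, alpha=0.01, seed=0):
        """Bootstrap upper confidence limits for per-variable residual (Q)
        contributions of a PCA model fitted under NOC."""
        rng = np.random.default_rng(seed)
        mu = X_train.mean(axis=0)
        sd = X_train.std(axis=0, ddof=1)
        Xs = (X_train - mu) / sd                     # autoscaled training data
        boot = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(X_train), len(X_train))
            _, _, Vt = np.linalg.svd(Xs[idx], full_matrices=False)
            P = Vt[:n_comp].T                        # bootstrap PCA loadings
            resid = Xs - Xs @ P @ P.T                # residual part of each run
            boot.append((resid**2).mean(axis=0))     # mean Q contribution
        return np.quantile(np.array(boot), 1 - alpha, axis=0)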
Vibrational properties of TaW alloy using modified embedded atom method potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chand, Manesh, E-mail: maneshchand@gmail.com; Uniyal, Shweta; Joshi, Subodh
2016-05-06
Force constants up to second neighbours of the pure transition metal Ta and of the TaW alloy are determined using the modified embedded atom method (MEAM) potential. The obtained force constants are used to calculate the phonon dispersion of pure Ta and the TaW alloy. As a further application of the MEAM potential, the force constants are used to calculate the local vibrational density of states and the mean-square thermal displacements of pure Ta and W impurity atoms with the Green's function method. The calculated results are found to be in agreement with experimental measurements.
Mortality and employment after in-patient opiate detoxification.
Naderi-Heiden, A; Gleiss, A; Bäcker, C; Bieber, D; Nassan-Agha, H; Kasper, S; Frey, R
2012-05-01
We hypothesized that completed opiate detoxification would result in increased life expectancy and earning capacity as compared with non-completed detoxification. The cohort study sample included pure opioid or poly-substance addicts admitted for voluntary in-patient detoxification between 1997 and 2004. Of 404 patients, 58.7% completed the detoxification program and 41.3% did not. The Austrian Social Security Institution supplied data on survival and employment records for every single day in the individual observation period between discharge and December 2007. Statistical analyses included the calculation of standardized mortality ratios for the follow-up period of up to 11 years. The standardized mortality ratios (SMRs) were between 13.5 and 17.9 during the first five years after discharge; thereafter they fell clearly with time. Mortality did not differ statistically significantly between completers and non-completers. The median employment rate was higher in completers (12.0%) than in non-completers (5.5%), but not significantly so. The odds of being employed were higher in pure opioid addicts than in poly-substance addicts (p=0.003). The assumption that completers of detoxification treatment have a better outcome than non-completers was not confirmed. The decrease in mortality with time elapsed since detoxification is interesting. Pure opioid addicts had better employment prospects than poly-substance addicts. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Goyal, Sandeep K.; Singh, Rajeev; Ghosh, Sibasish
2016-01-01
Mixed states of a quantum system, represented by density operators, can be decomposed as a statistical mixture of pure states in a number of ways, where each decomposition can be viewed as a different preparation recipe. However, the fact that the density matrix contains full information about the ensemble makes it impossible to estimate the preparation basis for the quantum system. Here we present a measurement scheme to (seemingly) improve the performance of unsharp measurements. We argue that in some situations this scheme is capable of providing statistics from a single copy of the quantum system, thus making it possible to perform state tomography from a single copy. One of the by-products of the scheme is a way to distinguish between different preparation methods used to prepare the state of the quantum system. However, our numerical simulations disagree with our intuitive predictions. We show that a counterintuitive property of a biased classical random walk is responsible for the proposed mechanism not working.
Willems, Sander; Fraiture, Marie-Alice; Deforce, Dieter; De Keersmaecker, Sigrid C J; De Loose, Marc; Ruttink, Tom; Herman, Philippe; Van Nieuwerburgh, Filip; Roosens, Nancy
2016-02-01
Because the number and diversity of genetically modified (GM) crops have significantly increased, their analysis based on real-time PCR (qPCR) methods is becoming increasingly complex and laborious. While several pioneers have already investigated Next Generation Sequencing (NGS) as an alternative to qPCR, its practical use has not been assessed for routine analysis. In this study, a statistical framework was developed to predict the number of NGS reads needed to detect transgene sequences, to prove their integration into the host genome and to identify the specific transgene event in a sample of known composition. This framework was validated by applying it to experimental data from food matrices composed of pure GM rice, processed GM rice (noodles) or a 10% GM/non-GM rice mixture, revealing some influential factors. Finally, the feasibility of NGS for routine analysis of GM crops was investigated by applying the framework to samples commonly encountered in routine analysis of GM crops. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
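The core of such a framework can be conveyed by a toy calculation: if each read independently covers the transgene with probability p, the read count needed for detection at a given confidence follows from a simple binomial argument. A sketch with assumed example numbers (not the paper's values or its full model):

    import math

    def reads_for_detection(p_hit, confidence=0.95):
        """Smallest read count N with P(at least one transgene read) >=
        confidence, i.e. 1 - (1 - p_hit)**N >= confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_hit))

    # Toy numbers (assumed): 1 kb construct, 400 Mb rice genome, 10% GM fraction
    p = 0.10 * (1_000 / 400_000_000)
    print(reads_for_detection(p))          # about 1.2e7 reads at 95% confidence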
Dynamics of statistical distance: Quantum limits for two-level clocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braunstein, S.L.; Milburn, G.J.
1995-03-01
We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.
NASA Astrophysics Data System (ADS)
Bouteiller, Paul; Terrier, Marie-France; Tobaly, Pascal
2017-02-01
The aim of this work is to study heat pump cycles, using CO2-based mixtures as working fluids. Since adding other chemicals to CO2 moves the critical point and, generally, the equilibrium lines, it is expected that lower operating pressures as well as higher global efficiencies may be reached. A single-stage pure CO2 cycle is used as reference, with fixed external conditions. Two scenarios are considered: water is heated from 10 °C to 65 °C for the Domestic Hot Water scenario and from 30 °C to 35 °C for the Central Heating scenario. In both cases, water at the evaporator inlet is set at 7 °C to account for such outdoor temperature conditions. In order to understand the dynamic behaviour of thermodynamic cycles with mixtures, it is essential to measure the circulating fluid composition. To this end, we have developed a non-intrusive method. Online optical flow cells allow the recording of infrared spectra by means of a Fourier transform infrared (FTIR) spectrometer. A careful calibration is performed by measuring a statistically significant number of spectra for samples of known composition. Then, a statistical model is constructed to relate spectra to compositions. After calibration, compositions are obtained by recording a spectrum in a few seconds, thus allowing for a dynamic analysis. This article will describe the experimental setup and the composition measurement techniques. Then a first account of results with pure CO2, and with the addition of propane or R-1234yf, will be given.
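The abstract leaves the calibration model unspecified; a minimal sketch of such a spectra-to-composition regression, assuming a partial least squares model and using synthetic stand-in data, could look like this:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Fit a calibration model on spectra of known composition, then estimate
# composition from a single new spectrum. PLS is a common choice for FTIR
# calibration; the study's exact statistical model is not stated.
rng = np.random.default_rng(1)
loadings = rng.normal(size=(3, 400))              # 3 species x 400 wavenumbers
fractions = rng.dirichlet(np.ones(3), size=60)    # known CO2/propane/R-1234yf mixes
spectra = fractions @ loadings + rng.normal(0, 0.01, size=(60, 400))

model = PLSRegression(n_components=3).fit(spectra, fractions)
estimate = model.predict(spectra[:1])             # composition from one spectrum
```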
Fordyce, James A
2010-07-23
Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification-rate variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate, is the gamma statistic. Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using an exploratory data analysis tool for lineages-through-time plots, tree deviation, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decreases at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.
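For reference, the gamma statistic discussed here (Pybus and Harvey, 2000) can be computed directly from the internode intervals of an ultrametric tree; a short sketch:

```python
import math

def gamma_statistic(g):
    """Pybus & Harvey (2000) gamma from internode intervals g[0..n-2],
    where g[k-2] is the time during which k lineages existed (k = 2..n).
    Under the pure-birth null, gamma is approximately standard normal;
    strongly negative values give the classic 'early burst' signature."""
    n = len(g) + 1                                   # number of tips (n >= 3)
    weighted = [k * gk for k, gk in enumerate(g, start=2)]
    T = sum(weighted)                                # total weighted tree length
    cum, partial_sum = 0.0, 0.0
    for w in weighted[:-1]:                          # T_i for i = 2..n-1
        cum += w
        partial_sum += cum
    return (partial_sum / (n - 2) - T / 2.0) / (T * math.sqrt(1.0 / (12 * (n - 2))))
```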
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2007-12-01
We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those models shows that the method is very efficient at discriminating models, and very sensitive, provided data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable polarity data set extracted from the Quidelleur et al. (1994) database. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, and with the databases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we will discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort, see e.g. Johnson et al. abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).
McDermott, Jason E.; Wang, Jing; Mitchell, Hugh; Webb-Robertson, Bobbie-Jo; Hafen, Ryan; Ramey, John; Rodland, Karin D.
2012-01-01
Introduction: The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful molecular signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities for more sophisticated approaches to integrating purely statistical and expert knowledge-based approaches. Areas covered: In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered in deriving valid and useful signatures of disease. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion: Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to identify predictive signatures of disease are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem including metrics for the evaluation of biomarkers. PMID:23335946
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Wang, Jing; Mitchell, Hugh D.
2013-01-01
The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities both for purely statistical and expert knowledge-based approaches and would benefit from improved integration of the two. Areas covered: In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion: Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to biomarker discovery and characterization are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem including metrics for the evaluation of biomarkers.
Roberts, Kurt E; Solomon, Daniel; Mirensky, Tamar; Silasi, Dan-Arin; Duffy, Andrew J; Rutherford, Tom; Longo, Walter E; Bell, Robert L
2012-02-01
This report describes the first cohort study comparing pure transvaginal appendectomies (TVAs) to traditional 3-port laparoscopic appendectomies (LAs). Between August 2008 and August 2010, 42 patients were offered a pure TVA. Patients who did not wish to undergo a TVA underwent a LA and served as the control group. Demographic data, operative time, length of stay, patient-controlled analgesia (PCA) 12-hour morphine utilization, complications, return to normal activity, and return to work were recorded. Eighteen of 40 enrolled patients underwent a pure TVA. Two patients refused to participate in this study. Differences in mean age (TVA: 31.3 ± 2.5 years vs. LA: 28.2 ± 2.3 years, P = 0.36), mean body mass index (TVA: 23.7 ± 1.2 kg/m2 vs. LA: 23.6 ± 0.7 kg/m2, P = 0.96), mean operative time (TVA: 44.4 ± 4.5 minutes vs. LA: 39.8 ± 2.6 minutes, P = 0.38), and mean length of hospital stay (TVA: 1.1 ± 0.1 days vs. LA: 1.2 ± 0.1 days, P = 0.53) were not statistically significant. However, differences in mean postoperative morphine use (TVA: 8.7 ± 2.0 mg vs. LA: 23.0 ± 3.4 mg, P < 0.01), return to normal activity (TVA: 3.3 ± 0.4 days vs. LA: 9.7 ± 1.6 days, P < 0.01), and return to work (TVA: 5.4 ± 1.1 days vs. LA: 10.7 ± 1.5 days, P = 0.01) were statistically significant. One conversion in the TVA group to a LA was necessary because of inability to maintain adequate pneumoperitoneum. Four complications were observed: 1 intraabdominal abscess and 1 case of urinary retention in the TVA group; 1 early postoperative bowel obstruction and 1 case of urinary retention in the LA group. Pure TVA is a safe and well-tolerated procedure with significantly less pain and faster recovery compared to traditional LA.
Jointly learning word embeddings using a corpus and a knowledge base
Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-ichi
2018-01-01
Methods for representing the meaning of words in vector spaces purely using the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structure between words in co-occurring contexts. These beneficial semantic relational structures are contained in manually-created knowledge bases (KBs) such as ontologies and semantic lexicons, where the meanings of words are represented by defining the various relationships that exist among those words. We combine the knowledge in both a corpus and a KB to learn better word embeddings. Specifically, we propose a joint word representation learning method that uses the knowledge in the KBs, and simultaneously predicts the co-occurrences of two words in a corpus context. In particular, we use the corpus to define our objective function subject to the relational constraints derived from the KB. We further utilise the corpus co-occurrence statistics to propose two novel approaches, Nearest Neighbour Expansion (NNE) and Hedged Nearest Neighbour Expansion (HNE), that dynamically expand the KB and therefore derive more constraints that guide the optimisation process. Our experimental results over a wide range of benchmark tasks demonstrate that the proposed method statistically significantly improves the accuracy of the word embeddings learnt. It outperforms a corpus-only baseline and improves upon a number of previously proposed methods that incorporate corpora and KBs in both semantic similarity prediction and word analogy detection tasks. PMID:29529052
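As a rough illustration of the kind of joint objective described, a corpus co-occurrence term plus KB-derived constraints, here is a deliberately simplified sketch (no bias terms or co-occurrence weighting, and not the authors' exact objective):

```python
import numpy as np

def joint_embedding_sweep(W, cooc, kb_pairs, lr=0.05, lam=0.1):
    """One SGD sweep over a GloVe-style corpus term
    (w_i . w_j - log X_ij)^2 plus a KB penalty lam * ||w_i - w_j||^2
    pulling related words together. W: (vocab, dim) embedding matrix;
    cooc: {(i, j): count}; kb_pairs: iterable of related (i, j) pairs."""
    for (i, j), x in cooc.items():
        err = W[i] @ W[j] - np.log(x)
        gi, gj = err * W[j], err * W[i]   # gradients from current values
        W[i] -= lr * gi
        W[j] -= lr * gj
    for i, j in kb_pairs:
        diff = W[i] - W[j]
        W[i] -= lr * lam * diff
        W[j] += lr * lam * diff
    return W
```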
NASA Astrophysics Data System (ADS)
Kowalczyk, Donna Lee
The purpose of this study was to examine K-5 elementary teachers' reported beliefs about the use, function, and importance of Direct Instruction, the Discovery Method, and the Inquiry Method in the instruction of science in their classrooms. Eighty-two teachers completed questionnaires about their beliefs, opinions, uses, and ideas about each of the three instructional methods. Data were collected and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and Chi-Square analyses indicated that the majority of teachers reported using all three methods to varying degrees in their classrooms. Guided Discovery was reported by the teachers as being the most frequently used method to teach science, while Pure Discovery was reportedly used the least frequently. The majority of teachers expressed the belief that a blend of all three instructional methods is the most effective strategy for teaching science at the elementary level. The teachers also reported a moderate level of confidence in teaching science. Students' ability levels, learning styles, and time/class schedule were identified as factors that most influence teachers' instructional choice. Student participation in hands-on activities, creative thinking ability, and developing an understanding of scientific concepts were reported as the learning behaviors most associated with student success in science. Data obtained from this study provide information about the nature and uses of Direct Instruction, the Discovery Method, and the Inquiry Method and teachers' perceptions and beliefs about each method's use in science education. Learning more about the science teaching and learning environment may help teachers, administrators, curriculum developers, and researchers gain greater insights about student learning, instructional effectiveness, and science curriculum development at the elementary level.
Vortex methods and vortex statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chorin, A.J.
Vortex methods originated from the observation that in incompressible, inviscid, isentropic flow vorticity (or, more accurately, circulation) is a conserved quantity, as can be readily deduced from the absence of tangential stresses. Thus if the vorticity is known at time t = 0, one can deduce the flow at a later time by simply following it around. In this narrow context, a vortex method is a numerical method that makes use of this observation. Even more generally, the analysis of vortex methods leads to problems that are closely related to problems in quantum physics and field theory, as well as in harmonic analysis. A broad enough definition of vortex methods ends up encompassing much of science. Even the purely computational aspects of vortex methods encompass a range of ideas for which vorticity may not be the best unifying theme. The author restricts himself in these lectures to a special class of numerical vortex methods, those that are based on a Lagrangian transport of vorticity in hydrodynamics by smoothed particles ("blobs") and those whose understanding contributes to the understanding of blob methods. Vortex methods for inviscid flow lead to systems of ordinary differential equations that can be readily clothed in Hamiltonian form, both in three and two space dimensions, and they can preserve exactly a number of invariants of the Euler equations, including topological invariants. Their viscous versions resemble Langevin equations. As a result, they provide a very useful cartoon of statistical hydrodynamics, i.e., of turbulence, one that can to some extent be analyzed analytically and, more importantly, explored numerically, with important implications also for superfluids, superconductors, and even polymers. In the author's view, vortex "blob" methods provide the most promising path to the understanding of these phenomena.
Siqueira, J F; Magalhães, F A; Lima, K C; de Uzeda, M
1998-12-01
The pathogenicity of obligate and facultative anaerobic bacteria commonly found in endodontic infections was tested using a mouse model. The capacity to induce abscesses was evaluated seven days after subcutaneous injection of the bacteria in pure culture and in combinations with either Prevotella intermedia or Prevotella nigrescens. Nine of the fifteen bacterial strains tested were pathogenic in pure culture. No statistically significant differences were detected between these strains in pure culture and in mixtures with either P. intermedia or P. nigrescens. Synergism between the bacterial strains was apparent only when associating Porphyromonas endodontalis with P. intermedia or P. nigrescens. Histopathological examination of tissue sections from induced abscesses revealed an acute inflammatory reaction dominated by polymorphonuclear leukocytes. Sections from the control group using sterile medium showed no evidence of inflammatory reaction.
Trisi, Paolo; Rao, Walter; Rebaudi, Alberto; Fiore, Peter
2003-02-01
The effect of the pure-phase beta-tricalcium phosphate (beta-TCP) Cerasorb on bone regeneration was evaluated in hollow titanium cylinders implanted in the posterior jaws of five volunteers. Beta-TCP particles were inserted inside the cylinders and harvested 6 months after placement. The density of the newly formed bone inside the bone-growing chambers measured 27.84% +/- 24.67% in test and 17.90% +/- 4.28% in control subjects, without a statistically significant difference. Analysis of the histologic specimens revealed that the density of the regenerated bone was related to the density of the surrounding bone. The present study demonstrates the spontaneous healing of infrabony artificial defects, 2.5 mm diameter, in the jaw. The pure beta-TCP was resorbed simultaneously with new bone formation, without interference with the bone matrix formation.
Comparative study of two commercially pure titanium casting methods
RODRIGUES, Renata Cristina Silveira; FARIA, Adriana Claudia Lapria; ORSI, Iara Augusta; de MATTOS, Maria da Gloria Chiarello; MACEDO, Ana Paula; RIBEIRO, Ricardo Faria
2010-01-01
The interest in using titanium to fabricate removable partial denture (RPD) frameworks has increased, but there are few studies evaluating the effects of casting methods on clasp behavior. Objective: This study compared the occurrence of porosities and the retentive force of commercially pure titanium (CP Ti) and cobalt-chromium (Co-Cr) removable partial denture circumferential clasps cast by induction/centrifugation and plasma/vacuum-pressure. Material and Methods: 72 frameworks were cast from CP Ti (n=36) and Co-Cr alloy (n=36; control group). For each material, 18 frameworks were cast by electromagnetic induction and injected by centrifugation, whereas the other 18 were cast by plasma and injected by vacuum-pressure. For each casting method, three subgroups (n=6) were formed: 0.25 mm, 0.50 mm, and 0.75 mm undercuts. The specimens were radiographed and subjected to an insertion/removal test simulating 5 years of framework use. Data were analyzed by ANOVA and Tukey's test to compare materials and casting methods (α=0.05). Results: Three of 18 specimens of the induction/centrifugation group and 9 of 18 specimens of the plasma/vacuum-pressure group presented porosities, but only 1 and 7 specimens, respectively, were rejected for the simulation test. For the Co-Cr alloy, no defects were found. Comparing the casting methods, statistically significant differences (p<0.05) were observed only for the Co-Cr alloy with 0.25 mm and 0.50 mm undercuts. Significant differences were found for the 0.25 mm and 0.75 mm undercuts depending on the material used. For the 0.50 mm undercut, significant differences were found when the materials were induction cast. Conclusion: Although both casting methods produced satisfactory CP Ti RPD frameworks, the occurrence of porosities was greater in the plasma/vacuum-pressure than in the induction/centrifugation method, the latter resulting in higher clasp rigidity and generating higher retention force values. PMID:21085805
Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective: The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design: Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultra pure NaCl and non-fluoridated commercial salt samples and Ca and Mg were added to fluoride samples in order to assess fluoride recoveries using modifications to the methods. Results: Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and other ions in the salt. Conclusion: Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217
Complexity quantification of dense array EEG using sample entropy analysis.
Ramanand, Pravitha; Nampoori, V P N; Sreenivasan, R
2004-09-01
In this paper, a time series complexity analysis of dense array electroencephalogram signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics for EEG recorded in three conditions: a passive, eyes-closed condition; a mental arithmetic task; and the same mental task carried out after a physical exertion task. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG, and it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in the brain state that can find wider scope for applications in EEG-based brain studies.
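SampEn itself is compact enough to state in code; a minimal sketch following Richman and Moorman's definition, with the common tolerance choice r = 0.2 times the signal's standard deviation:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates with
    Chebyshev distance <= r (self-matches excluded), A does the same for
    length m+1. Both use the same N-m starting points, as in the original
    definition (Richman & Moorman, 2000)."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    N = len(x)

    def matches(length):
        tpl = np.array([x[i:i + length] for i in range(N - m)])
        count = 0
        for i in range(len(tpl) - 1):
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A and B else float("inf")
```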
Flexural strength of pure Ti, Ni-Cr and Co-Cr alloys submitted to Nd:YAG laser or TIG welding.
Rocha, Rick; Pinheiro, Antônio Luiz Barbosa; Villaverde, Antonio Balbin
2006-01-01
Welding of metals and alloys is important in dentistry for the fabrication of dental prostheses. Several methods of soldering metals and alloys are currently used. The purpose of this study was to assess, using flexural strength testing, the efficacy of two processes, Nd:YAG laser and TIG (tungsten inert gas) welding, for joining pure Ti, Co-Cr and Ni-Cr alloys. Sixty cylindrical specimens were prepared (20 of each material), bisected and welded using different techniques. Four groups were formed (n=15). I: Nd:YAG laser welding; II: Nd:YAG laser welding using a filling material; III: TIG welding; and IV (control): no welding (intact specimens). The specimens were tested in flexural strength and the results were analyzed statistically by one-way ANOVA. There were significant differences (p<0.001) among the non-welded materials, the Co-Cr alloy being the most resistant to deflection. Comparing the welding processes, significant differences (p<0.001) were found between TIG and laser welding and also between laser alone and laser plus filling material. In conclusion, TIG welding yielded higher flexural strength means than Nd:YAG laser welding for the tested Ti, Co-Cr and Ni-Cr alloys.
NASA Astrophysics Data System (ADS)
Herlach, Dieter M.; Kobold, Raphael; Klein, Stefan
2018-03-01
Glass formation of a liquid undercooled below its melting temperature requires the complete avoidance of crystal nucleation and subsequent crystal growth. Even though they are not part of the glass formation process, a detailed knowledge of both processes involved in crystallization is mandatory to determine the glass-forming ability of metals and metallic alloys. In the present work, methods of containerless processing of drops by electrostatic and electromagnetic levitation are applied to undercool metallic melts prior to solidification. Heterogeneous nucleation on crucible walls is completely avoided giving access to large undercoolings. A freely suspended drop offers the additional benefit of showing the rapid crystallization process of an undercooled melt in situ by proper diagnostic means. As a reference, crystal nucleation and dendrite growth in the undercooled melt of pure Zr are experimentally investigated. Equivalently, binary Zr-Cu, Zr-Ni and Zr-Pd and ternary Zr-Ni-Cu alloys are studied, whose glass-forming abilities differ. The experimental results are analyzed within classical nucleation theory and models of dendrite growth. The findings give detailed knowledge about the nucleation-undercooling statistics and the growth kinetics over a large range of undercooling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, J.M.; Callahan, C.A.; Cline, J.F.
Bioassays were used in a three-phase research project to assess the comparative sensitivity of test organisms to known chemicals, determine if the chemical components in field soil and water samples containing unknown contaminants could be inferred from our laboratory studies using known chemicals, and to investigate kriging (a relatively new statistical mapping technique) and bioassays as methods to define the areal extent of chemical contamination. The algal assay generally was most sensitive to samples of pure chemicals, soil elutriates and water from eight sites with known chemical contamination. Bioassays of nine samples of unknown chemical composition from the Rocky Mountain Arsenal (RMA) site showed that a lettuce seed soil contact phytoassay was most sensitive. In general, our bioassays can be used to broadly identify toxic components of contaminated soil. Nearly pure compounds of insecticides and herbicides were less toxic in the sensitive bioassays than were the counterpart commercial formulations. This finding indicates that chemical analysis alone may fail to correctly rate the severity of environmental toxicity. Finally, we used the lettuce seed phytoassay and kriging techniques in a field study at RMA to demonstrate the feasibility of mapping contamination to aid in cleanup decisions. 25 references, 9 figures, 9 tables.
Xu, Ronghua; Wong, Wing-Keung; Chen, Guanrong; Huang, Shuo
2017-01-01
In this paper, we analyze the relationship among stock networks by focusing on the statistically reliable connectivity between financial time series, which accurately reflects the underlying pure stock structure. To do so, we first filter out the effect of the market index on the correlations between paired stocks, and then take a t-test based P-threshold approach to lessening the complexity of the stock network based on the P values. We demonstrate the superiority of its performance in understanding network complexity by examining the Hong Kong stock market. By comparing with other filtering methods, we find that the P-threshold approach extracts purely and significantly correlated stock pairs, which reflect the well-defined hierarchical structure of the market. In analyzing the dynamic stock networks with fixed-size moving windows, our results show that three global financial crises, covered by the long-range time series, can be distinctly indicated from the network topological and evolutionary perspectives. In addition, we find that the assortativity coefficient can manifest the financial crises and can therefore serve as a good indicator of financial market development. PMID:28145494
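A rough sketch of the filtering step as described, regressing out the market index and then keeping only pairs whose residual correlation passes a t-test; the one-factor regression and the function name here are illustrative choices, not the authors' code:

```python
import numpy as np
from scipy import stats

def p_threshold_network(returns, market, alpha=0.01):
    """returns: (n_obs, n_stocks) array; market: (n_obs,) index returns.
    Removes the market factor from each stock, then keeps an edge (i, j)
    only when the residual correlation is significant at level alpha."""
    n_obs, n_stocks = returns.shape
    betas = np.array([np.polyfit(market, returns[:, j], 1)[0]
                      for j in range(n_stocks)])
    resid = returns - np.outer(market, betas)        # market effect removed
    corr = np.corrcoef(resid, rowvar=False)
    adj = np.zeros((n_stocks, n_stocks), dtype=bool)
    for i in range(n_stocks):
        for j in range(i + 1, n_stocks):
            r = corr[i, j]
            t = r * np.sqrt((n_obs - 2) / (1.0 - r ** 2))
            p = 2.0 * stats.t.sf(abs(t), df=n_obs - 2)
            adj[i, j] = adj[j, i] = p < alpha
    return adj
```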
Validation of a pulsed electric field process to pasteurize strawberry puree
USDA-ARS?s Scientific Manuscript database
An inexpensive data acquisition method was developed to validate the exact number and shape of the pulses applied during pulsed electric fields (PEF) processing. The novel validation method was evaluated in conjunction with developing a pasteurization PEF process for strawberry puree. Both buffered...
Minocycline encapsulated chitosan nanoparticles for central antinociceptive activity.
Nagpal, Kalpana; Singh, S K; Mishra, D N
2015-01-01
The purpose of the study is to explore the central anti-nociceptive activity of brain-targeted nanoparticles (NP) of minocycline hydrochloride (MH). The NP were formulated using a modified ionotropic gelation method (MHNP) and were coated with Tween 80 (T80) to target them to the brain (cMHNP). The formulated nanoparticles had already been characterized for particle size, zeta potential, drug entrapment efficiency and in vitro drug release. The nanoparticles were then evaluated for pharmacodynamic activity using thermal methods. The pure drug and the uncoated formulation (MHNP) did not show a statistically significant central analgesic activity. cMHNP, on the other hand, showed a significant central analgesic activity. The animal models indicate that brain-targeted nanoparticles may be utilized for effective delivery of the central anti-nociceptive effect of MH. Further clinical studies are required to explore this activity for mankind. Copyright © 2014 Elsevier B.V. All rights reserved.
Objective determination of image end-members in spectral mixture analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.
1993-01-01
Spectral mixture analysis has been shown to be a powerful, multifaceted tool for the analysis of multi- and hyper-spectral data. Applications to AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members is selected from an image cube (image end-members) that best accounts for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial-and-error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
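The constrained linear mixing model at the core of this approach is easy to sketch; assuming non-negative fractions that sum to one (the usual constraints, enforced softly below via an appended, heavily weighted row):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(endmembers, spectrum, sum_weight=1e3):
    """Solve spectrum ~ endmembers @ fractions with fractions >= 0 and
    sum(fractions) = 1. endmembers: (bands, k); spectrum: (bands,).
    The sum-to-one constraint is imposed by an extra weighted row."""
    E = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    s = np.append(spectrum, sum_weight)
    fractions, _ = nnls(E, s)   # non-negative least squares
    return fractions
```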
The natural history of Halley's comet
NASA Astrophysics Data System (ADS)
McLaughlin, W. I.
1981-07-01
The 1986 apparition of Halley's comet will be the subject of numerous space probes, planned to determine the chemical nature and physical structure of comet nuclei, atmospheres, and ionospheres, as well as comet tails. The question of cometary origin remains unresolved, with theories ranging from a purely interstellar origin to their being ejecta from the Galilean satellites of Jupiter. Comets can be grouped into one of two classes, depending on their periodicity, and the statistical mechanics of the entire Jovian family of comets can be examined under the equilibrium hypothesis. Estimates of comet anatomy have been made, and there is speculation that comet chemistry may have been a factor in the origin of life on earth. The periodicity of Halley's comet was first recognized by means of Newton's dynamical methods, and Brady (1972) attempted to use the comet as a gravitational probe in search of a trans-Plutonian planet. Halley's orbit is calculated by combining ancient observations and modern scientific methods.
Modelling spruce bark beetle infestation probability
Paulius Zolubas; Jose Negron; A. Steven Munson
2009-01-01
A spruce bark beetle (Ips typographus L.) risk model, based on pure Norway spruce (Picea abies Karst.) stand characteristics in experimental and control plots, was developed using the classification and regression tree statistical technique under endemic pest population density. The most significant variable in spruce bark beetle...
Kim, W; Kim, H; Citrome, L; Akiskal, H S; Goffin, K C; Miller, S; Holtzman, J N; Hooshmand, F; Wang, P W; Hill, S J; Ketter, T A
2016-09-01
Assess strengths and limitations of mixed bipolar depression definitions made more inclusive than that of the Diagnostic and Statistical Manual of Mental Disorders Fifth Edition (DSM-5) by requiring fewer than three 'non-overlapping' mood elevation symptoms (NOMES). Among bipolar disorder (BD) out-patients assessed with the Systematic Treatment Enhancement Program for BD (STEP-BD) Affective Disorders Evaluation, we assessed prevalence, demographics, and clinical correlates of mixed vs. pure depression, using less inclusive (≥3 NOMES, DSM-5), more inclusive (≥2 NOMES), and most inclusive (≥1 NOMES) definitions. Among 153 depressed BD patients, compared to the less inclusive DSM-5 threshold, our more and most inclusive thresholds yielded approximately two- and five-fold higher mixed depression rates (7.2%, 15.0%, and 34.6%, respectively), and important statistically significant clinical correlates for mixed compared to pure depression (e.g. more lifetime anxiety disorder comorbidity, more current irritability), which were not significant using the DSM-5 threshold. Further studies assessing strengths and limitations of more inclusive mixed depression definitions are warranted, including assessing the extent to which enhanced statistical power vs. other factors contributes to more vs. less inclusive mixed bipolar depression thresholds having more statistically significant clinical correlates, and whether 'overlapping' mood elevation symptoms should be counted. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sakamoto, Torao; Horiuchi, Akira; Nakayama, Yoshiko
2013-01-01
BACKGROUND: Endoscopic evaluation of swallowing (EES) is not commonly used by gastroenterologists to evaluate swallowing in patients with dysphagia. OBJECTIVE: To use transnasal endoscopy to identify factors predicting successful or failed swallowing of pureed foods in elderly patients with dysphagia. METHODS: EES of pureed foods was performed by a gastroenterologist using a small-calibre transnasal endoscope. Factors related to successful versus unsuccessful swallowing of pureed foods were analyzed with regard to age, comorbid diseases, swallowing activity, saliva pooling, vallecular residues, pharyngeal residues and airway penetration/aspiration. Unsuccessful swallowing was defined in patients who could not eat pureed foods at bedside during hospitalization. Logistic regression analysis was used to identify independent predictors of swallowing of pureed foods. RESULTS: During a six-year period, 458 consecutive patients (mean age 80 years [range 39 to 97 years]) were considered for the study, including 285 (62%) men. Saliva pooling, vallecular residues, pharyngeal residues and penetration/aspiration were found in 240 (52%), 73 (16%), 226 (49%) and 232 patients (51%), respectively. Overall, 247 patients (54%) failed to swallow pureed foods. Multivariate logistic regression analysis demonstrated that the presence of pharyngeal residues (OR 6.0) and saliva pooling (OR 4.6) occurred significantly more frequently in patients who failed to swallow pureed foods. CONCLUSIONS: Pharyngeal residues and saliva pooling predicted impaired swallowing of pureed foods. Transnasal EES performed by a gastroenterologist provided a unique bedside method of assessing the ability to swallow pureed foods in elderly patients with dysphagia. PMID:23936875
Ruzauskas, Modestas; Siugzdiniene, Rita; Klimiene, Irena; Virgailis, Marius; Mockeliunas, Raimundas; Vaskeviciute, Lina; Zienius, Dainius
2014-11-28
Among coagulase-negative staphylococci, Staphylococcus haemolyticus is the second most frequently isolated species from human blood cultures and has the highest level of antimicrobial resistance. This species has a zoonotic character and is prevalent both in humans and animals. Recent studies have indicated that methicillin-resistant S. haemolyticus (MRSH) is one of the most frequently isolated Staphylococcus species among neonates in intensive care units. The aim of this study was to determine the presence of MRSH in different groups of companion animals and to characterize isolates according to their antimicrobial resistance. Samples (n = 754) were collected from healthy and diseased dogs and cats, female dogs in pure-breed kennels, healthy horses, and kennel owners. Classical microbiological tests along with molecular testing, including PCR and 16S rRNA sequencing, were performed to identify MRSH. Clonality of the isolates was assessed by pulsed-field gel electrophoresis using the SmaI restriction enzyme. Antimicrobial susceptibility testing was performed using the broth micro-dilution method. Detection of genes encoding antimicrobial resistance was performed by PCR. Statistical analysis was performed using R (version 1.8.1) from the R Project for Statistical Computing. From a total of 754 samples tested, 12 MRSH isolates were obtained. No MRSH were found in horses and cats. Eleven isolates were obtained from dogs and one from a kennel owner. Ten of the dog isolates were detected in pure-breed kennels. The isolates demonstrated the same clonality only within separate kennels. Resistance of the MRSH isolates was most frequent to benzylpenicillin (91.7%), erythromycin (91.7%), gentamicin (75.0%), tetracycline (66.7%), fluoroquinolones (41.7%) and co-trimoxazole (41.7%). One isolate was resistant to streptogramins. All isolates were susceptible to daptomycin, rifampin, linezolid and vancomycin. The clone isolated from the kennel owner and one of the dogs was resistant to beta-lactams, macrolides, gentamicin and tetracycline. Pure-breed kennels keeping 6 or more females were determined to be a risk factor for the presence of MRSH strains. MRSH isolated from companion animals were frequently resistant to some classes of critically important antimicrobials, although they remain susceptible to antibiotics used exclusively in human medicine.
The relative efficiency of Iranian's rural traffic police: a three-stage DEA model.
Rahimi, Habibollah; Soori, Hamid; Nazari, Seyed Saeed Hashemi; Motevalian, Seyed Abbas; Azar, Adel; Momeni, Eskandar; Javartani, Mehdi
2017-10-13
Road traffic injuries (RTIs) are a health problem that compels governments to implement various interventions. Achieving targets in this area requires effective and efficient measures. Efficiency evaluation of the traffic police, as one of the responsible administrations, is necessary for resource management. Therefore, this study was conducted to measure the efficiency of Iran's rural traffic police. This was an ecological study. To obtain pure efficiency scores, a three-stage DEA model was applied with seven input and three output variables. At the first stage, crude efficiency scores were measured with the BCC-O model. Next, to extract the effects of socioeconomic, demographic, traffic count and road infrastructure factors as environmental variables, as well as statistical noise, a Stochastic Frontier Analysis (SFA) model was applied and the output values were modified to reflect similar environmental and statistical noise conditions. Then, the pure efficiency scores were measured using the modified outputs and the BCC-O model. In total, the efficiency scores of 198 police stations from 24 of the 31 provinces were measured. The annual means (standard deviations) of damage, injury and fatal accidents were 247.7 (258.4), 184.9 (176.9), and 28.7 (19.5), respectively. Input averages were 5.9 (3.0) patrol teams, 0.5% (0.2) manpower proportion, 7.5 (2.9) patrol cars, 0.5 (1.3) motorcycles, 77,279.1 (46,794.7) penalties, 90.9 (2.8) cultural and educational activity score, and 0.7 (2.4) speed cameras. The SFA model showed non-significant differences between police station performances; most of the differences were attributable to the environment and random error. One-way main roads, byroads, traffic count and the number of households owning motorcycles had significant positive relations with the inefficiency score. The length of freeway/highway and the literacy rate had significant negative relations. The pure efficiency score had a mean of 0.95 and SD of 0.09. Iran's traffic police has a potential opportunity to reduce RTIs. Adjusting police performance for environmental conditions is necessary. The capability of the DEA method to set quantitative targets for every station creates motivation for managers to reduce RTIs. Annual repetition of this study is recommended.
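For concreteness, the envelopment form of the output-oriented BCC model used at stages one and three can be posed as a small linear program; a sketch (illustrative, not the authors' code) using scipy:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, j0):
    """Output-oriented BCC (variable returns to scale) score for unit j0.
    X: (m_inputs, n_units); Y: (s_outputs, n_units). Solves
    max eta s.t. X@lam <= X[:, j0], Y@lam >= eta*Y[:, j0],
    sum(lam) = 1, lam >= 0; reported efficiency = 1/eta (1.0 = efficient)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([[-1.0], np.zeros(n)])              # maximize eta
    A_ub = np.vstack([
        np.hstack([np.zeros((m, 1)), X]),                  # inputs: X@lam <= x0
        np.hstack([Y[:, [j0]], -Y]),                       # outputs: eta*y0 <= Y@lam
    ])
    b_ub = np.concatenate([X[:, j0], np.zeros(s)])
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n))])  # sum(lam) = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return 1.0 / res.x[0]
```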
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The widely accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a pure model-based method when estimating activation induced by each task as well as by both tasks.
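One plausible form of the projection step, sketched under the assumption that "linear projection" means regressing known task components out of the data before sICA (the abstract does not give the exact operator):

```python
import numpy as np

def project_out(X, r):
    """Remove the subspace spanned by task regressor(s) r from data X.
    X: (time, voxels) numpy array; r: (time,) or (time, k) regressors.
    Returns the residual, orthogonal to r, to which sICA can be applied."""
    R = r.reshape(len(r), -1) if r.ndim == 1 else r
    P = R @ np.linalg.pinv(R)          # projector onto span(R)
    return X - P @ X                   # task-related variance removed
```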
Microplate-based filter paper assay to measure total cellulase activity.
Xiao, Zhizhuang; Storms, Reginald; Tsang, Adrian
2004-12-30
The standard filter paper assay (FPA) published by the International Union of Pure and Applied Chemistry (IUPAC) is widely used to determine total cellulase activity. However, the IUPAC method is not suitable for the parallel analyses of large sample numbers. We describe here a microplate-based method for assaying large sample numbers. To achieve this, we reduced the enzymatic reaction volume to 60 microl from the 1.5 ml used in the IUPAC method. The modified 60-microl format FPA can be carried out in 96-well assay plates. Statistical analyses showed that the cellulase activities of commercial cellulases from Trichoderma reesei and Aspergillus species determined with our 60-microl format FPA were not significantly different from the activities measured with the standard FPA. Our results also indicate that the 60-microl format FPA is quantitative and highly reproducible. Moreover, the addition of excess beta-glucosidase increased the sensitivity of the assay by up to 60%. 2004 Wiley Periodicals, Inc.
Improvements of self-assembly properties via homopolymer addition or block-copolymer blends
NASA Astrophysics Data System (ADS)
Chevalier, X.; Nicolet, C.; Tiron, R.; Gharbi, Ahmed; Argoud, M.; Couderc, C.; Fleury, Guillaume; Hadziioannou, G.; Iliopoulos, I.; Navarro, C.
2014-03-01
The properties of cylindrical poly(styrene-b-methylmethacrylate) (PS-b-PMMA) BCP self-assembly in thin films are studied when the pure BCPs are blended either with a homopolymer or with another cylindrical PS-b-PMMA based BCP. For both of these approaches, we show that the period of the self-assembled features can be easily tuned and controlled, and that the final material presents interesting characteristics, such as the possibility of achieving thicker defect-free films, as compared to pure block-copolymers having the same period. Moreover, a statistical defectivity study based on a Delaunay triangulation and Voronoi analysis of the self-assemblies made with the different blends is described, and shows that despite their high polydispersity index, these blends also exhibit improved self-assembly properties (larger monocrystalline arrangements and enhanced kinetics of defect annihilation) as compared to pure and monodisperse block-copolymers. Finally, the behavior of the blends is compared to that of their pure counterparts in templated approaches such as contact-hole shrink, to evaluate their respective process windows and responses toward this physical constraint for lithographic applications.
Pasteurization of strawberry puree using a pilot plant pulsed electric fields (PEF) system
USDA-ARS?s Scientific Manuscript database
The processing of strawberry puree by pulsed electric fields (PEF) in a pilot plant system has never been evaluated. In addition, a method does not exist to validate the exact number and shape of the pulses applied during PEF processing. Both buffered peptone water (BPW) and fresh strawberry puree (...
Reference value sensitivity of measures of unfair health inequality
García-Gómez, Pilar; Schokkaert, Erik; Van Ourti, Tom
2014-01-01
Most politicians and ethical observers are not interested in pure health inequalities, as they want to distinguish between different causes of health differences. Measures of “unfair” inequality - direct unfairness and the fairness gap, but also the popular standardized concentration index - therefore neutralize the effects of what are considered to be “legitimate” causes of inequality. This neutralization is performed by putting a subset of the explanatory variables at reference values, e.g. their means. We analyze how the inequality ranking of different policies depends on the specific choice of reference values. We show with mortality data from the Netherlands that the problem is empirically relevant and we suggest a statistical method for fixing the reference values. PMID:24954998
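For readers unfamiliar with this family of measures, the (relative) concentration index is a one-liner over ranked data; standardization then amounts to replacing observed health with values predicted while the "legitimate" variables are held at the chosen reference values, which is exactly the step whose sensitivity the paper examines. A minimal sketch:

```python
import numpy as np

def concentration_index(health, income):
    """Relative concentration index: 2 * cov(h, fractional income rank)
    / mean(h). For a standardized version, pass health values predicted
    with the 'legitimate' causes fixed at reference values instead of
    raw observations."""
    order = np.argsort(income)
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n           # fractional ranks
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()
```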
Carbognin, Luisa; Sperduti, Isabella; Brunelli, Matteo; Marcolini, Lisa; Nortilli, Rolando; Pilotto, Sara; Zampiva, Ilaria; Merler, Sara; Fiorio, Elena; Filippi, Elisa; Manfrin, Erminia; Pellini, Francesca; Bonetti, Franco; Pollini, Giovanni Paolo; Tortora, Giampaolo; Bria, Emilio
2016-03-22
The aim of this analysis was to investigate the potential impact of the Ki67 assay in a series of patients affected by early stage invasive lobular carcinoma (ILC) who underwent surgery. Clinical-pathological data were correlated with disease-free and overall survival (DFS/OS). The maximally selected Log-Rank statistics analysis was applied to the Ki67 continuous variable to estimate appropriate cut-offs. The Subpopulation Treatment Effect Pattern Plot (STEPP) analysis was performed to assess the interaction between 'pure' or 'mixed' histology ILC and Ki67. At a median follow-up of 67 months, 10-year DFS and OS of 405 patients were 67.8% and 79.8%, respectively. Standardized Log-Rank statistics identified 2 optimal cut-offs (6 and 21%); 10-year DFS and OS were 75.1%, 66.5%, and 30.2% (p = 0.01) and 84.3%, 76.4% and 59% (p = 0.003), for patients with a Ki67 < 6%, between 6 and 21%, and >21%, respectively. Ki67 and lymph-node status were independent predictors of longer DFS and OS at the multivariate analysis, together with radiotherapy (for DFS) and age (for OS). The Ki67 cut-offs replicated well at the internal cross-validation analysis (DFS 85%, OS 100%). The STEPP analysis showed that the DFS rate decreases as Ki67 increases and that patients with 'pure' ILC performed worse than those with 'mixed' histology. Despite the retrospective and exploratory nature of the study, Ki67 was able to significantly discriminate the prognosis of patients with ILC, and the effect was more pronounced for patients with 'pure' ILC.
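The maximally selected log-rank procedure can be sketched as a scan over candidate cut-offs. The helper below is hypothetical (lifelines for the log-rank test, a percentile grid for candidates, names chosen here for illustration); note that p-values at the selected cut-off need adjustment for the multiple looks, which the authors address via cross-validation:

```python
import numpy as np
from lifelines.statistics import logrank_test

def best_cutoff(marker, time, event, grid=None):
    """Scan candidate cut-offs for a continuous marker (e.g. Ki67) and
    return the one maximizing the log-rank statistic. marker, time,
    event: 1-D numpy arrays; event is 1 if the endpoint occurred."""
    if grid is None:
        grid = np.percentile(marker, np.arange(10, 91, 5))
    best = (None, -np.inf)
    for c in grid:
        lo = marker < c
        if lo.all() or (~lo).all():       # skip degenerate splits
            continue
        res = logrank_test(time[lo], time[~lo],
                           event_observed_A=event[lo],
                           event_observed_B=event[~lo])
        if res.test_statistic > best[1]:
            best = (c, res.test_statistic)
    return best
```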
Sharma, Rama; Reddy, Vamsi Krishna L; Prashant, GM; Ojha, Vivek; Kumar, Naveen PG
2014-01-01
Context: Several studies have demonstrated the activity of natural plants on the dental biofilm and caries development, but few studies on the antimicrobial activity of coffee-based solutions were found in the literature. Furthermore, no study was available that examined the antimicrobial effect of coffee solutions with different percentages of chicory in them. Aims: To evaluate the antimicrobial activity of different combinations of coffee-chicory solutions and their anti-adherence effect on Streptococcus mutans on a glass surface. Materials and Methods: Test solutions were prepared. For antimicrobial activity testing, tubes containing test solution and culture medium were inoculated with a suspension of S. mutans, followed by plating on Brain Heart Infusion (BHI) agar. S. mutans adherence to glass in the presence of the different test solutions was also tested. The number of adhered bacteria (CFU/mL) was determined by the plating method. Statistical Analysis: Statistical significance was measured using one-way ANOVA followed by Tukey's post hoc test. A P value < 0.05 was considered statistically significant. Results: Pure chicory showed a significantly lower bacterial count compared to all other groups. Groups IV and V showed a significant reduction in bacterial counts over the period of 4 hrs. Regarding the anti-adherence effect, groups I-IV showed significantly less adherence of bacteria to the glass surface. Conclusions: Chicory exerted an antibacterial effect against S. mutans, while coffee significantly reduced the adherence of S. mutans to the glass surface. PMID:25328299
Nonlinear projection methods for visualizing Barcode data and application on two data sets.
Olteanu, Madalina; Nicolas, Violaine; Schaeffer, Brigitte; Denys, Christiane; Missoup, Alain-Didier; Kennis, Jan; Larédo, Catherine
2013-11-01
Developing tools for visualizing DNA sequences is an important issue in the Barcoding context. Visualizing Barcode data can be put in a purely statistical context, unsupervised learning. Clustering methods combined with projection methods have two closely linked objectives, visualizing and finding structure in the data. Multidimensional scaling (MDS) and self-organizing maps (SOM) are unsupervised statistical tools for data visualization. Both algorithms map data onto a lower-dimensional manifold: MDS looks for a projection that best preserves pairwise distances, while SOM preserves the topology of the data. Both algorithms were initially developed for Euclidean data, and the conditions necessary for their good implementation were not satisfied for Barcode data. We developed a workflow consisting of four steps: collapse data into distinct sequences; compute a dissimilarity matrix; run a modified version of SOM for dissimilarity matrices to structure the data and reduce dimensionality; and project the results using MDS. This methodology was applied to Astraptes fulgerator and Hylomyscus, an African rodent with debated taxonomy. We obtained very good results for both data sets, and the results were robust against unbalanced species. All the species in Astraptes were well displayed in very distinct groups in the various visualizations, except for LOHAMP and FABOV, which were mixed up. For Hylomyscus, our findings were consistent with known species, confirmed the existence of four unnamed taxa and suggested the existence of potentially new species. © 2013 John Wiley & Sons Ltd.
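Steps two and four of this workflow are straightforward to sketch with standard tools (the intermediate SOM structuring step is omitted, and the Euclidean distances below merely stand in for a DNA dissimilarity such as Kimura's two-parameter distance):

```python
import numpy as np
from sklearn.manifold import MDS

# Step 2: a dissimilarity matrix over collapsed sequences (stand-in data).
rng = np.random.default_rng(2)
points = rng.normal(size=(40, 5))
dissim = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# Step 4: project the dissimilarities to two dimensions for visualization.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
```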
The Ups and Downs of Repeated Cleavage and Internal Fragment Production in Top-Down Proteomics.
Lyon, Yana A; Riggs, Dylan; Fornelli, Luca; Compton, Philip D; Julian, Ryan R
2018-01-01
Analysis of whole proteins by mass spectrometry, or top-down proteomics, has several advantages over methods relying on proteolysis. For example, proteoforms can be unambiguously identified and examined. However, from a gas-phase ion-chemistry perspective, proteins are enormous molecules that present novel challenges relative to peptide analysis. Herein, the statistics of cleaving the peptide backbone multiple times are examined to evaluate the inherent propensity for generating internal versus terminal ions. The raw statistics reveal an inherent bias favoring production of terminal ions, which holds true regardless of protein size. Importantly, even if the full suite of internal ions is generated by statistical dissociation, terminal ions are predicted to account for at least 50% of the total ion current, regardless of protein size, if there are three backbone dissociations or fewer. Top-down analysis should therefore be a viable approach for examining proteins of significant size. Comparison of the purely statistical analysis with actual top-down data derived from ultraviolet photodissociation (UVPD) and higher-energy collisional dissociation (HCD) reveals that terminal ions account for much of the total ion current in both experiments. Terminal ion production is more favored in UVPD relative to HCD, which is likely due to differences in the mechanisms controlling fragmentation. Importantly, internal ions are not found to dominate from either the theoretical or experimental point of view.
The Ups and Downs of Repeated Cleavage and Internal Fragment Production in Top-Down Proteomics
NASA Astrophysics Data System (ADS)
Lyon, Yana A.; Riggs, Dylan; Fornelli, Luca; Compton, Philip D.; Julian, Ryan R.
2018-01-01
Analysis of whole proteins by mass spectrometry, or top-down proteomics, has several advantages over methods relying on proteolysis. For example, proteoforms can be unambiguously identified and examined. However, from a gas-phase ion-chemistry perspective, proteins are enormous molecules that present novel challenges relative to peptide analysis. Herein, the statistics of cleaving the peptide backbone multiple times are examined to evaluate the inherent propensity for generating internal versus terminal ions. The raw statistics reveal an inherent bias favoring production of terminal ions, which holds true regardless of protein size. Importantly, even if the full suite of internal ions is generated by statistical dissociation, terminal ions are predicted to account for at least 50% of the total ion current, regardless of protein size, if there are three backbone dissociations or fewer. Top-down analysis should therefore be a viable approach for examining proteins of significant size. Comparison of the purely statistical analysis with actual top-down data derived from ultraviolet photodissociation (UVPD) and higher-energy collisional dissociation (HCD) reveals that terminal ions account for much of the total ion current in both experiments. Terminal ion production is more favored in UVPD relative to HCD, which is likely due to differences in the mechanisms controlling fragmentation. Importantly, internal ions are not found to dominate from either the theoretical or experimental point of view.
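The combinatorial core of the counting argument is simple: k internal backbone cleavages split a linear chain into k + 1 fragments, of which exactly two retain a terminus, so on a pure fragment-count basis (ion-current weighting would need intensities) the terminal fraction is 2/(k + 1), i.e. 50% at three cleavages, consistent with the figure quoted above. A tiny illustration of this simplest equal-weighting view:

```python
# Terminal fragment fraction versus number of backbone cleavages k,
# under an equal-weighting (count-only) reading of the statistics.
for k in range(1, 6):
    print(f"{k} cleavages -> {k + 1} fragments, "
          f"terminal fraction = {2 / (k + 1):.2f}")
# 1 -> 1.00, 2 -> 0.67, 3 -> 0.50, 4 -> 0.40, 5 -> 0.33
```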
Kanjilal, Baishali; Noshadi, Iman; Bautista, Eddy J; Srivastava, Ranjan; Parnas, Richard S
2015-03-01
1,3-propanediol (1,3-PD) was produced with a robust fermentation process using waste glycerol feedstock from biodiesel production and a soil-based bacterial inoculum. An iterative inoculation method was developed to achieve independence from soil and selectively breed bacterial populations capable of glycerol metabolism to 1,3-PD. The inoculum showed high resistance to impurities in the feedstock. 1,3-PD selectivity and yield in batch fermentations were optimized by appropriate nutrient compositions and pH control. The batch yield of 1,3-PD was maximized to ~0.7 mol/mol for industrial glycerol, which was higher than that for pure glycerin. 16S rDNA sequencing results show a systematic selective enrichment of 1,3-PD producing bacteria with iterative inoculation and subsequent process control. A statistical design of experiments was carried out on industrial glycerol batches to optimize conditions, which were used to run two continuous flow stirred-tank reactor (CSTR) experiments over a period of >500 h each. A detailed analysis of steady states at three dilution rates is presented. Enhanced specific 1,3-PD productivity was observed at faster dilution rates due to lower levels of solvent degeneration. A 1,3-PD productivity, specific productivity, and yield of 1.1 g/l hr, 1.5 g/g hr, and 0.6 mol/mol of glycerol were obtained at a dilution rate of 0.1 h(-1), which is bettered only by pure strains in pure glycerin feeds.
A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts
ERIC Educational Resources Information Center
Schlauch, Robert S.; Carney, Edward
2007-01-01
Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…
Studying Weather and Climate Extremes in a Non-stationary Framework
NASA Astrophysics Data System (ADS)
Wu, Z.
2010-12-01
The study of weather and climate extremes often uses the theory of extreme values. Such a detection method has a major problem: to obtain the probability distribution of extremes, one has to implicitly assume the Earth's climate is stationary over a long period within which the climatology is defined. While such detection makes some sense in a purely statistical view of stationary processes, it can lead to misleading statistical properties of weather and climate extremes caused by long-term climate variability and change, and may also cause enormous difficulty in attributing and predicting these extremes. To alleviate this problem, here we report a novel non-stationary framework for studying weather and climate extremes. In this new framework, weather and climate extremes are defined as timescale-dependent quantities derived from the anomalies with respect to non-stationary climatologies of different timescales. With this non-stationary framework, the non-stationary and nonlinear nature of the climate system is taken into account, and the attribution and prediction of weather and climate extremes can then be separated into 1) the change of the statistical properties of the weather and climate extremes themselves and 2) the background climate variability and change. The new framework will use the ensemble empirical mode decomposition (EEMD) method, a recent major improvement of the Hilbert-Huang Transform for time-frequency analysis. Using this tool, we will adaptively decompose various weather and climate data from observations and climate models in terms of the components of the various natural timescales contained in the data. With such decompositions, the non-stationary statistical properties (both spatial and temporal) of weather and climate anomalies and of their corresponding climatologies will be analyzed and documented.
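To make the timescale-dependent anomaly idea concrete, here is a minimal sketch of an EEMD decomposition on a synthetic record, assuming the third-party PyEMD package (installed as EMD-signal); the signal, the parameter values, and the choice of treating the slowest components as the non-stationary climatology are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal EEMD sketch: decompose a synthetic record into IMFs and define
# anomalies against a slowly varying, non-stationary climatology.
import numpy as np
from PyEMD import EEMD  # assumed third-party package (pip install EMD-signal)

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 2000)
# Synthetic record: slow trend + interannual cycle + weather-scale noise.
signal = 0.02 * t + np.sin(2 * np.pi * t / 5.0) + 0.5 * rng.standard_normal(t.size)

eemd = EEMD(trials=100, noise_width=0.2)  # ensemble of noise-perturbed EMD runs
imfs = eemd.eemd(signal, t)               # rows ordered from fast to slow timescales

# Illustrative choice: the two slowest components act as the climatology,
# and extremes would then be defined from the residual anomalies.
climatology = imfs[-2:].sum(axis=0)
anomalies = signal - climatology
print(imfs.shape, round(float(anomalies.std()), 3))
```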
Estimating cotton nitrogen nutrition status using leaf greenness and ground cover information
USDA-ARS?s Scientific Manuscript database
Assessing nitrogen (N) status is important from economic and environmental standpoints. To date, many spectral indices to estimate cotton chlorophyll or N content have been developed using a purely statistical analysis approach, which often makes them subject to site-specific problems. This study describ...
What Is a Hydrogen Bond? Resonance Covalency in the Supramolecular Domain
ERIC Educational Resources Information Center
Weinhold, Frank; Klein, Roger A.
2014-01-01
We address the broader conceptual and pedagogical implications of recent recommendations of the International Union of Pure and Applied Chemistry (IUPAC) concerning the re-definition of hydrogen bonding, drawing upon the recommended IUPAC statistical methodology of mutually correlated experimental and theoretical descriptors to operationally…
Mathematical Modeling and Pure Mathematics
ERIC Educational Resources Information Center
Usiskin, Zalman
2015-01-01
Common situations, like planning air travel, can become grist for mathematical modeling and can promote the mathematical ideas of variables, formulas, algebraic expressions, functions, and statistics. The purpose of this article is to illustrate how the mathematical modeling that is present in everyday situations can be naturally embedded in…
Eckard, P R; Taylor, L T
1997-02-01
The supercritical fluid extraction (SFE) of an ionic compound, pseudoephedrine hydrochloride, from a spiked-sand surface was successfully demonstrated. The effects of carbon dioxide (CO2) density, supercritical fluid composition (pure vs. methanol-modified), and the addition of a commonly used reversed-phase liquid chromatographic ion-pairing reagent, 1-heptanesulfonic acid sodium salt, on extraction efficiency were examined. The extraction recoveries of pseudoephedrine hydrochloride from a spiked-sand surface with the addition of the ion-pairing reagent were shown to be statistically greater than the extraction recoveries without the ion-pairing reagent with both pure and methanol-modified carbon dioxide.
Decoherence and thermalization of a pure quantum state in quantum field theory.
Giraud, Alexandre; Serreau, Julien
2010-06-11
We study the real-time evolution of a self-interacting O(N) scalar field initially prepared in a pure, coherent quantum state. We present a complete solution of the nonequilibrium quantum dynamics from a 1/N expansion of the two-particle-irreducible effective action at next-to-leading order, which includes scattering and memory effects. We demonstrate that, restricting one's attention (or ability to measure) to a subset of the infinite hierarchy of correlation functions, one observes an effective loss of purity or coherence and, on longer time scales, thermalization. We point out that the physics of decoherence is well described by classical statistical field theory.
Raman scattering studies on PEG functionalized hydroxyapatite nanoparticles
NASA Astrophysics Data System (ADS)
Yamini, D.; Devanand Venkatasubbu, G.; Kumar, J.; Ramakrishnan, V.
2014-01-01
Pure hydroxyapatite (HAP) nanoparticles (NPs) have been synthesized by a wet chemical precipitation method. Raman spectral measurements have been made for pure HAP, pure polyethylene glycol (PEG) 6000, and PEG-coated HAP in different mass ratios (sample 1, sample 2 and sample 3). The peaks observed in the Raman spectrum of pure HAP and the XRD pattern have confirmed the formation of HAP NPs. Vibrational modes have been assigned for pure HAP and pure PEG 6000. The observed variation in peak position of the Raman-active vibrational modes of PEG in PEG-coated HAP has been elucidated in this work in terms of intermolecular interactions between PEG and HAP. Further, these results suggest that the functionalization of the nanoparticles may be independent of PEG mass.
Pure E and B polarization maps via Wiener filtering
NASA Astrophysics Data System (ADS)
Bunn, Emory F.; Wandelt, Benjamin
2017-08-01
In order to draw scientific conclusions from observations of cosmic microwave background (CMB) polarization, it is necessary to separate the contributions of the E and B components of the data. For data with incomplete sky coverage, there are ambiguous modes, which can be sourced by either E or B signals. Techniques exist for producing "pure" E and B maps, which are guaranteed to be free of cross-contamination, although the standard method, which involves constructing an eigenbasis, has a high computational cost. We show that such pure maps can be thought of as resulting from the application of a Wiener filter to the data. This perspective leads to far more efficient methods of producing pure maps. Moreover, by expressing the idea of purification in the general framework of Wiener filtering (i.e., maximization of a posterior probability), it leads to a variety of generalizations of the notion of pure E and B maps, e.g., accounting for noise or other contaminants in the data as well as correlations with temperature anisotropy.
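The Wiener-filter view of purification is easy to demonstrate in a toy setting. Below is a minimal numpy sketch in a basis where the signal covariance S and the noise covariance N are both diagonal; the spectra, mode count, and absence of sky masking are simplifying assumptions, not the paper's actual pipeline.

```python
# Toy Wiener filter: x_wf = S (S + N)^{-1} d, computed mode by mode
# because both covariances are assumed diagonal in this basis.
import numpy as np

rng = np.random.default_rng(1)
n_modes = 512
S = 1.0 / (1.0 + np.arange(n_modes)) ** 2   # assumed signal power spectrum
N = 0.05 * np.ones(n_modes)                 # assumed white noise spectrum

true_signal = np.sqrt(S) * rng.standard_normal(n_modes)
data = true_signal + np.sqrt(N) * rng.standard_normal(n_modes)

x_wf = S / (S + N) * data                   # posterior-maximizing estimate

# The filtered estimate has a smaller residual than the raw data.
print(round(float(np.mean((x_wf - true_signal) ** 2)), 4),
      round(float(np.mean((data - true_signal) ** 2)), 4))
```

The filter down-weights noise-dominated modes, which is the same mechanism that suppresses contamination from ambiguous modes in the pure-map construction.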
NASA Astrophysics Data System (ADS)
Undre, Pallavi G.; Birajdar, Shankar D.; Kathare, R. V.; Jadhav, K. M.
2018-05-01
In this work, pure and Ni-doped ZnO nanoparticles have been prepared by the sol-gel method. The influence of nickel doping on the structural, morphological and magnetic properties of the prepared nanoparticles was investigated by X-ray diffraction (XRD), scanning electron microscopy (SEM) and pulse-field magnetic hysteresis loops. The X-ray diffraction patterns show the formation of a single phase with hexagonal wurtzite structure for both pure and Ni-doped ZnO nanoparticles. The lattice parameters 'a' and 'c' of Ni-doped ZnO are slightly smaller than those of pure ZnO nanoparticles. The crystallite size of the prepared nanoparticles is found to be in the range of 29-31 nm. SEM was used to examine the surface morphology of the samples; the SEM images confirm the nanocrystalline nature of the present samples. From the pulse-field hysteresis loop technique, pure and Ni-doped ZnO nanoparticles show diamagnetic and ferromagnetic behavior at room temperature, respectively.
Lee, Eun-Young; Jun, Sul-Gi; Wright, Robert F.
2015-01-01
PURPOSE To compare the shear bond strength of various veneering materials to grade II commercially pure titanium (CP-Ti). MATERIALS AND METHODS Thirty specimens of CP-Ti disc with 9 mm diameter and 10 mm height were divided into three experimental groups. Each group was bonded to heat-polymerized acrylic resin (Lucitone 199), porcelain (Triceram), or indirect composite (Sinfony) with 7 mm diameter and 2 mm height. For the control group (n=10), Lucitone 199 was applied on type IV gold alloy castings. All samples were thermocycled for 5000 cycles in 5-55°C water. The maximum shear bond strength (MPa) was measured with a universal testing machine. After the shear bond strength test, the failure mode was assessed with an optical microscope and a scanning electron microscope. Statistical analysis was carried out with a Kruskal-Wallis test and Mann-Whitney test. RESULTS The mean shear bond strengths and standard deviations for the experimental groups were as follows: Ti-Lucitone 199 (12.11 ± 4.44 MPa); Ti-Triceram (11.09 ± 1.66 MPa); Ti-Sinfony (4.32 ± 0.64 MPa). All of these experimental groups showed lower shear bond strength than the control group (16.14 ± 1.89 MPa). However, there was no statistically significant difference between the Ti-Lucitone 199 group and the control group, or between the Ti-Lucitone 199 group and the Ti-Triceram group. Most of the failure patterns in all experimental groups were adhesive failures. CONCLUSION The shear bond strength of veneering materials such as heat-polymerized acrylic resin, porcelain, and indirect composite to CP-Ti was comparable to that of heat-polymerized acrylic resin to cast gold alloy. PMID:25722841
Comparative assessment of antimicrobial efficacy of different hand sanitizers: An in vitro study
Jain, Vardhaman Mulchand; Karibasappa, Gundabaktha Nagappa; Dodamani, Arun Suresh; Prashanth, Vishwakarma K.; Mali, Gaurao Vasant
2016-01-01
Background: To evaluate the antimicrobial efficacy of four different hand sanitizers against Staphylococcus aureus, Staphylococcus epidermidis, Pseudomonas aeruginosa, Escherichia coli, and Enterococcus faecalis, as well as to assess and compare the antimicrobial effectiveness among the four hand sanitizers. Materials and Methods: The present study is an in vitro study to evaluate the antimicrobial efficacy of Dettol, Lifebuoy, PureHands, and Sterillium hand sanitizers against clinical isolates of the aforementioned test organisms. The well variant of the agar disk diffusion test using Mueller-Hinton agar was used for evaluating the antimicrobial efficacy of the hand sanitizers. The McFarland 0.5 turbidity standard was taken as reference to adjust the turbidity of the bacterial suspensions. Fifty microliters of hand sanitizer was introduced into each of the four wells, while the fifth well, containing sterile water, served as a control. This was done for all the test organisms, and the plates were incubated for 24 h at 37°C. After incubation, antimicrobial effectiveness was determined by measuring the zone of inhibition with a digital caliper (mm). Results: The mean diameters of the zones of inhibition (in mm) observed in Group A (Sterillium), Group B (PureHands), Group C (Lifebuoy), and Group D (Dettol) were 22 ± 6, 7.5 ± 0.5, 9.5 ± 1.5, and 8 ± 1, respectively. Maximum inhibition was found with Group A against all the tested organisms. Data were statistically analyzed using analysis of variance, followed by a post hoc test for group-wise comparisons. The difference in the values of the different sanitizers was statistically significant at P < 0.001. Conclusion: Sterillium was the most effective hand sanitizer for maintaining hand hygiene. PMID:27857768
Dangre, Pankaj; Gilhotra, Ritu; Dhole, Shashikant
2016-10-01
The present investigation aimed to design a statistically optimized self-microemulsifying drug delivery system (SMEDDS) of eprosartan mesylate (EM). Preliminary screening was carried out to find a suitable combination of various excipients for the formulation. A 3² full factorial design was employed to determine the effect of various independent variables on the dependent (response) variables. The independent variables studied in the present work were the concentration of oil (X1) and the ratio of Smix (X2), whereas the dependent variables were emulsification time (s), globule size (nm), polydispersity index (PDI), and zeta potential (mV); multiple linear regression analysis (MLRA) was employed to understand the influence of the independent variables on the dependent variables. Furthermore, a numerical optimization technique using the desirability function was used to develop a new optimized formulation with desired values of the dependent variables. The optimized SMEDDS formulation of eprosartan mesylate (EMF-O) obtained by the above method exhibited an emulsification time of 118.45 ± 1.64 s; globule size, 196.81 ± 1.29 nm; zeta potential, -9.34 ± 1.2 mV; and polydispersity index, 0.354 ± 0.02. For the in vitro dissolution study, the optimized formulation (EMF-O) and the pure drug were separately entrapped in a dialysis bag, and the study indicated higher release of the drug from EMF-O. In vivo pharmacokinetic studies in Wistar rats using PK Solver software revealed a 2.1-fold increase in the oral bioavailability of EM from EMF-O, when compared with a plain suspension of the pure drug.
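For readers unfamiliar with the 3² design, the following sketch builds the nine-run design matrix in coded levels and fits a multiple linear regression; the response values are hypothetical placeholders, not the study's measurements.

```python
# 3^2 full factorial in coded levels plus a least-squares MLRA fit.
import itertools
import numpy as np

levels = (-1, 0, 1)                  # coded low / middle / high levels
runs = np.array(list(itertools.product(levels, levels)), dtype=float)
X1, X2 = runs.T                      # X1: oil concentration, X2: Smix ratio

# Hypothetical globule-size responses (nm) for the nine runs.
y = np.array([210.0, 205.0, 199.0, 206.0, 200.0, 196.0, 203.0, 198.0, 194.0])

# Model: y = b0 + b1*X1 + b2*X2 + b12*X1*X2
A = np.column_stack([np.ones(len(y)), X1, X2, X1 * X2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12"], np.round(coef, 3))))
```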
NASA Astrophysics Data System (ADS)
Simatos, N.; Perivolaropoulos, L.
2001-01-01
We use the publicly available code CMBFAST, as modified by Pogosian and Vachaspati, to simulate the effects of wiggly cosmic strings on the cosmic microwave background (CMB). Using the modified CMBFAST code, which takes into account vector modes and models wiggly cosmic strings by the one-scale model, we go beyond the angular power spectrum to construct CMB temperature maps with a resolution of a few degrees. The statistics of these maps are then studied using conventional and recently proposed statistical tests optimized for the detection of hidden temperature discontinuities induced by the Gott-Kaiser-Stebbins effect. We show, however, that these realistic maps cannot be distinguished in a statistically significant way from purely Gaussian maps with an identical power spectrum.
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
ERIC Educational Resources Information Center
Shih, Ching-Lin; Wang, Wen-Chung
2009-01-01
The multiple indicators, multiple causes (MIMIC) method with a pure short anchor was proposed to detect differential item functioning (DIF). A simulation study showed that the MIMIC method with an anchor of 1, 2, 4, or 10 DIF-free items yielded a well-controlled Type I error rate even when such tests contained as many as 40% DIF items. In general,…
Single-cell forensic short tandem repeat typing within microfluidic droplets.
Geng, Tao; Novak, Richard; Mathies, Richard A
2014-01-07
A short tandem repeat (STR) typing method is developed for forensic identification of individual cells. In our strategy, monodisperse 1.5 nL agarose-in-oil droplets are produced with a high frequency using a microfluidic droplet generator. Statistically dilute single cells, along with primer-functionalized microbeads, are randomly compartmentalized in the droplets. Massively parallel single-cell droplet polymerase chain reaction (PCR) is performed to transfer replicas of desired STR targets from the single-cell genomic DNA onto the coencapsulated microbeads. These DNA-conjugated beads are subsequently harvested and reamplified under statistically dilute conditions for conventional capillary electrophoresis (CE) STR fragment size analysis. The 9-plex STR profiles of single cells from both pure and mixed populations of GM09947 and GM09948 human lymphoid cells show that all alleles are correctly called and allelic drop-in/drop-out is not observed. The cell mixture study exhibits a good linear relationship between the observed and input cell ratios in the range of 1:1 to 10:1. Additionally, the STR profile of GM09947 cells could be deduced even in the presence of a high concentration of cell-free contaminating 9948 genomic DNA. Our method will be valuable for the STR analysis of samples containing mixtures of cells/DNA from multiple contributors and for low-concentration samples.
Dynamical Classifications of the Kuiper Belt
NASA Astrophysics Data System (ADS)
Maggard, Steven; Ragozzine, Darin
2018-04-01
The Minor Planet Center (MPC) contains a plethora of observational data on thousands of Kuiper Belt Objects (KBOs). Understanding their orbital properties refines our understanding of the formation of the solar system. My analysis pipeline, BUNSHIN, uses Bayesian methods to take the MPC observations and generate 30 statistically weighted orbital clones for each KBO that are propagated backwards along their orbits until the beginning of the solar system. These orbital integrations are saved as REBOUND SimulationArchive files (Rein & Tamayo 2017) which we will make publicly available, allowing many others to perform statistically robust dynamical classification or complex dynamical investigations of outer solar system small bodies. This database has been used to expand the known collisional family members of the dwarf planet Haumea. Detailed orbital integrations are required to determine the dynamical distances between family members, in the form of "Delta v" as measured from conserved proper orbital elements (Ragozzine & Brown 2007). Our preliminary results have already ~tripled the number of known Haumea family members, allowing us to show that the Haumea family can be identified purely through dynamical clustering. We will discuss the methods associated with BUNSHIN and the database it generates, the refinement of the updated Haumea family, a brief search for other possible clusterings in the outer solar system, and the potential of our research to aid other dynamicists.
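As an illustration of the kind of archive the abstract describes, here is a minimal sketch of a backward integration saved as a REBOUND SimulationArchive, assuming the Python API of roughly the cited era (Rein & Tamayo 2017); the orbital elements are toy placeholders, not actual MPC-derived clones, and the method names should be checked against the installed REBOUND version.

```python
# Sketch: integrate one toy orbital clone backwards and archive snapshots.
import rebound  # assumed: REBOUND's Python bindings, ca. 2017-era API

sim = rebound.Simulation()
sim.add(m=1.0)                              # the Sun
sim.add(m=0.0, a=43.0, e=0.19, inc=0.49)    # one KBO clone (toy elements)
sim.integrator = "whfast"
sim.dt = -1.0                               # negative step: integrate backwards

# Periodically save snapshots to a binary SimulationArchive file.
sim.automateSimulationArchive("clone.bin", interval=1e4)
sim.integrate(-1e6)                         # back through 10^6 time units

sa = rebound.SimulationArchive("clone.bin") # reload any snapshot later
print(len(sa), sa[-1].t)
```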
NASA Astrophysics Data System (ADS)
Magyar, Andrew
The recent discovery of cells in the human medial temporal lobe (MTL) that respond to purely conceptual features of the environment (particular people, landmarks, objects, etc.) has raised many questions about the nature of the neural code in humans. The goal of this dissertation is to develop a novel statistical method based upon maximum likelihood regression, which is then applied to these experiments in order to produce a quantitative description of the coding properties of the human MTL. In general, the method is applicable to any experiment in which a sequence of stimuli is presented to an organism while the binary responses of a large number of cells are recorded in parallel. The central concept underlying the approach is the total probability that a neuron responds to a random stimulus, called the neuronal sparsity. The model then estimates the distribution of response probabilities across the population of cells. Applying the method to single-unit recordings from the human medial temporal lobe, estimates of the sparsity distributions are acquired in four regions: the hippocampus, the entorhinal cortex, the amygdala, and the parahippocampal cortex. The resulting distributions are found to be sparse (a large fraction of cells with a low response probability) and highly non-uniform, with a large proportion of ultra-sparse neurons that possess a very low response probability and a smaller population of cells that respond much more frequently. Ramifications of the results are discussed in relation to the sparse coding hypothesis, and comparisons are made between the statistics of the human medial temporal lobe cells and place cells observed in the rodent hippocampus.
Statistical characterization of Earth’s heterogeneities from seismic scattering
NASA Astrophysics Data System (ADS)
Zheng, Y.; Wu, R.
2009-12-01
The distortion of a teleseismic wavefront carries information about the heterogeneities through which the wave propagates, and it is manifested as logarithmic amplitude (logA) and phase fluctuations of the direct P wave recorded by a seismic network. By cross-correlating the fluctuations (e.g., logA-logA or phase-phase), we obtain coherence functions, which depend on the spatial lags between stations and the angles between the incident waves. We have mathematically related the depth-dependent heterogeneity spectrum to the observable coherence functions using seismic scattering theory. We will show that our method has sharp depth resolution. Using the Hi-net seismic network data in Japan, we have inverted power spectra for two depth ranges, ~0-120 km and below ~120 km depth. The coherence functions formed by different groups of stations or by different groups of earthquakes at different back azimuths are similar. This demonstrates that the method is statistically stable and the inhomogeneities are statistically stationary. In both depth intervals, the trend of the spectral amplitude decays from large scale to small scale in a power-law fashion, with exceptions at ~50 km for the logA data. Due to the spatial spacing of the seismometers, only information from length scales of 15 km to 200 km is inverted. However, our scattering method provides new information on small to intermediate scales that are comparable to the scales of recycled materials, and it is thus complementary to global seismic tomography, which reveals mainly large-scale heterogeneities on the order of ~1000 km. The small-scale heterogeneities revealed here are not likely of purely thermal origin. Therefore, the length scale and strength of heterogeneities as a function of depth may provide important constraints on the mechanical mixing of various components in mantle convection.
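The logA-logA coherence function is straightforward to emulate on synthetic data. The sketch below generates spatially correlated amplitude fluctuations across a toy linear array and bins their covariance by station separation; the exponential correlation model and all parameter values are illustrative assumptions, not the Hi-net analysis.

```python
# Toy logA-logA coherence: covariance of amplitude fluctuations vs. separation.
import numpy as np

rng = np.random.default_rng(2)
n_sta, n_ev = 50, 200
x = np.sort(rng.uniform(0.0, 200.0, n_sta))          # station positions (km)

# Spatially correlated logA field (assumed exponential correlation, 30 km).
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 30.0)
L = np.linalg.cholesky(C + 1e-9 * np.eye(n_sta))
logA = L @ rng.standard_normal((n_sta, n_ev))        # stations x events

logA -= logA.mean(axis=1, keepdims=True)             # remove per-station mean
cov = (logA @ logA.T) / n_ev                         # station-pair covariance
sep = np.abs(x[:, None] - x[None, :])

bins = np.arange(0.0, 120.0, 15.0)                   # separation bins (km)
idx = np.digitize(sep.ravel(), bins)
coh = [cov.ravel()[idx == k].mean() for k in range(1, len(bins))]
print(np.round(coh, 3))   # decays roughly as exp(-separation / 30 km)
```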
PSYCHE Pure Shift NMR Spectroscopy.
Foroozandeh, Mohammadali; Morris, Gareth; Nilsson, Mathias
2018-03-13
Broadband homodecoupling techniques in NMR, also known as "pure shift" methods, aim to enhance spectral resolution by suppressing the effects of homonuclear coupling interactions to turn multiplet signals into singlets. Such techniques typically work by selecting a subset of "active" nuclear spins to observe, and selectively inverting the remaining, "passive", spins to reverse the effects of coupling. Pure Shift Yielded by Chirp Excitation (PSYCHE) is one such method; it is relatively recent, but has already been successfully implemented in a range of different NMR experiments. Paradoxically, PSYCHE is one of the trickiest of pure shift NMR techniques to understand but one of the easiest to use. Here we offer some insights into theoretical and practical aspects of the method, and into the effects and importance of the experimental parameters. Some recent improvements that enhance the spectral purity of PSYCHE spectra will be presented, and some experimental frameworks including examples in 1D and 2D NMR spectroscopy, for the implementation of PSYCHE will be introduced. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2015-04-01
Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The different methods have been categorized into time-independent, time-dependent and hybrid methods, where the last group represents methods in which data beyond historical earthquake statistics have been used. It is necessary to distinguish in this way between purely statistical approaches, where historical earthquake data represent the only direct data source, and algorithms which incorporate further information, e.g., spatial data of fault distributions, or which incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods which can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g., for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified, as well as the publication history. All these different aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state of the art.
Making Pure Fine-Grained Inorganic Powder
NASA Technical Reports Server (NTRS)
Wood, C.
1985-01-01
A sustained-arc plasma chemical reactor fabricates very-fine-grained inorganic solids having low thermal conductivity. The powder fabrication method, based on a plasma-tube technique, produces pure solids without the contamination commonly introduced by grinding.
Relative risk estimates from spatial and space-time scan statistics: Are they biased?
Prates, Marcos O.; Kulldorff, Martin; Assunção, Renato M.
2014-01-01
The purely spatial and space-time scan statistics have been successfully used by many scientists to detect and evaluate geographical disease clusters. Although the scan statistic has high power in correctly identifying a cluster, no study has considered the estimates of the cluster relative risk in the detected cluster. In this paper we evaluate whether there is any bias in these estimated relative risks. Intuitively, one may expect that the estimated relative risks have an upward bias, since the scan statistic cherry-picks high-rate areas to include in the cluster. We show that this intuition is correct for clusters with low statistical power, but with medium to high power the bias becomes negligible. The same behaviour is not observed for the prospective space-time scan statistic, where there is an increasingly conservative downward bias of the relative risk as the power to detect the cluster increases. PMID:24639031
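The intuition about selection bias is easy to reproduce with a toy Monte Carlo. The sketch below plants a single true cluster among Poisson-distributed region counts, "detects" the highest-rate region, and averages the estimated relative risk; this simplified single-region scan illustrates the selection effect and is not Kulldorff's actual scan statistic.

```python
# Toy Monte Carlo: selection bias in the relative risk of a "detected" cluster.
import numpy as np

rng = np.random.default_rng(3)
n_regions, true_rr, n_sims = 100, 1.5, 2000
expected = np.full(n_regions, 20.0)        # expected counts under the null

estimates = []
for _ in range(n_sims):
    mu = expected.copy()
    mu[0] *= true_rr                       # region 0 carries the true cluster
    counts = rng.poisson(mu)
    j = int(np.argmax(counts / expected))  # "detect" the highest-rate region
    inside, outside = counts[j], counts.sum() - counts[j]
    e_in, e_out = expected[j], expected.sum() - expected[j]
    estimates.append((inside / e_in) / (outside / e_out))

# With a weak cluster the mean estimate exceeds 1.5 (upward bias);
# raising true_rr (more power) shrinks the bias toward zero.
print(round(float(np.mean(estimates)), 3))
```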
Comparison of five methods for extraction of Legionella pneumophila from respiratory specimens.
Wilson, Deborah; Yen-Lieberman, Belinda; Reischl, Udo; Warshawsky, Ilka; Procop, Gary W
2004-12-01
The efficiencies of five commercially available nucleic acid extraction methods were evaluated for the recovery of a standardized inoculum of Legionella pneumophila in respiratory specimens (sputum and bronchoalveolar lavage [BAL] specimens). The concentrations of Legionella DNA recovered from sputa with the automated MagNA Pure (526,200 CFU/ml) and NucliSens (171,800 CFU/ml) extractors were greater than those recovered with the manual methods (i.e., Roche High Pure kit [133,900 CFU/ml], QIAamp DNA Mini kit [46,380 CFU/ml], and ViralXpress kit [13,635 CFU/ml]). The rank order was the same for extracts from BAL specimens, except that for this specimen type the QIAamp DNA Mini kit recovered more than the Roche High Pure kit.
Using Symbolic-Logic Matrices To Improve Confirmatory Factor Analysis Techniques.
ERIC Educational Resources Information Center
Creighton, Theodore B.; Coleman, Donald G.; Adams, R. C.
A continuing and vexing problem associated with survey instrument development is the creation of items, initially, that correlate favorably a posteriori with constructs being measured. This study tests the use of symbolic-logic matrices developed by D. G. Coleman (1979) in creating factorially "pure" statistically discrete constructs in…
Microstructural Evolution During Friction Stir Welding of Near-Alpha Titanium
2009-02-01
completion of the weld and the weld end was quenched with cold water. This process was intended to preserve the microstructure surrounding the...limited the statistics supporting this result. Mironov et al. [31] also measured the texture developed from friction stir processing of pure iron
Diagnostics of Tree Diseases Caused by Phytophthora austrocedri Species.
Mulholland, Vincent; Elliot, Matthew; Green, Sarah
2015-01-01
We present methods for the detection and quantification of four Phytophthora species that are pathogenic on trees: Phytophthora ramorum, Phytophthora kernoviae, Phytophthora lateralis, and Phytophthora austrocedri. Nucleic acid extraction methods are presented for phloem tissue from trees, for soil, and for pure cultures on agar plates. Real-time PCR methods are presented, including primer and probe sets for each species and general advice on real-time PCR setup and data analysis. A method for sequence-based identification, useful for pure cultures, is also included.
Uehleke, Bernhard; Hopfenmueller, Werner; Stange, Rainer; Saller, Reinhard
2012-01-01
Ancient and medieval herbal books are often believed to describe the same claims still in use today. Medieval herbal books, however, provide long lists of claims for each herb, most of which are not approved today, while the herb's modern use is often missing. So the hypothesis arises that a medieval author could have randomly hit on 'correct' claims among his many 'wrong' ones. We developed a statistical procedure based on a simple probability model. We applied our procedure to the herbal books of Hildegard von Bingen (1098-1179) as an example of its usefulness. Claim attributions for a certain herb were classified as 'correct' if approximately the same as indicated in current monographs. The number of 'correct' claim attributions was significantly higher than it could have been by pure chance, even though the vast majority of Hildegard von Bingen's claims were not 'correct'. The hypothesis that Hildegard would have achieved her 'correct' claims purely by chance can be clearly rejected. The finding that medical claims provided by a medieval author are significantly related to modern herbal use supports the importance of traditional medicinal systems as an empirical source. However, since many traditional claims are not in accordance with modern applications, they should be used carefully and analyzed in a systematic, statistics-based manner. Our statistical approach can be used for further systematic comparison of herbal claims of traditional sources as well as in the fields of ethnobotany and ethnopharmacology. Copyright © 2012 S. Karger AG, Basel.
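A simple probability model of this kind can be phrased as a binomial tail test: how likely is it that at least the observed number of 'correct' attributions arises if each claim independently matches a monograph with some small chance probability? The counts and chance rate below are hypothetical placeholders, not the study's data.

```python
# Binomial tail probability of scoring at least n_correct "hits" by chance.
from scipy.stats import binom

n_claims, n_correct, p_chance = 400, 30, 0.03   # hypothetical numbers

# One-sided p-value: P(X >= n_correct) for X ~ Binomial(n_claims, p_chance).
p_value = binom.sf(n_correct - 1, n_claims, p_chance)
print(f"P(>= {n_correct} correct by chance) = {p_value:.2e}")
```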
Impact of South American heroin on the US heroin market 1993–2004
Ciccarone, Daniel; Unick, George J; Kraus, Allison
2008-01-01
Background The past two decades have seen an increase in heroin-related morbidity and mortality in the United States. We report on trends in US heroin retail price and purity, including the effect of the entry of Colombian-sourced heroin on the US heroin market. Methods The average standardized price ($/mg-pure) and purity (% by weight) of heroin from 1993 to 2004 were obtained from US Drug Enforcement Administration retail purchase data for 20 metropolitan statistical areas. Univariate statistics, robust Ordinary Least Squares regression, and mixed fixed- and random-effect growth curve models were used to predict the price and purity data in each metropolitan statistical area over time. Results Over the 12 study years, heroin price decreased 62%. The median percentage of all heroin samples that are of South American origin increased an absolute 7% per year. Multivariate models suggest that percent South American heroin is a significant predictor of lower heroin price and higher purity, adjusting for time and demographics. Conclusion These analyses reveal trends toward historically low-cost heroin in many US cities. These changes correspond to the entrance into and rapid domination of the US heroin market by Colombian-sourced heroin. The implications of these changes are discussed. PMID:19201184
NASA Astrophysics Data System (ADS)
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian networks, and possibility theory, in the form of fuzzy logic systems, has recently been introduced to provide a rigorous framework for high-level inference. Previous research has developed the theoretical basis and benefits of the hybrid approach. What has been lacking, however, is a concrete experimental comparison of the hybrid framework with traditional fusion methods to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis comparing the accuracy and performance of hybrid network theory with pure Bayesian and fuzzy systems and with an inexact Bayesian system approximated using particle filtering. To accomplish this task, domain-specific models are developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results is performed to quantify the benefit of hybrid inference over other fusion tools.
Analgesic principle from Curcuma amada.
Faiz Hossain, Chowdhury; Al-Amin, Mohammad; Rahman, Kazi Md Mahabubur; Sarker, Aurin; Alam, Md Mahamudul; Chowdhury, Mahmudul Hasan; Khan, Shamsun Nahar; Sultana, Gazi Nurun Nahar
2015-04-02
The rhizome of Curcuma amada has been used as a folk medicine for the treatment of rheumatic disorders in the northern part of Bangladesh and has also been used for the treatment of inflammation and fever in the Ayurvedic and Unani systems of medicine. The aim of the study was to investigate the analgesic principle of the MeOH extract of the rhizome of Curcuma amada by in vivo bioassay-guided chromatographic separation and purification, and to elucidate the structure of the purified compound by spectroscopic methods. Dried powder of Curcuma amada rhizomes was extracted with MeOH. The analgesic activity of the crude extract and its chromatographic fractions, as well as of the purified compound itself, was evaluated by the acetic acid-induced writhing method and the formalin-induced licking test in Swiss albino mice. The MeOH extract was separated by chromatographic methods and the pure active compound was purified by crystallization in hexanes. The structure of the pure compound was then elucidated by spectroscopic methods. The MeOH extract of Curcuma amada exhibited 41.63% and 45.53% inhibition in the acetic acid-induced writhing method at doses of 200 mg/kg and 400 mg/kg, respectively. It also exerted 20.43% and 28.50% inhibition in the early phase and 30.41% and 42.95% inhibition in the late phase of the formalin-induced licking test at doses of 200 mg/kg and 400 mg/kg, respectively. Vacuum liquid chromatography (VLC) of the crude extract yielded five fractions, and Fr. 1 was found to have the most potent analgesic activity, with inhibition of 36.96% in the acetic acid-induced writhing method and 47.51% (early phase) and 39.50% (late phase) in the formalin-induced licking test at a dose of 200 mg/kg. Column chromatography of Fr. 1 on silica gel generated seven fractions (SF. 1-SF. 7). SF. 2 showed the most potent activity, with inhibition of 49.81% in the acetic acid-induced writhing method at a dose of 100 mg/kg. Crystallization of SF. 2 yielded compound 1 (zederone, 520 mg). It showed statistically significant inhibition of 38.91% and 52.14% in the acetic acid-induced writhing method at doses of 20 mg/kg and 40 mg/kg, respectively. Moreover, it also showed statistically significant inhibition of 27.79% and 29.93% (early phase) and of 38.24% and 46.08% (late phase) in the formalin-induced licking test at doses of 20 mg/kg and 40 mg/kg, respectively. The isolation and characterization of zederone (1) as the analgesic principle of Curcuma amada corroborates its use in Ayurvedic, Unani and folk medicines for the treatment of rheumatic disorders and also contributes to its pharmacological validation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Big Data Analytics for Scanning Transmission Electron Microscopy Ptychography
NASA Astrophysics Data System (ADS)
Jesse, S.; Chi, M.; Belianinov, A.; Beekman, C.; Kalinin, S. V.; Borisevich, A. Y.; Lupini, A. R.
2016-05-01
Electron microscopy is undergoing a transition: from the model of producing only a few micrographs, through the current state where many images and spectra can be digitally recorded, to a new mode where very large volumes of data (movies, ptychographic and multi-dimensional series) can be rapidly obtained. Here, we discuss the application of so-called "big-data" methods to high-dimensional microscopy data, using unsupervised multivariate statistical techniques, in order to explore salient image features in a specific example of BiFeO3 domains. Remarkably, k-means clustering reveals domain differentiation despite the fact that the algorithm is purely statistical in nature and does not require any prior information regarding the material, any coexisting phases, or any differentiating structures. While this is a somewhat trivial case, this example signifies the extraction of useful physical and structural information without any prior bias regarding the sample or the instrumental modality. Further interpretation of these types of results may still require human intervention. However, the open nature of this algorithm and its wide availability enable the broad collaborations and exploratory work necessary for efficient data analysis in electron microscopy.
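The purely statistical nature of the clustering step can be illustrated with a few lines of scikit-learn. In the sketch below, a synthetic stand-in for a flattened multi-dimensional dataset (scan positions by detector features) is clustered with no physical model whatsoever; the data shape and the two-domain structure are invented for illustration.

```python
# Unsupervised k-means on synthetic "scan position x feature" data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_side, n_feat = 64, 256                    # 64x64 scan grid, 256 features each
left_half = (np.arange(n_side)[:, None] < n_side // 2).repeat(n_side, 1)

X = rng.standard_normal((n_side * n_side, n_feat))
X[left_half.ravel()] += 0.5                 # domain-dependent mean shift

# Purely statistical clustering: no material model, no prior structures.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
domain_map = km.labels_.reshape(n_side, n_side)
print(np.unique(domain_map, return_counts=True))
```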
Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa
2017-09-01
A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of a kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transforms the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA. The optimization equation of RKADA is solved using a block coordinate descent scheme, and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed for comparison with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectral differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.
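The robustness mechanism named above is the Huber loss, which penalizes small residuals quadratically and large ones only linearly. A minimal sketch of the loss itself (not the full RKADA solver) follows; the delta threshold is an arbitrary choice for illustration.

```python
# Huber loss: quadratic near zero, linear in the tails, so outliers
# contribute bounded gradients instead of dominating the fit.
import numpy as np

def huber(r, delta=1.0):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

residuals = np.array([-5.0, -0.5, 0.0, 0.3, 4.0])
print(huber(residuals))   # large residuals grow linearly, not quadratically
```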
Synchronization using pulsed edge tracking in optical PPM communication system
NASA Technical Reports Server (NTRS)
Gagliardi, R.
1972-01-01
A pulse position modulated (PPM) optical communication system using narrow pulses of light for data transmission requires accurate time synchronization between transmitter and receiver. The presence of signal energy in the form of optical pulses suggests the use of a pulse edge tracking method of maintaining the necessary timing. The edge tracking operation in a binary PPM system is examined, taking into account the quantum nature of the optical transmissions. Consideration is given first to pure synchronization using a periodic pulsed intensity, then extended to the case where position modulation is present and auxiliary bit decisioning is needed to aid the tracking operation. Performance analysis is made in terms of timing error and its associated statistics. Timing error variances are shown as a function of system signal to noise ratio.
Cellular response of chondrocytes to magnesium alloys for orthopedic applications
LIAO, YI; XU, QINGLI; ZHANG, JIAN; NIU, JIALING; YUAN, GUANGYIN; JIANG, YAO; HE, YAOHUA; WANG, XINLING
2015-01-01
In the present study, the effects of Mg-Nd-Zn-Zr (JDBM), brushite (CaHPO4·2H2O)-coated JDBM (C-JDBM), AZ31, WE43, pure magnesium (Mg) and Ti alloy (TC4) on rabbit chondrocytes were investigated in vitro. Adhesion experiments revealed satisfactory morphology of the chondrocytes on the surface of all samples. An indirect cytotoxicity test using the MTT assay revealed that C-JDBM and TC4 exhibited results similar to those of the negative control, better than those obtained with JDBM, AZ31, WE43 and pure Mg (p<0.05). There were no statistically significant differences observed between the JDBM, AZ31, WE43 and pure Mg groups (p>0.05). The results of indirect cell cytotoxicity and proliferation assays, as well as those of the apoptosis assay, glycosaminoglycan (GAG) quantification, assessment of collagen II (Col II) levels and RT-qPCR, revealed a similar trend to that observed with the MTT assay. These findings suggested that the JDBM alloy was highly biocompatible with chondrocytes in vitro, yielding results similar to those of AZ31, WE43 and pure Mg. Furthermore, the CaHPO4·2H2O coating significantly improved the biocompatibility of this alloy. PMID:25975216
ERIC Educational Resources Information Center
Schlauch, Robert S.; Han, Heekyung J.; Yu, Tzu-Ling J.; Carney, Edward
2017-01-01
Purpose: The purpose of this article is to examine explanations for pure-tone average-spondee threshold differences in functional hearing loss. Method: Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20-90 dB SPL).…
Spatial and space-time clustering of tuberculosis in Gurage Zone, Southern Ethiopia.
Tadesse, Sebsibe; Enqueselassie, Fikre; Hagos, Seifu
2018-01-01
Spatial targeting is advocated as an effective method that contributes to achieving tuberculosis control in high-burden countries. However, there is a paucity of studies clarifying the spatial nature of the disease in these countries. This study aims to identify the location, size and risk of purely spatial and space-time clusters of high tuberculosis occurrence in Gurage Zone, Southern Ethiopia, during 2007 to 2016. A total of 15,805 patient records retrieved from unit TB registers were included in the final analyses. The spatial and space-time cluster analyses were performed using the global Moran's I, Getis-Ord Gi* and Kulldorff's scan statistics. Eleven purely spatial and three space-time clusters were detected (P < 0.001). The clusters were concentrated in border areas of the Gurage Zone. There were considerable spatial variations in the risk of tuberculosis by year during the study period. This study showed that tuberculosis clusters were mainly concentrated at border areas of the Gurage Zone during the study period, suggesting that there has been sustained transmission of the disease within these locations. The findings may help intensify the implementation of tuberculosis control activities in these locations. Further study is warranted to explore the roles of various ecological factors in the observed spatial distribution of tuberculosis.
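Of the three statistics named above, the global Moran's I is compact enough to show in full. The sketch below implements the textbook formula on a hypothetical five-district adjacency; a real analysis would use the study's district geometry and observed rates.

```python
# Global Moran's I: spatial autocorrelation of rates under a weight matrix w.
import numpy as np

def morans_i(x, w):
    z = np.asarray(x, dtype=float) - np.mean(x)
    return (len(z) / w.sum()) * (w * np.outer(z, z)).sum() / (z @ z)

# Five districts on a line with rook-style adjacency (hypothetical rates).
w = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
rates = np.array([120.0, 110.0, 95.0, 40.0, 35.0])
print(round(float(morans_i(rates, w)), 3))   # positive: similar rates cluster
```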
A fractal growth model: Exploring the connection pattern of hubs in complex networks
NASA Astrophysics Data System (ADS)
Li, Dongyan; Wang, Xingyuan; Huang, Penghe
2017-04-01
Fractals are ubiquitous in many real-world networks. Previous research showed that strong disassortativity between the hub nodes on all length scales is the key principle that gives rise to the fractal architecture of networks. Although fractal properties emerge in some models, there has been little research on fractal growth models or quantitative analysis of the strength of this disassortativity in fractal models. In this paper, we propose a novel inverse renormalization method, named Box-based Preferential Attachment (BPA), to build fractal growth models in which preferential attachment is performed at the box level. The proposed models provide a new framework that demonstrates the small-world-fractal transition. We also demonstrate, for the first time, the statistical characteristics of the connection patterns of the hubs in fractal networks. The experimental results show that, given a proper growing scale and added edges, the proposed models can clearly show pure small-world behavior, pure fractal behavior, or both. They also show that the hub connection ratio follows a normal distribution in many real-world networks. Finally, comparisons of the connection patterns between the proposed models and biological and technical networks are performed. The results give a useful reference for exploring the growth principles of real-world networks and for modeling their connection patterns.
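For contrast with the paper's box-level rule, classic node-level preferential attachment is available directly in networkx. The sketch below grows a standard Barabási-Albert network and reports its degree assortativity, which is typically negative (disassortative), the property the abstract ties to fractality; this is the ordinary node-level model, not BPA itself.

```python
# Node-level preferential attachment (Barabasi-Albert) for comparison;
# BPA, per the paper, applies the same attachment rule at the box level.
import networkx as nx

g = nx.barabasi_albert_graph(n=1000, m=2, seed=0)
degrees = sorted((d for _, d in g.degree()), reverse=True)
print(degrees[:5])                                       # a few pronounced hubs
print(round(nx.degree_assortativity_coefficient(g), 3))  # typically < 0
```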
Interactive semiautomatic contour delineation using statistical conditional random fields framework.
Hu, Yu-Chi; Grossberg, Michael D; Wu, Abraham; Riaz, Nadeem; Perez, Carmen; Mageras, Gig S
2012-07-01
Contouring a normal anatomical structure during radiation treatment planning requires significant time and effort. The authors present a fast and accurate semiautomatic contour delineation method to reduce the time and effort required of expert users. Following an initial segmentation on one CT slice, the user marks the target organ and nontarget pixels with a few simple brush strokes. The algorithm calculates statistics from this information that, in turn, determine the parameters of an energy function containing both boundary and regional components. The method uses a conditional random field graphical model to define the energy function to be minimized for obtaining an estimated optimal segmentation, and a graph partition algorithm to efficiently solve the energy function minimization. Organ boundary statistics are estimated from the segmentation and propagated to subsequent images; regional statistics are estimated from the simple brush strokes that are either propagated or redrawn as needed on subsequent images. This greatly reduces the user input needed and speeds up segmentation. The proposed method can be further accelerated with graph-based interpolation of alternating slices in place of user-guided segmentation. CT images from a phantom and from patients were used to evaluate this method. The authors determined the sensitivity and specificity of organ segmentations using physician-drawn contours as ground truth, as well as the predicted-to-ground truth surface distances. Finally, three physicians evaluated the contours for subjective acceptability. Interobserver and intraobserver analyses were also performed, and Bland-Altman plots were used to evaluate agreement. Liver and kidney segmentations in patient volumetric CT images show that boundary samples provided on a single CT slice can be reused through the entire 3D stack of images to obtain accurate segmentation. In the liver, our method has better sensitivity and specificity (0.925 and 0.995) than region growing (0.897 and 0.995) and level set methods (0.912 and 0.985), as well as a shorter mean predicted-to-ground truth distance (2.13 mm) compared to region growing (4.58 mm) and level set methods (8.55 mm and 4.74 mm). Similar results are observed in kidney segmentation. Physician evaluation of ten liver cases showed that 83% of contours did not need any modification, while 6% of contours needed modifications as assessed by two or more evaluators. In interobserver and intraobserver analysis, Bland-Altman plots showed our method to have better repeatability than the manual method, while delineation time was on average 15% faster. Our method achieves high accuracy in liver and kidney segmentation and considerably reduces the time and labor required for contour delineation. Since it extracts purely statistical information from the samples interactively specified by expert users, the method avoids heuristic assumptions commonly used by other methods. In addition, the method can be extended to 3D directly without modification because the underlying graphical framework and graph partition optimization method fit naturally with the image grid structure.
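The boundary-plus-regional energy and its graph-partition minimization can be sketched at toy scale with a max-flow cut, in the spirit of interactive graph-cut segmentation. In the example below, a 4x4 "image" gets terminal edges from brush-derived intensity means and neighbor edges from intensity contrast; the means, weights, and image are invented, and this is a generic binary cut, not the authors' CRF formulation.

```python
# Tiny binary segmentation energy minimized by a max-flow / min-cut.
import networkx as nx
import numpy as np

img = np.array([[0.1, 0.2, 0.8, 0.9],
                [0.2, 0.1, 0.9, 0.8],
                [0.1, 0.2, 0.7, 0.9],
                [0.2, 0.3, 0.8, 0.7]])
mu_fg, mu_bg, lam = 0.8, 0.2, 2.0   # assumed stroke statistics, boundary weight

G = nx.DiGraph()
for (i, j), v in np.ndenumerate(img):
    G.add_edge("s", (i, j), capacity=(v - mu_bg) ** 2)  # paid if labeled background
    G.add_edge((i, j), "t", capacity=(v - mu_fg) ** 2)  # paid if labeled foreground
    for di, dj in ((0, 1), (1, 0)):                     # 4-neighbour boundary term
        ni, nj = i + di, j + dj
        if ni < img.shape[0] and nj < img.shape[1]:
            w = lam * np.exp(-10.0 * (v - img[ni, nj]) ** 2)
            G.add_edge((i, j), (ni, nj), capacity=w)
            G.add_edge((ni, nj), (i, j), capacity=w)

cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
foreground = sorted(p for p in source_side if p != "s")
print(foreground)   # the high-intensity right half of the toy image
```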
El Khoury, Mona; Sanchez, Lilia Maria; Lalonde, Lucie; Trop, Isabelle; David, Julie; Mesurolle, Benoît
2017-04-01
To assess the impact on the final outcome at surgery of flat epithelial atypia (FEA) found concomitantly with lobular neoplasia (LN) in biopsy specimens, compared with pure biopsy-proven FEA. Approval from the institutional review board of the CHUM (Centre Hospitalier Universitaire de Montréal) was obtained. A retrospective review of our database between 2009 and 2013 identified 81 females (mean age 54 years, range 38-90 years) with 81 biopsy-proven FEA lesions. These were pure or associated with LN only in 59/81 (73%) and 22/81 (27%) cases, respectively. Overall, 57/81 (70%) patients underwent surgery and 24/81 (30%) patients underwent mammographic surveillance with a mean follow-up of 36 months. FEA presented most often as microcalcifications, in 68/81 (84%) patients, and these were mostly amorphous (49/68; 72%). After excluding radiologic-pathologic discordant cases, pure FEA proved to be malignant at surgery in 1/41 (2%; 95% confidence interval 0.06-12.9). There was no statistically significant difference in the upgrade to malignancy whether FEA lesions were pure or associated with LN at biopsy (p = 0.4245); however, when paired with LN in biopsy specimens, these lesions were more frequently associated with atypical ductal hyperplasia (ADH) at surgery than was pure FEA (p = 0.012). Our results show a 2% upgrade rate to malignancy for pure FEA lesions. When FEA is found in association with LN at biopsy, surgical excision yields ADH more frequently than with pure FEA, thus warranting close surveillance or even surgical excision. Advances in knowledge: The association of LN with FEA at biopsy was more frequently associated with ADH at surgery than was pure FEA. If a biopsy-proven FEA lesion is deemed concordant with the imaging finding, when paired with LN at biopsy, careful surveillance or even surgical excision is suggested.
Egan, Sarah J; van Noort, Emily; Chee, Abby; Kane, Robert T; Hoiles, Kimberley J; Shafran, Roz; Wade, Tracey D
2014-12-01
Previous research has shown cognitive-behavioural treatment (CBT) to be effective in reducing perfectionism. The present study investigated the efficacy of two formats of CBT for perfectionism (CBT-P), face-to-face and pure online self-help, in reducing perfectionism and associated psychological symptoms. Participants were randomly allocated to face-to-face CBT-P (n = 18), pure online self-help CBT-P (n = 16), or a waitlist control period (n = 18). There was no significant change for the waitlist group on any of the outcome measures at the end of treatment. Both the face-to-face and pure online self-help groups reported significant reductions at the end of treatment for the perfectionism variables which were maintained at the 6-month follow-up. The face-to-face group also reported significant reductions over this time in depression, anxiety, and stress, and a significant pre-post increase in self-esteem, all of which were maintained at the 6-month follow-up. In contrast, the pure online self-help group showed no significant changes on these outcomes. The face-to-face group was statistically superior to the pure online self-help group at follow-up on the perfectionism measures, concern over mistakes and personal standards. The results show promising evidence for CBT for perfectionism, especially when offered face to face, where sustained benefit across a broad range of outcomes can be expected. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasqualini, Donatella
This manuscript briefly describes a statistical approach to generating synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling of hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls into the second category, adopting a purely stochastic approach.
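In the purely stochastic spirit described, a track generator can be as simple as a correlated random walk whose step statistics are estimated from historical records. The sketch below uses invented displacement statistics as placeholders; SynHurG's actual estimation and sampling scheme is more elaborate.

```python
# Toy stochastic track generator: genesis point plus correlated random steps.
import numpy as np

rng = np.random.default_rng(5)

# Placeholder 6-hourly displacement statistics (deg lon, deg lat), which a
# real model would estimate from historical best-track records.
mean_step = np.array([-0.45, 0.20])
cov_step = np.array([[0.08, 0.01],
                     [0.01, 0.05]])

def synthetic_track(genesis, n_steps=40):
    steps = rng.multivariate_normal(mean_step, cov_step, size=n_steps)
    return genesis + np.cumsum(steps, axis=0)

tracks = [synthetic_track(np.array([-45.0, 12.0])) for _ in range(100)]
print(np.round(tracks[0][:3], 2))   # first positions of one synthetic track
```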
NASA Astrophysics Data System (ADS)
Xu, Guan; Johnson, Laura A.; Hu, Jack; Dillman, Jonathan R.; Higgins, Peter D. R.; Wang, Xueding
2015-03-01
Crohn's disease (CD) is an autoimmune disease affecting 700,000 people in the United States. This condition may cause obstructing intestinal narrowings (strictures) due to inflammation, fibrosis (deposition of collagen), or a combination of both. Utilizing the strong optical absorption of hemoglobin at 532 nm and of collagen at 1370 nm, this study investigated the feasibility of non-invasively characterizing intestinal strictures using photoacoustic imaging (PAI). Three normal control, ten purely inflammatory, and nine inflammatory plus fibrotic rat bowel wall samples were imaged. Statistical analysis of the PA measurements demonstrated the capability of discriminating purely inflammatory from mixed inflammatory and fibrotic strictures.
Fock expansion of multimode pure Gaussian states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it
2015-12-15
The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. Fock expansion is performed in terms of easily evaluable two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.
Kim, Dong-Hyeon; Kim, Hyunsook; Chon, Jung-Whan; Moon, Jin-San; Song, Kwang-Young; Seo, Kun-Ho
2013-07-15
Blood-yolk-polymyxin B-trimethoprim agar (BYPTA) was developed by the addition of egg yolk, laked horse blood, sodium pyruvate, polymyxin B, and trimethoprim, and compared with mannitol-yolk-polymyxin B agar (MYPA) for the isolation and enumeration of Bacillus cereus (B. cereus) in pure culture and various food samples. In pure culture, there was no statistical difference (p>0.05) between the recoverability and sensitivity of MYPA and BYPTA, whereas BYPTA exhibited higher specificity (p<0.05). To evaluate BYPTA agar with food samples, B. cereus was experimentally spiked into six types of foods: triangle kimbab, sandwich, misugaru, Saengsik, red pepper powder, and soybean paste. No statistical difference was observed in recoverability (p>0.05) between MYPA and BYPTA in all tested foods, whereas BYPTA exhibited higher selectivity than MYPA, especially in foods with high background microflora, such as Saengsik, red pepper powder, and soybean paste. The newly developed selective medium BYPTA could be a useful enumeration tool to assess the level of B. cereus in foods, particularly those with high background microflora. Copyright © 2013 Elsevier B.V. All rights reserved.
CARMELLO, Juliana Cabrini; FAIS, Laiza Maria Grassi; RIBEIRO, Lígia Nunes de Moraes; CLARO NETO, Salvador; GUAGLIANONI, Dalton Geraldo; PINELLI, Lígia Antunes Pereira
2012-01-01
The need to develop new dental luting agents in order to improve the success of treatments has greatly motivated research. Objective: The aim of this study was to evaluate the diametral tensile strength (DTS) and film thickness (FT) of an experimental dental luting agent derived from castor oil (COP) with or without the addition of different quantities of filler (calcium carbonate - CaCO3). Material and Methods: Eighty specimens were manufactured (DTS N=40; FT N=40) and divided into 4 groups: pure COP; COP 10%; COP 50%; and zinc phosphate (control). The cements were mixed according to the manufacturers' recommendations and submitted to the tests. The DTS test was performed in the MTS 810 testing machine (10 kN, 0.5 mm/min). For the FT test, the cements were sandwiched between two glass plates (2 cm²) and a load of 15 kg was applied vertically on the top of the specimen for 10 min. The data were analyzed by means of one-way ANOVA and Tukey's test (α=0.05). Results: The values of DTS (MPa) were: pure COP, 10.94±1.30; COP 10%, 30.06±0.64; COP 50%, 29.87±0.27; zinc phosphate, 4.88±0.96. The values of FT (µm) were: pure COP, 31.09±3.16; COP 10%, 17.05±4.83; COP 50%, 13.03±4.83; zinc phosphate, 20.00±0.12. One-way ANOVA showed statistically significant differences among the groups (DTS: p=1.01E-40; FT: p=2.4E-10). Conclusion: The experimental dental luting agent with 50% filler showed the best diametral tensile strength and film thickness. PMID:22437672
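As a sketch of the statistical step, a one-way ANOVA followed by Tukey's test at α = 0.05 can be reproduced as below; the group samples are synthetic draws matching the reported DTS means and standard deviations (n = 10 per group is an assumption), not the study's raw measurements:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# synthetic draws from the reported means/SDs (MPa); n=10 is assumed
groups = {
    "PureCOP": rng.normal(10.94, 1.30, 10),
    "COP10": rng.normal(30.06, 0.64, 10),
    "COP50": rng.normal(29.87, 0.27, 10),
    "ZincPhos": rng.normal(4.88, 0.96, 10),
}

f_stat, p_val = stats.f_oneway(*groups.values())   # one-way ANOVA
print(f"F = {f_stat:.1f}, p = {p_val:.2e}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise comparisons
```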
An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1
NASA Technical Reports Server (NTRS)
Shivarama, Ravishankar; Fahrenthold, Eric P.
2004-01-01
A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.
Homogenising time series: Beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2010-09-01
For obtaining reliable information about climate change and climate variability the use of high quality data series is essential, and one basic tool of quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics closely approximate those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: (a) Can statistically detected change-points be accepted only with the confirmation of metadata information? (b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? (c) Is it good to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results - those from the COST benchmark, and from other experiments too - show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities look like part of the climatic variability, so the pure application of the classic theory that change-points of observed time series can be found and corrected one by one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in raw time series. The developers and users of homogenisation methods have to bear in mind that the ultimate purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise well the climate change and climate variability.
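For readers unfamiliar with the detection step such methods build on, a minimal single-break detector in the spirit of the standard normal homogeneity test (SNHT) is sketched below. Real homogenisation applies such statistics to candidate-minus-reference series and handles multiple breaks; this toy version does neither:

```python
import numpy as np

def snht(series):
    """SNHT statistic maximised over candidate break positions.

    Returns (most likely break index, test statistic); large values
    of the statistic indicate a probable inhomogeneity."""
    z = (series - series.mean()) / series.std(ddof=1)
    n = len(z)
    t = np.array([
        k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
        for k in range(1, n)
    ])
    return int(t.argmax()) + 1, float(t.max())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 120)
x[70:] += 1.0                 # inject a one-sigma shift at index 70
print(snht(x))                # break located near 70, large statistic
```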
Lievens, Christopher W; Connor, Charles G; Murphy, Heather
2003-10-01
The current study evaluates the response of the ocular surface to extended contact lens wear by comparing a new silicone hydrogel lens to an ACUVUE 2 lens. Twenty subjects with an average age of 28 years were randomly assigned to a fitting with ACUVUE 2 or PureVision lenses. Ocular surface assessment by impression cytology was performed at baseline and for the 6 months after initiation of lens wear. Although goblet cell density significantly increased with wear time, no statistically significant difference was observed between the contact lens groups. The average baseline goblet cell percentages were as follows: ACUVUE 2 group, 1.44; PureVision group, 1.11. The 6-month averages were as follows: ACUVUE 2 group, 3.16; PureVision group, 2.22. It appears that silicone hydrogel lenses may be slightly less irritating to the ocular surface than lenses not containing silicone. This could be a promising indicator for successful 30-day continuous wear.
Gallium Nitride Direct Energy Conversion Betavoltaic Modeling and Optimization
2017-03-01
require high energy density battery systems. Radioisotopes are the most energy dense materials that can be converted into electrical energy. Pure...beta radioisotopes can be used towards making a long-lasting battery. However, the process to convert the energy provided by a pure beta radioisotope ...betavoltaic. Each energy conversion method has different challenges to overcome to improve the system efficiency. These energy conversion methods that are
Santoni, Brandon; Cabezas, Andres F; Cook, Daniel J; Yeager, Matthew S; Billys, James B; Whiting, Benjamin; Cheng, Boyle C
2015-01-01
Pure-moment loading is the test method of choice for spinal implant evaluation. However, the apparatuses and boundary conditions employed by laboratories in performing spine flexibility testing vary. The purpose of this study was to quantify the differences, if they exist, in intervertebral range of motion (ROM) resulting from different pure-moment loading apparatuses used in two laboratories. Twenty-four (laboratory A) and forty-two (laboratory B) intact L1-S1 specimens were loaded using pure moments (±7.5 Nm) in flexion-extension (FE), lateral bending (LB) and axial torsion (AT). At laboratory A, pure moments were applied using a system of cables, pulleys and suspended weights in 1.5 Nm increments. At laboratory B, specimens were loaded in a pneumatic biaxial test frame mounted with counteracting stepper-motor-driven biaxial gimbals. ROM was obtained in both labs using identical optoelectronic systems and compared. In FE, total L1-L5 ROM was similar, on average, between the two laboratories (lab A: 37.4° ± 9.1°; lab B: 35.0° ± 8.9°, p=0.289). Larger apparent differences, on average, were noted between labs in AT (lab A: 19.4° ± 7.3°; lab B: 15.7° ± 7.1°, p=0.074), and this finding was significant for combined right and left LB (lab A: 45.5° ± 11.4°; lab B: 35.3° ± 8.5°, p < 0.001). To our knowledge, this is the first study comparing ROM of multi-segment lumbar spines between laboratories utilizing different apparatuses. The results of this study show that intervertebral ROM in multi-segment lumbar spine constructs are markedly similar in FE loading. Differences in boundary conditions are likely the source of small and sometimes statistically significant differences between the two techniques in LB and AT ROM. The relative merits of each testing strategy with regard to the physiologic conditions that are to be simulated should be considered in the design of a study including LB and AT modes of loading. An understanding of these differences also serves as important information when comparing study results across different laboratories.
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
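A minimal sketch of the integration idea, feeding the PBM's simulated soil moisture to the LSTM as an extra input channel; the layer sizes, feature layout and this particular coupling are assumptions for illustration, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class SoilMoistureLSTM(nn.Module):
    """Deep-in-time LSTM mapping forcing sequences to soil moisture."""
    def __init__(self, n_forcings=5, hidden=64):
        super().__init__()
        # +1 input channel carries the process-based model's solution
        self.lstm = nn.LSTM(n_forcings + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, forcings, pbm_sm):
        x = torch.cat([forcings, pbm_sm], dim=-1)  # (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out)                      # soil moisture per step

model = SoilMoistureLSTM()
forcings = torch.randn(8, 30, 5)  # 8 sites, 30-day windows, 5 forcings
pbm_sm = torch.randn(8, 30, 1)    # PBM-simulated soil moisture channel
pred = model(forcings, pbm_sm)    # (8, 30, 1); train against SMAP targets
```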
Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach
NASA Astrophysics Data System (ADS)
Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo
2002-12-01
We focus attention on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero-crossing rate and Bayesian classification. It is very simple from a computational point of view and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains speech superimposed on music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically a multi-layer perceptron). In this case we obtain better performance, at the expense of a limited growth in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even if low-cost embedded systems are used.
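The zero-crossing-rate feature at the heart of the first method is straightforward to compute; in the sketch below (frame length and hop size are arbitrary choices) a steady tone yields an almost constant per-frame ZCR while a noise-like signal scatters, which is the kind of statistic a Bayesian classifier can threshold on:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign changes."""
    signs = np.sign(frame)
    signs[signs == 0] = 1          # count exact zeros as positive
    return np.mean(signs[:-1] != signs[1:])

def frame_zcr(signal, frame_len=1024, hop=512):
    return np.array([
        zero_crossing_rate(signal[i:i + frame_len])
        for i in range(0, len(signal) - frame_len, hop)
    ])

t = np.linspace(0.0, 1.0, 16000)
tone = np.sin(2 * np.pi * 440 * t)                    # "music-like"
noise = np.random.default_rng(1).normal(size=16000)   # "speech-like"
print(frame_zcr(tone).std(), frame_zcr(noise).std())  # tone varies far less
```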
Modeling the pharmacokinetics of extended release pharmaceutical systems
NASA Astrophysics Data System (ADS)
di Muria, Michela; Lamberti, Gaetano; Titomanlio, Giuseppe
2009-03-01
Pharmacokinetic (PK) models predict the hematic concentration of drugs after administration. In compartment modeling, the body is described by a set of interconnected “vessels” or “compartments”; the modeling consists of transient mass balances. Orally administered drugs are usually considered immediately available: this cannot describe the administration of extended-release systems. In this work we added to the traditional compartment models the ability to account for a delay in administration, relating this delay to in vitro data. Firstly, the method was validated by applying the model to the dosage of nicotine by chewing gum; the model was then tuned with in vitro/in vivo data for drugs with medium-rate release kinetics (divalproex sodium and diltiazem), and finally it was applied to describe the in vivo evolutions resulting from the administration of fast- and slow-release systems. The model proves predictive, on a par with a Level A in vitro/in vivo correlation, but being physically based, it is preferable to a purely statistical method.
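A minimal one-compartment sketch of the delayed-administration idea, with the gut input driven by a Weibull-type in vitro release profile; the release parameters and PK constants are hypothetical, and the paper's actual compartment structure may differ:

```python
import numpy as np
from scipy.integrate import solve_ivp

D, ka, ke, V = 100.0, 1.2, 0.25, 40.0   # dose (mg), 1/h, 1/h, volume (L)

def released_fraction(t, a=2.0, b=1.1):
    """Weibull in vitro release profile F(t) = 1 - exp(-(t/a)^b)."""
    return 1.0 - np.exp(-((t / a) ** b))

def rhs(t, y):
    gut, central = y
    dt = 1e-3                            # numerical derivative of F(t)
    release_rate = D * (released_fraction(t + dt) - released_fraction(t)) / dt
    return [release_rate - ka * gut, ka * gut - ke * central]

sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0], max_step=0.1)
conc = sol.y[1] / V                      # hematic concentration (mg/L)
print(conc.max())
```

With an instantaneous release profile the same structure collapses back to the classical immediate-availability compartment model.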
Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; McLaughlin, Gregory; Lednev, Igor K
2013-09-01
Body fluid traces recovered at crime scenes are among the most common and important types of forensic evidence. However, the ability to characterize a biological stain at a crime scene nondestructively has not yet been demonstrated. Here, we expand the Raman spectroscopic approach for the identification of dry traces of pure body fluids to address the problem of heterogeneous contamination, which can impair the performance of conventional methods. The concept of multidimensional Raman signatures was utilized for the identification of blood in dry traces contaminated with sand, dust, and soil. Multiple Raman spectra were acquired from the samples via automatic scanning, and the contribution of blood was evaluated through the fitting quality using spectroscopic signature components. The spatial mapping technique allowed for detection of "hot spots" dominated by blood contribution. The proposed method has great potential for blood identification in highly contaminated samples. © 2013 American Academy of Forensic Sciences.
A new nanospray drying method for the preparation of nicergoline pure nanoparticles
NASA Astrophysics Data System (ADS)
Martena, Valentina; Censi, Roberta; Hoti, Ela; Malaj, Ledjan; Di Martino, Piera
2012-06-01
Three different batches of pure nanoparticles (NPs) of nicergoline (NIC) were prepared by spray drying a water:ethanol solution with the new Nano Spray Dryer Büchi B-90. Spherical pure NPs were obtained, and several analytical techniques such as differential scanning calorimetry and X-ray powder diffractometry made it possible to assess their amorphous character. A comparison of the solubility, intrinsic dissolution, and drug release of the original particles and the pure amorphous NPs was carried out, revealing an interesting improvement in the biopharmaceutical properties of the amorphous NPs, due to both their amorphous state and their nanosize dimensions. Since a previous work demonstrated the high thermodynamic stability of amorphous NIC, this study is addressed toward the formulation of NIC as pure amorphous NPs.
Testing effects in mixed- versus pure-list designs.
Rowland, Christopher A; Littrell-Baez, Megan K; Sensenig, Amanda E; DeLosh, Edward L
2014-08-01
In the present study, we investigated the role of list composition in the testing effect. Across three experiments, participants learned items through study and initial testing or study and restudy. List composition was manipulated, such that tested and restudied items appeared either intermixed in the same lists (mixed lists) or in separate lists (pure lists). In Experiment 1, half of the participants received mixed lists and half received pure lists. In Experiment 2, all participants were given both mixed and pure lists. Experiment 3 followed Erlebacher's (Psychological Bulletin, 84, 212-219, 1977) method, such that mixed lists, pure tested lists, and pure restudied lists were given to independent groups. Across all three experiments, the final recall results revealed significant testing effects for both mixed and pure lists, with no reliable difference in the magnitude of the testing advantage across list designs. This finding suggests that the testing effect is not subject to a key boundary condition-list design-that impacts other memory phenomena, including the generation effect.
A comparison of radiosity with current methods of sound level prediction in commercial spaces
NASA Astrophysics Data System (ADS)
Beamer, C. Walter, IV; Muehleisen, Ralph T.
2002-11-01
The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
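In acoustic radiosity, each surface patch's diffusely re-radiated energy satisfies an energy balance B = E + (1 − α)FB, which at steady state is a linear system solvable directly. A toy three-patch example (geometry, form factors F and absorption coefficients α purely illustrative):

```python
import numpy as np

F = np.array([[0.0, 0.6, 0.4],      # patch-to-patch form factors
              [0.5, 0.0, 0.5],      # (each row sums to at most 1)
              [0.4, 0.6, 0.0]])
alpha = np.array([0.1, 0.3, 0.2])   # absorption coefficient per patch
E = np.array([1.0, 0.0, 0.0])       # direct source energy onto patch 0

# solve (I - diag(1 - alpha) F) B = E for the patch radiosities B
A = np.eye(3) - (1.0 - alpha)[:, None] * F
B = np.linalg.solve(A, E)
print(B)                            # diffuse energy leaving each patch
```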
Cantwell, Caoimhe A; Byrne, Laurann A; Connolly, Cathal D; Hynes, Michael J; McArdle, Patrick; Murphy, Richard A
2017-08-01
The aim of the present work was to establish a reliable analytical method to determine the degree of complexation in commercial metal proteinates used as feed additives in the solid state. Two complementary techniques were developed. Firstly, a quantitative attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopic method investigated modifications in vibrational absorption bands of the ligand on complex formation. Secondly, a powder X-ray diffraction (PXRD) method to quantify the amount of crystalline material in the proteinate product was developed. These methods were developed in tandem and cross-validated with each other. Multivariate analysis (MVA) was used to develop validated calibration and prediction models. The FTIR and PXRD calibrations showed excellent linearity (R² > 0.99). The diagnostic model parameters showed that the FTIR and PXRD methods were robust, with a root mean square error of calibration (RMSEC) ≤ 3.39% and a root mean square error of prediction (RMSEP) ≤ 7.17%, respectively. Comparative statistics show excellent agreement between the MVA packages assessed and between the FTIR and PXRD methods. The methods can be used to determine the degree of complexation in complexes of both protein hydrolysates and pure amino acids.
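The abstract does not name the regression used in the MVA packages; partial least squares is a common choice for such spectral calibrations, and the sketch below shows how RMSEC/RMSEP-style figures are produced, on random stand-in spectra rather than real ATR-FTIR data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 600))    # 40 standards x 600 wavenumbers (synthetic)
y = rng.uniform(0, 100, size=40)  # % complexation of each standard (synthetic)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
print(f"RMSEC = {rmsec:.2f}%, RMSEP = {rmsep:.2f}%")
```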
NASA Astrophysics Data System (ADS)
Ashour, Safwan; Bahbouh, Mahmoud; Khateeb, Mouhammed
2011-03-01
New, accurate and reliable spectrophotometric methods for the assay of three statin drugs, atorvastatin calcium (AVS), fluvastatin sodium (FVS) and pravastatin sodium (PVS), in pure form and pharmaceutical formulations have been described. All methods involve the oxidative coupling reaction of AVS, FVS and PVS with 3-methyl-2-benzothiazolinone hydrazone hydrochloride monohydrate (MBTH) in the presence of Ce(IV) in an acidic medium to form colored products with λmax at 566, 615 and 664 nm, respectively. Beer's law was obeyed in the ranges of 2.0-20.0, 4.9-35.4 and 7.0-30.0 μg mL⁻¹ for AVS-MBTH, FVS-MBTH and PVS-MBTH, respectively. Molar absorptivities for the above three methods were found to be 3.24 × 10⁴, 1.05 × 10⁴ and 0.68 × 10⁴ L mol⁻¹ cm⁻¹, respectively. Statistical treatment of the experimental results indicates that the methods are precise and accurate. The proposed methods have been applied to the determination of the components in commercial forms with no interference from the excipients. A comparative study between the suggested procedures and the official methods for these compounds in the commercial forms showed no significant difference between the two methods.
NASA Astrophysics Data System (ADS)
Fournier, Paul-Guy; Nourtier, Alain; Monkade, Mohammed; Berrada, Khalid; Boughaleb, Hichame; Outzourhit, Abdelkader; Pichon, Rémy; Haut, Christian; Govers, Thomas
2006-03-01
When the euro was introduced, the fact that some coins contain nickel, which is known to be an allergen, gave rise to controversy. More generally, this raises the question of metal transfer from coins to skin. Morocco has used for decades one-dirham coins made of pure or alloyed nickel. Studying their wear, the labile metal on their surface and the transfer to fingers in handling may therefore be especially instructive. Weighing statistics for a sample of 401 coins confirm that cupronickel coins wear out more quickly than pure nickel coins and reveal that the dirham suffers a much stronger wear than other currencies for which wear statistics are available. SEM studies supplemented by ICP quantitative analyses show that the labile metal is mainly made up of chips, even after many handlings. These chips are often cupronickel, even on pure nickel coins, which shows that they are produced by the friction of coins against one another. Secondly, the surface of coins presents sweat residue with an important proportion of copper and a little nickel, which confirms that sweat dissolves surface copper. Depending on the alloy and date, coins have between 20 and 140 μg of labile copper and nickel, with a content of one quarter of nickel on cupronickel coins and about one half on pure nickel coins. The most worn cupronickel coins are the coins that present the largest amount of labile metal, and even labile nickel. In our experiments, the metal transfer to fingers when a cupronickel coin is handled for the first time represents between 4 and 9% of the labile metal and 0.05% of the annual wear. A simple and reliable test of nickel contamination consists in measuring the labile nickel. To cite this article: P.-G. Fournier et al., C. R. Physique 7 (2006).
NASA Astrophysics Data System (ADS)
Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2017-03-01
Reducing the overdiagnosis and overtreatment associated with ductal carcinoma in situ (DCIS) requires accurate prediction of the invasive potential at cancer screening. In this work, we investigated the utility of pre-operative histologic and mammographic features to predict upstaging of DCIS. The goal was to provide an intentionally conservative baseline performance using readily available data from radiologists and pathologists and only linear models. We conducted a retrospective analysis on 99 patients with DCIS. Of those, 25 were upstaged to invasive cancer at the time of definitive surgery. Pre-operative factors, including both the histologic features extracted from stereotactic core needle biopsy (SCNB) reports and the mammographic features annotated by an expert breast radiologist, were investigated with statistical analysis. Furthermore, we built classification models based on those features in an attempt to predict the presence of an occult invasive component in DCIS, with generalization performance assessed by receiver operating characteristic (ROC) curve analysis. Histologic features including nuclear grade and DCIS subtype did not show statistically significant differences between cases with pure DCIS and those with DCIS plus invasive disease. However, three mammographic features, i.e., the major axis length of the DCIS lesion, the BI-RADS level of suspicion, and the radiologist's assessment, did achieve statistical significance. Using those three statistically significant features as input, a linear discriminant model was able to distinguish patients with DCIS plus invasive disease from those with pure DCIS, with AUC-ROC equal to 0.62. Overall, mammograms used for breast screening contain useful information that can be perceived by radiologists and help predict occult invasive components in DCIS.
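A sketch of the final modelling step with scikit-learn, on synthetic stand-ins for the three significant mammographic features; the reported AUC-ROC of 0.62 came from the actual 99-patient data with a proper generalization assessment, so this toy run will not reproduce it:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(20.0, 8.0, 99),   # lesion major axis length, mm (synthetic)
    rng.integers(3, 6, 99),      # BI-RADS level of suspicion, 3-5
    rng.integers(1, 4, 99),      # radiologist's assessment score
])
y = np.zeros(99, dtype=int)
y[rng.choice(99, 25, replace=False)] = 1    # 25 upstaged cases, as reported

lda = LinearDiscriminantAnalysis().fit(X, y)
print("AUC =", round(roc_auc_score(y, lda.decision_function(X)), 2))
```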
Pure detection of the acoustic spin pumping in Pt/YIG/PZT structures
NASA Astrophysics Data System (ADS)
Uchida, Ken-ichi; Qiu, Zhiyong; Kikkawa, Takashi; Saitoh, Eiji
2014-11-01
The acoustic spin pumping (ASP) stands for the generation of a spin voltage from sound waves in a ferromagnet/paramagnet junction. In this letter, we propose and demonstrate a method for pure detection of the ASP, which enables the separation of sound-wave-driven spin currents from the spin Seebeck effect due to the heating of a sample caused by a sound-wave injection. Our demonstration using a Pt/YIG/PZT sample shows that the ASP signal in this structure measured by a conventional method is considerably offset by the heating signal and that the pure ASP signal is one order of magnitude greater than that reported in the previous study.
Valuing vaccines using value of statistical life measures.
Laxminarayan, Ramanan; Jamison, Dean T; Krupnick, Alan J; Norheim, Ole F
2014-09-03
Vaccines are effective tools to improve human health, but resources to pursue all vaccine-related investments are lacking. Benefit-cost and cost-effectiveness analysis are the two major methodological approaches used to assess the impact, efficiency, and distributional consequences of disease interventions, including those related to vaccination. Childhood vaccinations can have important non-health consequences for productivity and economic well-being through multiple channels, including school attendance, physical growth, and cognitive ability. Benefit-cost analysis would capture such non-health benefits; cost-effectiveness analysis does not, and standard cost-effectiveness analysis may therefore grossly underestimate the benefits of vaccines. A specific willingness-to-pay measure is based on the notion of the value of a statistical life (VSL), derived from trade-offs people are willing to make between fatality risk and wealth. Such methods have been used widely in the environmental and health literature to capture the broader economic benefits of improving health, but reservations remain about their acceptability, mainly because the methods may reflect ability to pay and hence be discriminatory against the poor. However, willingness-to-pay methods can be made sensitive to income distribution by using appropriate income-sensitive distributional weights. Here, we describe the pros and cons of these methods and how they compare against standard cost-effectiveness analysis using pure health metrics, such as quality-adjusted life years (QALYs) and disability-adjusted life years (DALYs), in the context of vaccine priorities. We conclude that, if appropriately used, willingness-to-pay methods will not discriminate against the poor, and they can capture important non-health benefits such as financial risk protection, productivity gains, and economic well-being. Copyright © 2014 Elsevier Ltd. All rights reserved.
Integrating Crop Growth Models with Whole Genome Prediction through Approximate Bayesian Computation
Technow, Frank; Messina, Carlos D.; Totir, L. Radu; Cooper, Mark
2015-01-01
Genomic selection, enabled by whole genome prediction (WGP) methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E), continues to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs) attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC), a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics. PMID:26121133
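A minimal rejection-ABC sketch of the core idea: candidate marker effects drawn from a prior are kept when a crop growth model applied to their genetic values reproduces the observed phenotypes. The `cgm` function and all dimensions are toy stand-ins, and practical implementations use far more sophisticated samplers and discrepancy measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n_markers, n_obs = 5, 20
M = rng.choice([-1.0, 1.0], size=(n_obs, n_markers))   # marker genotypes

def cgm(genetic_value, env=1.0):
    """Toy stand-in for a crop growth model: saturating yield response."""
    return env * genetic_value / (1.0 + np.abs(genetic_value))

true_beta = rng.normal(0.0, 0.3, n_markers)
y_obs = cgm(M @ true_beta) + rng.normal(0.0, 0.05, n_obs)

draws = rng.normal(0.0, 0.3, size=(20000, n_markers))  # prior samples
disc = np.array([np.sqrt(np.mean((cgm(M @ b) - y_obs) ** 2)) for b in draws])
posterior = draws[disc <= np.quantile(disc, 0.01)]     # keep closest 1%
print(posterior.mean(axis=0).round(2))   # compare with true_beta.round(2)
```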
Non-aqueous solution preparation of doped and undoped LixMnyOz
Boyle, Timothy J.; Voigt, James A.
1997-01-01
A method for the generation of phase-pure doped and undoped LixMnyOz precursors. The method of this invention uses organic solutions instead of aqueous solutions or nonsolution ball milling of dry powders to produce phase-pure precursors. These precursors can be used as cathodes for lithium-polymer electrolyte batteries. Dopants may be homogeneously incorporated to alter the characteristics of the powder.
Bitschnau, Achim; Alt, Volker; Böhner, Felicitas; Heerich, Katharina Elisabeth; Margesin, Erika; Hartmann, Sonja; Sewing, Andreas; Meyer, Christof; Wenisch, Sabine; Schnettler, Reinhard
2009-01-01
This is the first work to report on additional Arginine-Glycine-Aspartate (RGD) coating on precoated hydroxyapatite (HA) surfaces with regard to new bone formation, implant-bone contact, and biocompatibility compared to pure HA coating and uncoated stainless K-wires. There were 39 rabbits in total, with 6 animals each in the RGD-HA and HA groups for the 4-week time period and 9 animals for each of the 3 implant groups for the 12-week observation. A 2.0 K-wire, either with RGD-HA coating, with pure HA coating, or uncoated, was placed into the intramedullary canal of the tibia. After 4 and 12 weeks, the tibiae were harvested and three different areas of the tibia were assessed by quantitative and qualitative histology for new bone formation, direct implant-bone contact, and formation of multinucleated giant cells. Both RGD-HA and pure HA coatings showed statistically higher new bone formation and implant-bone contact after 12 weeks than the uncoated K-wire. There were no significant differences between the RGD-HA and the pure HA coating in new bone formation and direct implant-bone contact after 4 and 12 weeks. The number of multinucleated giant cells did not differ significantly between the RGD-HA and HA groups at either time point. Overall, no significant effects of an additional RGD coating on HA surfaces were detected in this model after 12 weeks. (c) 2008 Wiley Periodicals, Inc.
Tan, Q Y; Xu, M L; Wu, J Y; Yin, H F; Zhang, J Q
2012-04-01
Novel pyridostigmine bromide poly(lactic acid) nanoparticles (PBPNPs) were prepared to obtain sustained release characteristics of PB. A central composite design approach was employed for process optimization. The in vitro release studies were carried out by the dialysis method and conducted using four different dissolution media. The similarity factor method was used for dissolution profile comparison. Multiple linear regression analysis for process optimization revealed that the optimal PBPNPs were obtained when the values of the amount of PB (X1, mg), the PLA concentration (X2, % w:v), and the PVA concentration (X3, % w:v) were 49.20 mg, 3.31%, and 3.41%, respectively. The average particle size and zeta potential of PBPNPs with the optimized formulation were 722.9 +/- 4.3 nm and -25.12 +/- 1.2 mV, respectively. PBPNPs provided an initial burst of drug release followed by a very slow release over an extended period of time (72 h). Compared with free PB, PBPNPs had a significantly lower release rate of PB in vitro. The in vitro release profile of the PBPNPs could be described by Weibull models, regardless of the type of dissolution medium. The dissolution profiles of PBPNPs in the different dissolution media were statistically similar to one another, while the difference between the curves of PBPNPs and pure PB was statistically significant.
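The similarity factor commonly used for such dissolution-profile comparisons is the f2 statistic (profiles are declared similar when f2 ≥ 50); a self-contained computation, with hypothetical release profiles since the abstract reports no raw data, looks like this:

```python
import numpy as np

def f2_similarity(ref, test):
    """FDA similarity factor: f2 = 50 log10(100 / sqrt(1 + mean sq diff))."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# hypothetical cumulative % released at matched time points in two media
medium_a = [12, 30, 48, 62, 75, 85]
medium_b = [14, 33, 50, 63, 74, 86]
print(f2_similarity(medium_a, medium_b))   # > 50 implies similar profiles
```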
An algorithm for separation of mixed sparse and Gaussian sources.
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated-estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition.
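A stripped-down illustration of the reproducibility idea: fit ICA on a subsample, re-estimate sources on the full data, and check how stably each component reappears. The published MIPReSt algorithm does considerably more (ranking over many subsamplings and estimating the nongaussian subspace dimension); this shows only the core intuition:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
S = np.column_stack([
    rng.laplace(size=n),    # sparse (nongaussian) source
    rng.normal(size=n),     # Gaussian source 1
    rng.normal(size=n),     # Gaussian source 2
])
X = S @ rng.normal(size=(3, 3))          # observed mixtures

def sources_from(rows):
    """Fit ICA on a subset of rows, then unmix the full data set."""
    ica = FastICA(n_components=3, random_state=0, max_iter=1000)
    ica.fit(X[rows])
    return ica.transform(X)

ref = sources_from(np.arange(n))                          # full-data fit
sub = sources_from(rng.choice(n, n // 2, replace=False))  # half subsample
match = np.abs(np.corrcoef(ref.T, sub.T)[:3, 3:]).max(axis=1)
print(match.round(2))   # the sparse source reproduces (~1); Gaussians drift
```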
El-Zaher, Asmaa A.; Mahrouse, Marianne A.
2013-01-01
A novel, selective, and sensitive reversed phase high-performance liquid chromatography (HPLC) method coupled with fluorescence detection has been developed for the determination of tobramycin (TOB) in pure form, in ophthalmic solution and in spiked human plasma. Since TOB lacks UV absorbing chromophores and native fluorescence, pre-column derivatization of TOB was carried out using fluorescamine reagent (0.01%, 1.5 mL) and borate buffer (pH 8.5, 2 mL). Experimental design was applied for optimization of the derivatization step. The resulting highly fluorescent stable derivative was chromatographed on C18 column and eluted using methanol:water (60:40, v/v) at a flow rate of 1 mL min−1. A fluorescence detector (λex 390 and λem 480 nm) was used. The method was linear over the concentration range 20–200 ng mL−1. The structure of the fluorescent product was proposed, the method was then validated and applied for the determination of TOB in human plasma. The results were statistically compared with the reference method, revealing no significant difference. PMID:23700362
Topology in two dimensions. II - The Abell and ACO cluster catalogues
NASA Astrophysics Data System (ADS)
Plionis, Manolis; Valdarnini, Riccardo; Coles, Peter
1992-09-01
We apply a method for quantifying the topology of projected galaxy clustering to the Abell and ACO catalogues of rich clusters. We use numerical simulations to quantify the statistical bias involved in using high peaks to define the large-scale structure, and we use the results obtained to correct our observational determinations for this known selection effect and also for possible errors introduced by boundary effects. We find that the Abell cluster sample is consistent with clusters being identified with high peaks of a Gaussian random field, but that the ACO shows a slight meatball shift away from the Gaussian behavior over and above that expected purely from the high-peak selection. The most conservative explanation of this effect is that it is caused by some artefact of the procedure used to select the clusters in the two samples.
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of the diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: squared correlation coefficient = 0.9723, standard deviation error = 0.003, and average absolute relative deviation = 0.3% for the predicted properties relative to existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.
Khanna, Swati; Goyal, Arun; Moholkar, Vijayanand S
2013-01-01
This article addresses the issue of effect of fermentation parameters for conversion of glycerol (in both pure and crude form) into three value-added products, namely, ethanol, butanol, and 1,3-propanediol (1,3-PDO), by immobilized Clostridium pasteurianum and thereby addresses the statistical optimization of this process. The analysis of effect of different process parameters such as agitation rate, fermentation temperature, medium pH, and initial glycerol concentration indicated that medium pH was the most critical factor for total alcohols production in case of pure glycerol as fermentation substrate. On the other hand, initial glycerol concentration was the most significant factor for fermentation with crude glycerol. An interesting observation was that the optimized set of fermentation parameters was found to be independent of the type of glycerol (either pure or crude) used. At optimum conditions of agitation rate (200 rpm), initial glycerol concentration (25 g/L), fermentation temperature (30°C), and medium pH (7.0), the total alcohols production was almost equal in anaerobic shake flasks and 2-L bioreactor. This essentially means that at optimum process parameters, the scale of operation does not affect the output of the process. The immobilized cells could be reused for multiple cycles for both pure and crude glycerol fermentation.
Incorporation of High Energy Materials Into High Density Polymers
1987-09-21
and the pure graft copolymer was isolated by selective solvent extraction. ... Isolation of pure graft copolymers. The isolation of pure EPDM-g-PS...characterized, such as EPDM-g-PST and EPDM-g-PMST. Two methods of synthesis were successful: a macromonomer (a polymer containing a polymerizable head group) was...copolymerized with ethylene and propylene to lead to the final product, and chlorination of a commercial EPDM allowed the chlorinated sites to serve as
NASA Astrophysics Data System (ADS)
Elghobashy, Mohamed R.; Bebawy, Lories I.; Shokry, Rafeek F.; Abbas, Samah S.
2016-03-01
A sensitive and selective stability-indicating successive ratio subtraction coupled with constant multiplication (SRS-CM) spectrophotometric method was studied and developed for the spectrum resolution of a five-component mixture without prior separation. The components were hydroquinone in combination with tretinoin, the polymer formed from hydroquinone alkali degradation, 1,4 benzoquinone and the preservative methyl paraben. The proposed method was used for their determination in their pure form and in pharmaceutical formulation. The zero order absorption spectra of hydroquinone, tretinoin, 1,4 benzoquinone and methyl paraben were determined at 293, 357.5, 245 and 255.2 nm, respectively. The calibration curves were linear over the concentration ranges of 4.00-46.00, 1.00-7.00, 0.60-5.20, and 1.00-7.00 μg mL⁻¹ for hydroquinone, tretinoin, 1,4 benzoquinone and methyl paraben, respectively. The pharmaceutical formulation was subjected to mild alkali conditions and measured by this method, resulting in the polymerization of hydroquinone and the formation of toxic 1,4 benzoquinone. The proposed method was validated according to ICH guidelines. The results obtained were statistically analyzed and compared with those obtained by applying the reported method.
Equilibration, thermalisation, and the emergence of statistical mechanics in closed quantum systems
NASA Astrophysics Data System (ADS)
Gogolin, Christian; Eisert, Jens
2016-05-01
We review selected advances in the theoretical understanding of complex quantum many-body systems with regard to emergent notions of quantum statistical mechanics. We cover topics such as equilibration and thermalisation in pure state statistical mechanics, the eigenstate thermalisation hypothesis, the equivalence of ensembles, non-equilibration dynamics following global and local quenches as well as ramps. We also address initial state independence, absence of thermalisation, and many-body localisation. We elucidate the role played by key concepts for these phenomena, such as Lieb-Robinson bounds, entanglement growth, typicality arguments, quantum maximum entropy principles and the generalised Gibbs ensembles, and quantum (non-)integrability. We put emphasis on rigorous approaches and present the most important results in a unified language.
Uzoaru, Ikechukwu; Morgan, Bradley R; Liu, Zheng G; Bellafiore, Frank J; Gaudier, Farah S; Lo, Jeanne V; Pakzad, Kourosh
2012-10-01
Flat epithelial atypia (FEA) of the breast has a tendency to calcify and, as such, is becoming increasingly detected by mammography. There is no consensus yet on whether or not to excise these lesions after diagnosis on core needle biopsies (CNB). We reviewed 3,948 cases of breast CNB between June 2004 and June 2009, correlating histomorphologic, radiological, and clinical features. There were 3.7 % (145/3,948) pure FEA and 1.5 % (58/3,948) concomitant FEA and atypical ductal hyperplasia (ADH). In the pure FEA population, 46.2 % (67/145) had microcalcifications on mammography, with 65.5 % (95/145) of patients undergoing subsequent excisional biopsies with the following findings: benign 20 % (19/95), ADH 37.9 % (36/95), ductal carcinoma in situ (DCIS) 1.1 % (1/95), and DCIS and invasive ductal carcinoma (IDC) 2.1 % (2/95). In the concomitant FEA and ADH group, 86.2 % (50/58) of patients had microcalcifications on radiograph, with 74.1 % (43/58) of patients undergoing subsequent excisions with: benign 23.3 % (10/43), DCIS 9.3 % (4/43), DCIS and IDC 4.7 % (2/43), DCIS + lobular carcinoma in situ + invasive lobular carcinoma 2.3 % (1/43), and tubular carcinoma 2.3 % (1/43). The incidence of carcinoma in the FEA + ADH group is 18.6 % (8/43) and 3.2 % (3/95) in the pure FEA group. This difference is statistically significant (p = 0.0016). The relative risk of carcinoma in the ADH + FEA group versus the pure FEA group is 6.4773, with a 95 % CI of 1.8432 to 22.7624. Five-year mean follow-up in the unexcised pure FEA cases did not show any malignancies. These findings suggest that pure FEA has a very low association with carcinoma, and these patients may benefit from close clinical and mammographic follow-up, while the combined pure FEA and ADH cases may be re-excised.
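The relative-risk arithmetic can be checked with the standard log-method confidence interval; using the excision counts above (8/43 vs 3/95), the sketch below gives values close to, though not exactly matching, the quoted RR of 6.4773 and its CI, which were presumably computed with a slightly different procedure:

```python
import numpy as np

def relative_risk(a, n1, c, n2, z=1.96):
    """Relative risk of two proportions with a log-method 95% CI.

    a/n1: events/total in the exposed group; c/n2: reference group."""
    rr = (a / n1) / (c / n2)
    se = np.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lo, hi = np.exp(np.log(rr) + np.array([-z, z]) * se)
    return rr, lo, hi

print(relative_risk(8, 43, 3, 95))   # carcinoma: FEA+ADH vs pure FEA
```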
On prognostic models, artificial intelligence and censored observations.
Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A
2001-03-01
The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistics or artificial intelligence can produce models that display all of these characteristics. The inability of modelling techniques to provide truly useful models has meant that interest in these models has remained largely academic. This in turn has resulted in only a very small percentage of the models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical community, and claims of superiority over traditional modelling methods are often made on the basis of inadequate evaluation. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcomes of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.
Miao, Qiang; Zheng, Yujun
2016-01-01
In this paper, the nature of multi-order resonance and coherent destruction of tunneling (CDT) for a two-level system driven across an avoided crossing is investigated by employing the mean number of emitted photons 〈N〉 and Mandel's Q parameter, based on photon counting statistics. An asymmetric feature of CDT is shown in the spectrum of Mandel's Q parameter. Also, CDT can be employed to suppress spontaneous decay and prolong the waiting time noticeably. The photon emission pattern is monotonic in the strong relaxation regime and homogeneous in the pure dephasing regime, respectively. PMID:27353375
Re-formulation and Validation of Cloud Microphysics Schemes
NASA Astrophysics Data System (ADS)
Wang, J.; Georgakakos, K. P.
2007-12-01
The research focuses on improving quantitative precipitation forecasts by removing significant uncertainties in current cloud microphysics schemes embedded in models such as WRF and MM5 and in cloud-resolving models such as GCE. Reformulation of several production terms in these microphysics schemes was found necessary. When estimating the four graupel production terms involved in the accretion between rain, snow and graupel, current microphysics schemes assume that all raindrops and snow particles fall at their mass-weighted mean terminal velocities, so that analytic solutions can be found for these production terms. Initial analysis and tests showed that these approximate analytic solutions give significant and systematic overestimates of the terms and thus become one of the major error sources behind graupel overproduction and the associated extreme radar reflectivity in simulations. These results are corroborated by several reports. For example, the analytic solution overestimates the graupel production by collisions between raindrops and snow by up to 230%. The treatment of "pure" snow (not rimed) and "pure" graupel (completely rimed) in current microphysics schemes excludes intermediate forms between the two and thus becomes a significant cause of graupel overproduction in hydrometeor simulations. In addition, the generation of graupel of the same density by both the freezing of supercooled water and the riming of snow may cause underestimation of graupel production by freezing. A parameterization scheme for the riming degree of snow is proposed, and a dynamic fallspeed-diameter relationship and a density-diameter relationship of rimed snow are then assigned to graupel based on the diagnosed riming degree. To test whether these new treatments can improve quantitative precipitation forecasts, Hurricane Katrina and a severe winter snowfall event in the Sierra Nevada Range are selected as case studies. A series of control simulations and sensitivity tests was conducted for these two cases. Two statistical methods are used to compare the radar reflectivity simulated by the model with that detected by ground-based and airborne radar at different height levels. It was found that the changes made to the current microphysical schemes improve QPF and microphysics simulation significantly.
Braun, T; Dochtermann, S; Krause, E; Schmidt, M; Schorn, K; Hempel, J M
2011-09-01
The present study analyzes the best combination of frequencies for the calculation of mean hearing loss in pure tone threshold audiometry for correlation with hearing loss for numbers in speech audiometry, since the literature describes different calculation variations for plausibility checking in expert assessment. Three calculation variations, A (250, 500 and 1000 Hz), B (500 and 1000 Hz) and C (500, 1000 and 2000 Hz), were compared. Audiograms of 80 patients with normal hearing, 106 patients with hearing loss and 135 expert-assessment patients were analyzed retrospectively. Differences between mean pure tone audiometry thresholds and hearing loss for numbers were calculated and statistically compared separately for the right and the left ear in the three patient collectives. We found calculation variation A to be the best combination of frequencies, since it yielded the smallest standard deviations while being statistically different from calculation variations B and C. The 1- and 2.58-fold standard deviations (representing 68.3% and 99.0% of all values) were ±4.6 and ±11.8 dB for calculation variation A in patients with hearing loss, respectively. For plausibility checking in expert assessment, the mean threshold of the frequencies 250, 500 and 1000 Hz should be compared to the hearing loss for numbers. As this study shows, the common recommendation in the literature to doubt plausibility when the difference between these values exceeds ±5 dB is too strict.
Rapid Bedside Inactivation of Ebola Virus for Safe Nucleic Acid Tests.
Rosenstierne, Maiken Worsøe; Karlberg, Helen; Bragstad, Karoline; Lindegren, Gunnel; Stoltz, Malin Lundahl; Salata, Cristiano; Kran, Anne-Marte Bakken; Dudman, Susanne Gjeruldsen; Mirazimi, Ali; Fomsgaard, Anders
2016-10-01
Rapid bedside inactivation of Ebola virus would be a solution for the safety of medical and technical staff, risk containment, sample transport, and high-throughput or rapid diagnostic testing during an outbreak. We show that the commercially available Magna Pure lysis/binding buffer used for nucleic acid extraction inactivates Ebola virus. A rapid bedside inactivation method for nucleic acid tests is obtained by simply adding Magna Pure lysis/binding buffer directly into vacuum blood collection EDTA tubes using a thin needle and syringe prior to sampling. The ready-to-use inactivation vacuum tubes are stable for more than 4 months, and Ebola virus RNA is preserved in the Magna Pure lysis/binding buffer for at least 5 weeks independent of the storage temperature. We also show that Ebola virus RNA can be manually extracted from Magna Pure lysis/binding buffer-inactivated samples using the QIAamp viral RNA minikit. We present an easy and convenient method for bedside inactivation using available blood collection vacuum tubes and reagents. We propose to use this simple method for fast, safe, and easy bedside inactivation of Ebola virus for safe transport and routine nucleic acid detection. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Stability of disclination loop in pure twist nematic liquid crystals
NASA Astrophysics Data System (ADS)
Kadivar, Erfan
2018-04-01
In this work, the annihilation dynamics and stability of a disclination loop in a bulk pure twist nematic liquid crystal are investigated. This work is based on the Frank free energy and the nematodynamics equations. The energy dissipation is calculated using two methods: in the first, it is obtained from the Frank free energy; in the second, from the nematodynamics equations. Finally, we derive a critical radius of the disclination loop above which loop creation is energetically forbidden.
Teaching group theory using Rubik's cubes
NASA Astrophysics Data System (ADS)
Cornock, Claire
2015-10-01
Being situated within a course at the applied end of the spectrum of maths degrees, the pure mathematics modules at Sheffield Hallam University have an applied spin. Pure topics are taught through consideration of practical examples such as knots, cryptography and automata. Rubik's cubes are used to teach group theory within a final year pure elective based on physical examples. Abstract concepts, such as subgroups, homomorphisms and equivalence relations are explored with the cubes first. In addition to this, conclusions about the cubes can be made through the consideration of algebraic approaches through a process of discovery. The teaching, learning and assessment methods are explored in this paper, along with the challenges and limitations of the methods. The physical use of Rubik's cubes within the classroom and examination will be presented, along with the use of peer support groups in this process. The students generally respond positively to the teaching methods and the use of the cubes.
Progressive statistics for studies in sports medicine and exercise science.
Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri
2009-01-01
Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
Chern-Simons Term: Theory and Applications.
NASA Astrophysics Data System (ADS)
Gupta, Kumar Sankar
1992-01-01
We investigate the quantization and applications of Chern-Simons theories to several systems of interest. Elementary canonical methods are employed for the quantization of abelian and nonabelian Chern-Simons actions using ideas from gauge theories and quantum gravity. When the spatial slice is a disc, the theory yields quantum states at the edge of the disc carrying a representation of the Kac-Moody algebra. We next include sources in this model, and their quantum states are shown to be those of a conformal family. Vertex operators for both abelian and nonabelian sources are constructed. The regularized abelian Wilson line is proved to be a vertex operator. The spin-statistics theorem is established for Chern-Simons dynamics using purely geometrical techniques. The Chern-Simons action is associated with exotic spin and statistics in 2 + 1 dimensions. We study several systems in which the Chern-Simons action affects the spin and statistics. The first class of systems we study consists of G/H models. The solitons of these models are shown to obey anyonic statistics in the presence of a Chern-Simons term. The second system deals with the effect of the Chern-Simons term in a model for high temperature superconductivity. The coefficient of the Chern-Simons term is shown to be quantized, one of its possible values giving fermionic statistics to the solitons of this model. Finally, we study a system of spinning particles interacting with 2 + 1 gravity, the latter being described by an ISO(2,1) Chern-Simons term. An effective action for the particles is obtained by integrating out the gauge fields. Next we construct operators which exchange the particles; they are shown to satisfy the braid relations. There are ambiguities in the quantization of this system which can be exploited to give anyonic statistics to the particles. We also point out that at the level of the first-quantized theory, the usual spin-statistics relation need not apply to these particles.
Method for solid state crystal growth
Nolas, George S.; Beekman, Matthew K.
2013-04-09
A novel method for high quality crystal growth of intermetallic clathrates is presented. The synthesis of high quality pure phase crystals has been complicated by the simultaneous formation of both clathrate type-I and clathrate type-II structures. It was found that selective, phase pure, single-crystal growth of type-I and type-II clathrates can be achieved by maintaining sufficient partial pressure of a chemical constituent during slow, controlled deprivation of the chemical constituent from the primary reactant. The chemical constituent is slowly removed from the primary reactant by the reaction of the chemical constituent vapor with a secondary reactant, spatially separated from the primary reactant, in a closed volume under uniaxial pressure and heat to form the single phase pure crystals.
[The application of the prospective space-time statistic in early warning of infectious disease].
Yin, Fei; Li, Xiao-Song; Feng, Zi-Jian; Ma, Jia-Qi
2007-06-01
To investigate the application of the prospective space-time scan statistic in the early detection of infectious disease outbreaks. The prospective space-time scan statistic was tested by mimicking daily prospective analyses of bacillary dysentery data from Chengdu city in 2005 (3212 cases in 102 towns and villages), and the results were compared with those of the purely temporal scan statistic. The prospective space-time scan statistic could give specific signals in both space and time. The results for June indicated that the prospective space-time scan statistic could promptly detect outbreaks that started from a local site, and the early warning signal was powerful (P = 0.007). The purely temporal scan statistic signaled the outbreak two days later, and the signal was less powerful (P = 0.039). The prospective space-time scan statistic makes full use of the spatial and temporal information in infectious disease data and can timely and effectively detect outbreaks that start from local sites. It could be an important tool for local and national CDCs setting up early detection surveillance systems.
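For readers unfamiliar with the machinery, a toy Python sketch of a prospective Poisson space-time scan over single-region trailing windows, with simulated counts and Monte Carlo calibration; this is a simplification of the full method, not the authors' implementation:

```python
# Minimal sketch of a prospective Poisson space-time scan statistic,
# in the spirit of Kulldorff's method; data and baselines are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def llr(c, e, C):
    """Log-likelihood ratio for a candidate cluster (c observed, e expected)."""
    if c <= e:
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

# counts[region, day]: observed cases; mu[region, day]: expected baseline.
counts = rng.poisson(2.0, size=(10, 30))
mu = np.full((10, 30), 2.0)

def scan(counts, mu, max_days=7):
    """Scan all (region, trailing time window) cylinders ending today."""
    C = counts.sum()
    mu = mu * (C / mu.sum())  # condition baseline on the observed total
    best = 0.0
    for r in range(counts.shape[0]):
        for w in range(1, max_days + 1):
            c = counts[r, -w:].sum()
            e = mu[r, -w:].sum()
            best = max(best, llr(c, e, C))
    return best

obs = scan(counts, mu)
# Monte Carlo replication under the null to calibrate the signal threshold.
null = [scan(rng.poisson(mu), mu) for _ in range(99)]
p = (1 + sum(n >= obs for n in null)) / 100
print(f"max LLR = {obs:.2f}, Monte Carlo p = {p:.2f}")
```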
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of chiral resolution processes. In the present work, two novel methods, hereby called Method I and Method II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of estimates. The theoretical predictions were experimentally confirmed using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
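A hedged Python sketch of how such parameters might be estimated from mixture initial rates, assuming classical competing Michaelis-Menten kinetics; the rate law and synthetic values are illustrative assumptions, not the paper's data:

```python
# Sketch: estimate Km, Vm and E from initial rates of enantiomeric
# mixtures, assuming two enantiomers competing for one active site.
import numpy as np
from scipy.optimize import curve_fit

def rate(X, Vr, Kr, Vs, Ks):
    """Initial rate v for total concentration C and molar fraction x of R."""
    C, x = X
    R, S = C * x, C * (1 - x)
    return (Vr * R / Kr + Vs * S / Ks) / (1 + R / Kr + S / Ks)

# Synthetic "experimental" design: several mixtures (C, x).
C = np.tile([0.5, 1.0, 2.0, 5.0], 4)
x = np.repeat([0.2, 0.4, 0.6, 0.8], 4)
true = (10.0, 0.8, 4.0, 2.0)  # hypothetical Vr, Kr, Vs, Ks
v = rate((C, x), *true) * (1 + 0.02 * np.random.default_rng(1).standard_normal(C.size))

popt, pcov = curve_fit(rate, (C, x), v, p0=(5, 1, 5, 1))
Vr, Kr, Vs, Ks = popt
E = (Vr / Kr) / (Vs / Ks)  # enantiomeric ratio as a specificity ratio
print(f"E = {E:.2f}, std errors = {np.sqrt(np.diag(pcov)).round(3)}")
```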
NASA Technical Reports Server (NTRS)
Shapiro, J. H.; Yuen, H. P.; Machado Mata, J. A.
1979-01-01
In a previous paper (1978), the authors developed a method of analyzing the performance of two-photon coherent state (TCS) systems for free-space optical communications. General theorems permitting application of classical point process results to detection and estimation of signals in arbitrary quantum states were derived. The present paper examines the general problem of photoemissive detection statistics. On the basis of the photocounting theory of Kelley and Kleiner (1964) it is shown that for arbitrary pure state illumination, the resulting photocurrent is in general a self-exciting point process. The photocount statistics for first-order coherent fields reduce to those of a special class of Markov birth processes, which the authors term single-mode birth processes. These general results are applied to the structure of TCS radiation, and it is shown that the use of TCS radiation with direct or heterodyne detection results in minimal performance increments over comparable coherent-state systems. However, significant performance advantages are offered by use of TCS radiation with homodyne detection. The abstract quantum descriptions of homodyne and heterodyne detection are derived and a synthesis procedure for obtaining quantum measurements described by arbitrary TCS is given.
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Big Data Analytics for Scanning Transmission Electron Microscopy Ptychography
Jesse, S.; Chi, M.; Belianinov, A.; Beekman, C.; Kalinin, S. V.; Borisevich, A. Y.; Lupini, A. R.
2016-01-01
Electron microscopy is undergoing a transition; from the model of producing only a few micrographs, through the current state where many images and spectra can be digitally recorded, to a new mode where very large volumes of data (movies, ptychographic and multi-dimensional series) can be rapidly obtained. Here, we discuss the application of so-called “big-data” methods to high dimensional microscopy data, using unsupervised multivariate statistical techniques, in order to explore salient image features in a specific example of BiFeO3 domains. Remarkably, k-means clustering reveals domain differentiation despite the fact that the algorithm is purely statistical in nature and does not require any prior information regarding the material, any coexisting phases, or any differentiating structures. While this is a somewhat trivial case, this example signifies the extraction of useful physical and structural information without any prior bias regarding the sample or the instrumental modality. Further interpretation of these types of results may still require human intervention. However, the open nature of this algorithm and its wide availability enable broad collaborations and exploratory work necessary to enable efficient data analysis in electron microscopy. PMID:27211523
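As a toy illustration of the purely statistical clustering step (on synthetic data, not BiFeO3 micrographs), one might cluster local image patches with k-means:

```python
# Sketch of unsupervised domain differentiation via k-means on image
# patches; the "micrograph" here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic image: two domains with different mean intensity plus noise.
img = np.where(np.arange(64)[:, None] < 32, 0.0, 1.0)
img += 0.3 * rng.standard_normal((64, 64))

# Extract overlapping 5x5 patches as feature vectors.
k = 5
patches = np.lib.stride_tricks.sliding_window_view(img, (k, k)).reshape(-1, k * k)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patches)
label_map = labels.reshape(64 - k + 1, 64 - k + 1)
print("label map shape:", label_map.shape)
print("cluster fractions:", np.bincount(labels) / labels.size)
# The clustering is purely statistical: no prior information about the
# material or its phases enters the feature construction.
```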
NASA Astrophysics Data System (ADS)
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-01
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, the exchange mechanism occurs in lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H_5^+ complex, as done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
Mass production of highly-porous graphene for high-performance supercapacitors
NASA Astrophysics Data System (ADS)
Amiri, Ahmad; Shanbedi, Mehdi; Ahmadi, Goodarz; Eshghi, Hossein; Kazi, S. N.; Chew, B. T.; Savari, Maryam; Zubir, Mohd Nashrul Mohd
2016-09-01
This study reports a facile and economical method for the scalable synthesis of few-layered graphene sheets by microwave-assisted functionalization. Herein, single-layered and few-layered graphene sheets were produced by dispersion and exfoliation of functionalized graphite in ethylene glycol. Thermal treatment was used to prepare pure graphene without functional groups, labeled thermally-treated graphene (T-GR). Morphological and statistical studies of the distribution of the number of layers showed that more than 90% of the T-GR flakes had two layers or fewer and about 84% were single-layered. The microwave-assisted exfoliation approach offers the possibility of mass production of graphene at low cost and great potential for energy storage applications of graphene-based materials. Owing to its unique surface chemistry, T-GR demonstrates excellent energy storage performance, with an electrochemical capacitance much higher than that of other carbon-based nanostructures. The nanoscopic porous morphology of the T-GR-based electrodes contributed significantly to increasing the BET surface area as well as the specific capacitance of graphene. T-GR, with a capacitance of 354.1 F g-1 at 5 mV s-1 and 264 F g-1 at 100 mV s-1, exhibits excellent performance as a supercapacitor.
Whole-body and multispectral photoacoustic imaging of adult zebrafish
NASA Astrophysics Data System (ADS)
Huang, Na; Xi, Lei
2016-10-01
Zebrafish is a top vertebrate model for studying developmental biology and genetics, and it is becoming increasingly popular for studying human diseases due to its high genome similarity to that of humans and its optical transparency in embryonic stages. However, it is difficult for purely optical imaging techniques to volumetrically visualize the internal organs and structures of wild-type zebrafish in juvenile and adult stages with excellent resolution and penetration depth. Even with the establishment of mutant lines which remain transparent over the life cycle, it is still a challenge for purely optical imaging modalities to image the whole body of adult zebrafish with micro-scale resolution. Photoacoustic imaging, which combines the advantages of optical and ultrasonic imaging, provides a new way to image the whole body of the zebrafish. In this work, we developed a non-invasive photoacoustic imaging system with optimized near-infrared illumination and cylindrical scanning to image the zebrafish. The lateral and axial resolutions are 80 μm and 600 μm, respectively. A multispectral strategy with wavelengths from 690 nm to 930 nm was employed to image various organs inside the zebrafish. From the reconstructed images, most major organs and structures inside the body can be precisely imaged. Quantitative and statistical analysis of absorption for organs under illumination at different wavelengths was carried out.
Mullin, Lee; Gessner, Ryan; Kwan, James; Kaya, Mehmet; Borden, Mark A.; Dayton, Paul A.
2012-01-01
Purpose Microbubble contrast agents are currently implemented in a variety of both clinical and preclinical ultrasound imaging studies. The therapeutic and diagnostic capabilities of these contrast agents are limited by their short in-vivo lifetimes, and research to lengthen their circulation times is ongoing. In this manuscript, observations are presented from a controlled experiment performed to evaluate differences in circulation times for lipid shelled perfluorocarbon-filled contrast agents circulating within rodents as a function of inhaled anesthesia carrier gas. Methods The effects of two common anesthesia carrier gas selections - pure oxygen and medical air – were observed within five rats. Contrast agent persistence within the kidney was measured and compared for oxygen and air anesthesia carrier gas for six bolus contrast injections in each animal. Simulations were performed to examine microbubble behavior with changes in external environment gases. Results A statistically significant extension of contrast circulation time was observed for animals breathing medical air compared to breathing pure oxygen. Simulations support experimental observations and indicate that enhanced contrast persistence may be explained by reduced ventilation/perfusion mismatch and classical diffusion, in which nitrogen plays a key role by contributing to the volume and diluting other gas species in the microbubble gas core. Conclusion: Using medical air in place of oxygen as the carrier gas for isoflurane anesthesia can increase the circulation lifetime of ultrasound microbubble contrast agents. PMID:21246710
Schuurman, Tim; de Boer, Richard; Patty, Rachèl; Kooistra-Smid, Mirjam; van Zwet, Anton
2007-12-01
In the present study, three methods (NucliSens miniMAG [bioMérieux], MagNA Pure DNA Isolation Kit III Bacteria/Fungi [Roche], and a silica-guanidiniumthiocyanate [Si-GuSCN-F] procedure) for extracting DNA from stool specimens were compared with regard to analytical performance (relative DNA recovery and downstream real-time PCR amplification of Salmonella enterica DNA), stability of the extracted DNA, hands-on time (HOT), total processing time (TPT), and costs. The Si-GuSCN-F procedure showed the highest analytical performance (relative recovery of 99%, S. enterica real-time PCR sensitivity of 91%) at the lowest associated cost per extraction (euro 4.28). However, this method did require the longest HOT (144 min) and subsequent TPT (176 min) when processing 24 extractions. Both miniMAG and MagNA Pure extraction showed similar performances at first (relative recoveries of 57% and 52%, S. enterica real-time PCR sensitivity of 85%). However, when differences in the observed Ct values after real-time PCR were taken into account, MagNA Pure resulted in a significant increase in Ct value compared to both miniMAG and Si-GuSCN-F (on average +1.26 and +1.43 cycles). With regard to inhibition, all methods showed relatively low inhibition rates (< 4%), with miniMAG providing the lowest rate (0.7%). Extracted DNA was stable for at least 1 year for all methods. HOT was lowest for MagNA Pure (60 min) and TPT was shortest for miniMAG (121 min). Costs, finally, were euro 4.28 for Si-GuSCN-F, euro 6.69 for MagNA Pure and euro 9.57 for miniMAG.
Inhibitory effect of Ti-Ag alloy on artificial biofilm formation.
Nakajo, Kazuko; Takahashi, Masatoshi; Kikuchi, Masafumi; Takada, Yukyo; Okuno, Osamu; Sasaki, Keiichi; Takahashi, Nobuhiro
2014-01-01
Titanium-silver (Ti-Ag) alloy has been improved for machinability and mechanical properties, but its anti-biofilm properties have not been elucidated yet. Thus, this study aimed to evaluate the effects of Ti-Ag alloy on biofilm formation and bacterial viability in comparison with pure Ti, pure Ag and silver-palladium (Ag-Pd) alloy. Biofilm formation on the metal plates was evaluated by growing Streptococcus mutans and Streptococcus sobrinus in the presence of metal plates. Bactericidal activity was evaluated using a film contact method. There were no significant differences in biofilm formation between pure Ti, pure Ag and Ag-Pd alloy, while biofilm amounts on Ti-20% Ag and Ti-25% Ag alloys were significantly lower (p<0.05). In addition, Ti-Ag alloys and pure Ti were not bactericidal, although pure Ag and Ag-Pd alloy killed bacteria. These results suggest that Ti-20% Ag and Ti-25% Ag alloys are suitable for dental material that suppresses biofilm formation without disturbing healthy oral microflora.
Ahn, Joong Ho; Lee, Hyo-Sook; Kim, Young-Jin; Yoon, Tae Hyun; Chung, Jong Woo
2007-06-01
To compare pure-tone audiometry and the auditory steady state response (ASSR) for measuring hearing loss across frequencies and severities of hearing loss. A total of 105 subjects (168 ears; 64 male and 41 female) were enrolled in this study. We determined hearing level by measuring pure-tone audiometry and ASSR on the same day for each subject. Pure-tone audiometry and ASSR were highly correlated (r=0.96). The relationship is described by the equation PTA = 1.05 x mean ASSR - 7.6. When analyzed by frequency, the correlation coefficients were 0.94, 0.95, 0.94, and 0.92 for 0.5, 1, 2, and 4 kHz, respectively. From this study, the authors conclude that pure-tone audiometry and ASSR give very similar results, indicating that ASSR may be a good alternative method for measuring hearing level in infants and children, for whom pure-tone audiometry is not appropriate.
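Applying the reported regression is straightforward; a short Python sketch with hypothetical ASSR inputs:

```python
# Convert ASSR thresholds into estimated pure-tone averages using the
# reported regression PTA = 1.05 x mean ASSR - 7.6; inputs are hypothetical.
def pta_from_assr(mean_assr_db):
    """Estimate pure-tone average (dB HL) from mean ASSR threshold."""
    return 1.05 * mean_assr_db - 7.6

for assr in (30, 50, 70):  # hypothetical mean ASSR thresholds, dB
    print(f"mean ASSR {assr} dB -> estimated PTA {pta_from_assr(assr):.1f} dB")
```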
Gate-Driven Pure Spin Current in Graphene
NASA Astrophysics Data System (ADS)
Lin, Xiaoyang; Su, Li; Si, Zhizhong; Zhang, Youguang; Bournel, Arnaud; Zhang, Yue; Klein, Jacques-Olivier; Fert, Albert; Zhao, Weisheng
2017-09-01
The manipulation of spin current is a promising solution for low-power devices beyond CMOS. However, conventional methods, such as spin-transfer torque or spin-orbit torque for magnetic tunnel junctions, suffer from large power consumption due to frequent spin-charge conversions. An important challenge is, thus, to realize long-distance transport of pure spin current, together with efficient manipulation. Here, the mechanism of gate-driven pure spin current in graphene is presented. Such a mechanism relies on the electrical gating of carrier-density-dependent conductivity and spin-diffusion length in graphene. The gate-driven feature is adopted to realize the pure spin-current demultiplexing operation, which enables gate-controllable distribution of the pure spin current into graphene branches. Compared with the Elliott-Yafet spin-relaxation mechanism, the D'yakonov-Perel spin-relaxation mechanism results in more appreciable demultiplexing performance. The feature of the pure spin-current demultiplexing operation will allow a number of logic functions to be cascaded without spin-charge conversions and open a route for future ultra-low-power devices.
NASA Astrophysics Data System (ADS)
Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.
2010-04-01
Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, improves image noise reduction. To quantify the benefits of the ASIR method for image quality and dose reduction with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a cardiac pediatric exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option for reducing radiation dose, especially for pediatric patients.
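A sketch of one standard way to estimate the NPS from uniform-phantom regions of interest, as used to compare FBP and ASIR reconstructions; the noise images here are simulated stand-ins:

```python
# Estimate the 2D and radially averaged 1D noise power spectrum (NPS)
# from noise-only ROIs; the "ROIs" below are white-noise stand-ins.
import numpy as np

rng = np.random.default_rng(0)
rois = rng.standard_normal((32, 64, 64))  # 32 detrended noise-only ROIs
pixel_mm = 0.5

# 2D NPS: ensemble-averaged squared FFT magnitude, scaled by pixel area.
centered = rois - rois.mean(axis=(1, 2), keepdims=True)
dft = np.fft.fftshift(np.fft.fft2(centered), axes=(-2, -1))
nps2d = (np.abs(dft) ** 2).mean(axis=0) * (pixel_mm ** 2) / (64 * 64)

# Radial average to get the familiar 1D NPS curve.
fy, fx = np.indices((64, 64)) - 32
r = np.hypot(fx, fy).astype(int)
nps1d = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.bincount(r.ravel())
print("low/high-frequency NPS:", nps1d[2].round(3), nps1d[25].round(3))
```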
Krefeld-Schwalb, Antonia; Witte, Erich H; Zenker, Frank
2018-01-01
In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST results, however, implies that such results lack sufficient statistical power and thus feature unacceptably high error rates. Using data simulation to estimate the error rates of NHST results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0 hypothesis to a statistical H1 verification. Not only do RPS results feature significantly lower error rates than NHST results, RPS also addresses key deficits of a "pure" Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline has lost during the ongoing replicability crisis.
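A minimal data-simulation sketch in the spirit of the error-rate estimates described above, with illustrative parameters:

```python
# Simulate the rejection rate of repeated two-sample t-tests to show
# type-I error and the effect of underpowering; parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sim_power(delta, n, reps=5000, alpha=0.05):
    """Fraction of two-sample t-tests (n per group) rejecting H0."""
    hits = 0
    for _ in range(reps):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n) + delta
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print("type-I error (delta=0):   ", sim_power(0.0, n=20))   # ~alpha
print("power (delta=0.3, n=20):  ", sim_power(0.3, n=20))   # underpowered
print("power (delta=0.3, n=200): ", sim_power(0.3, n=200))  # adequate
```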
ERIC Educational Resources Information Center
Alfano, Candice A.
2012-01-01
Despite the approach of the "Diagnostic and Statistical Manual of Mental Disorders" (5th ed.), generalized anxiety disorder (GAD) of childhood continues to face questions as to whether it should be considered a distinct clinical disorder. A potentially critical issue embedded in this debate involves the role of functional impairment which has yet…
Anatomising proton NMR spectra with pure shift 2D J-spectroscopy: A cautionary tale
NASA Astrophysics Data System (ADS)
Kiraly, Peter; Foroozandeh, Mohammadali; Nilsson, Mathias; Morris, Gareth A.
2017-09-01
Analysis of proton NMR spectra has been a key tool in structure determination for over 60 years. A classic tool is 2D J-spectroscopy, but common problems are the difficulty of obtaining the absorption mode lineshapes needed for accurate results, and the need for a 45° shear of the final 2D spectrum. A novel 2D NMR method is reported here that allows straightforward determination of homonuclear couplings, using a modified version of the PSYCHE method to suppress couplings in the direct dimension. The method illustrates the need for care when combining pure shift data acquisition with multiple pulse methods.
NASA Astrophysics Data System (ADS)
Ashour, Safwan; Bayram, Roula
2012-12-01
A new, simple and rapid spectrophotometric method has been developed and validated for the assay of two macrolide drugs, azithromycin (AZT) and erythromycin (ERY), in pure and pharmaceutical formulations. The proposed method is based on the reaction of AZT and ERY with sodium 1,2-naphthoquinone-4-sulphonate (NQS) in alkaline medium at 25 °C to form an orange-colored product with a maximum absorption peak at 452 nm. All variables were studied to optimize the reaction conditions and a reaction mechanism was postulated. Beer's law was obeyed in the concentration ranges 1.5-33.0 and 0.92-8.0 μg mL-1, with limit of detection values of 0.026 and 0.063 μg mL-1 for AZT and ERY, respectively. The calculated molar absorptivity values are 4.3 × 10^4 and 12.3 × 10^4 L mol-1 cm-1 for AZT and ERY, respectively. The proposed method was successfully applied to the determination of AZT and ERY in formulations, and the results tallied well with the label claims. The results were statistically compared with those of an official method by applying the Student's t-test and F-test. No interference was observed from the concomitant substances normally added to preparations.
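A generic Beer's-law calibration sketch for an assay of this kind; the absorbance values are hypothetical, not the paper's measurements:

```python
# Linear Beer's-law calibration and back-calculation of an unknown;
# all numbers below are hypothetical illustrations.
import numpy as np

conc = np.array([1.5, 5.0, 10.0, 20.0, 33.0])          # ug/mL (AZT range)
absorbance = np.array([0.02, 0.07, 0.14, 0.28, 0.46])  # at 452 nm

slope, intercept = np.polyfit(conc, absorbance, 1)
unknown_abs = 0.21
print(f"estimated concentration: {(unknown_abs - intercept) / slope:.1f} ug/mL")
```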
A search for evidence of solar rotation in Super-Kamiokande solar neutrino dataset
NASA Astrophysics Data System (ADS)
Desai, Shantanu; Liu, Dawei W.
2016-09-01
We apply the generalized Lomb-Scargle (LS) periodogram, proposed by Zechmeister and Kurster, to the solar neutrino data from Super-Kamiokande (Super-K), using data from its first five years. For each peak in the LS periodogram, we evaluate the statistical significance in two different ways. The first method involves calculating the False Alarm Probability (FAP) using non-parametric bootstrap resampling, and the second is calculating the difference in Bayesian Information Criterion (BIC) between the null hypothesis, viz. that the data contain only noise, and the hypothesis that the data contain a peak at a given frequency. Using these methods, we scan the frequency range between 7 and 14 cycles per year to look for any peaks caused by solar rotation, since this is the proposed explanation for the statistically significant peaks found by Sturrock and collaborators in the Super-K dataset. From our analysis, we confirm that, as in Sturrock et al., the maximum peak occurs at a frequency of 9.42/year, corresponding to a period of 38.75 days. The FAP for this peak is about 1.5% and the difference in BIC (between pure white noise and this peak) is about 4.8. We note that the significance depends on the frequency band used to search for peaks, and hence it is important to use a search band appropriate for solar rotation. However, the significance of this peak based on the BIC value is marginal, and more data are needed to confirm whether the peak persists and is real.
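A sketch of the search pipeline on synthetic data, using astropy's floating-mean Lomb-Scargle (which implements the Zechmeister-Kurster formalism) and a shuffle-based bootstrap FAP; frequencies and data are illustrative:

```python
# Generalized LS periodogram over the 7-14 cycles/yr band plus a
# bootstrap false alarm probability; the "flux" series is synthetic noise.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 5 * 365.25, 200))  # observation days, ~5 years
y = rng.standard_normal(200)                  # synthetic white-noise signal

# Search 7-14 cycles/year, converted to cycles/day.
freq = np.linspace(7, 14, 2000) / 365.25
power = LombScargle(t, y, fit_mean=True).power(freq)
peak = power.max()

# Bootstrap FAP: reshuffle y over the same time stamps, track max power.
boots = [LombScargle(t, rng.permutation(y), fit_mean=True).power(freq).max()
         for _ in range(200)]
fap = np.mean([b >= peak for b in boots])
print(f"peak at {freq[power.argmax()] * 365.25:.2f} cycles/yr, FAP = {fap:.2f}")
```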
Sakamoto, Torao; Horiuchi, Akira; Nakayama, Yoshiko
2013-08-01
Endoscopic evaluation of swallowing (EES) is not commonly used by gastroenterologists to evaluate swallowing in patients with dysphagia. To use transnasal endoscopy to identify factors predicting successful or failed swallowing of pureed foods in elderly patients with dysphagia. EES of pureed foods was performed by a gastroenterologist using a small-calibre transnasal endoscope. Factors related to successful versus unsuccessful swallowing of pureed foods were analyzed with regard to age, comorbid diseases, swallowing activity, saliva pooling, vallecular residues, pharyngeal residues and airway penetration⁄aspiration. Unsuccessful swallowing was defined in patients who could not eat pureed foods at bedside during hospitalization. Logistic regression analysis was used to identify independent predictors of swallowing of pureed foods. During a six-year period, 458 consecutive patients (mean age 80 years [range 39 to 97 years]) were considered for the study, including 285 (62%) men. Saliva pooling, vallecular residues, pharyngeal residues and penetration⁄aspiration were found in 240 (52%), 73 (16%), 226 (49%) and 232 patients (51%), respectively. Overall, 247 patients (54%) failed to swallow pureed foods. Multivariate logistic regression analysis demonstrated that the presence of pharyngeal residues (OR 6.0) and saliva pooling (OR 4.6) occurred significantly more frequently in patients who failed to swallow pureed foods. Pharyngeal residues and saliva pooling predicted impaired swallowing of pureed foods. Transnasal EES performed by a gastroenterologist provided a unique bedside method of assessing the ability to swallow pureed foods in elderly patients with dysphagia.
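A hedged sketch of the multivariate logistic regression step on simulated data, recovering odds ratios for the two predictors named above; the assumed effect sizes are arbitrary:

```python
# Logistic regression on simulated bedside-swallowing data; odds ratios
# are exponentiated coefficients. All effect sizes here are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
pharyngeal = rng.integers(0, 2, n)   # pharyngeal residues present (0/1)
saliva = rng.integers(0, 2, n)       # saliva pooling present (0/1)
logit = -1.0 + 1.8 * pharyngeal + 1.5 * saliva  # assumed true model
fail = rng.random(n) < 1 / (1 + np.exp(-logit)) # failed to swallow puree

X = sm.add_constant(np.column_stack([pharyngeal, saliva]))
fit = sm.Logit(fail.astype(float), X).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])
print("OR (pharyngeal residues, saliva pooling):", odds_ratios.round(1))
```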
Monzote, L; Pastor, J; Scull, R; Gille, L
2014-01-01
Chenopodium ambrosioides has been used for centuries by native peoples to treat parasitic diseases. To compare the in vivo anti-leishmanial activity of the essential oil (EO) from C. ambrosioides and its major components (ascaridole, carvacrol and caryophyllene oxide), the anti-leishmanial effect was evaluated in BALB/c mice infected with Leishmania amazonensis and treated with the EO, the main compounds, and an artificial mix of the pure components by the intralesional route at 30 mg/kg every 4 days for 14 days. Disease progression and parasite burden in infected tissues were determined. The EO prevented lesion development compared with untreated and vehicle-treated animals (p<0.05). In addition, the efficacy of the EO was statistically superior (p<0.05) to that of glucantime-treated animals. No such effects were observed with treatment using the pure components, and the mix of pure compounds caused the death of animals after 3 days of treatment. Our results demonstrate the superiority of the EO against experimental cutaneous leishmaniasis caused by L. amazonensis. Copyright © 2014 Elsevier GmbH. All rights reserved.
Device-independent characterizations of a shared quantum state independent of any Bell inequalities
NASA Astrophysics Data System (ADS)
Wei, Zhaohui; Sikora, Jamie
2017-03-01
In a Bell experiment two parties share a quantum state and perform local measurements on their subsystems separately, and the statistics of the measurement outcomes are recorded as a Bell correlation. For any Bell correlation, it turns out that a quantum state with minimal size that is able to produce this correlation can always be pure. In this work, we first exhibit two device-independent characterizations for the pure state that Alice and Bob share using only the correlation data. Specifically, we give two conditions that the Schmidt coefficients must satisfy, which can be tight, and have various applications in quantum tasks. First, one of the characterizations allows us to bound the entanglement between Alice and Bob using Renyi entropies and also to bound the underlying Hilbert space dimension. Second, when the Hilbert space dimension bound is tight, the shared pure quantum state has to be maximally entangled. Third, the second characterization gives a sufficient condition that a Bell correlation cannot be generated by particular quantum states. We also show that our results can be generalized to the case of shared mixed states.
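For concreteness, the Schmidt coefficients that both characterizations constrain can be computed by singular value decomposition; a small sketch on an arbitrary example state, not one from the paper:

```python
# Schmidt coefficients of a bipartite pure state via SVD, plus a
# Renyi-2 entanglement entropy of the reduced state.
import numpy as np

# |psi> = 0.8|00> + 0.6|11> on a 2x2 system, reshaped into a matrix.
psi = np.array([0.8, 0.0, 0.0, 0.6])
M = psi.reshape(2, 2)

schmidt = np.linalg.svd(M, compute_uv=False)  # Schmidt coefficients
print("Schmidt coefficients:", schmidt)

p = schmidt ** 2                              # reduced-state eigenvalues
print("Renyi-2 entropy:", -np.log2(np.sum(p ** 2)))
```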
A Comparison of Pure and Comorbid CD/ODD and Depression
ERIC Educational Resources Information Center
Ezpeleta, Lourdes; Domenech, Josep M.; Angold, Adrian
2006-01-01
Background: We studied the symptomatology of conduct/oppositional defiant disorder and major depression/dysthymic disorder in "pure" and comorbid presentations. Method: The sample comprised 382 children of 8 to 17 years of age attending for psychiatric outpatient consultation. Ninety-two had depressive disorders without conduct disorders, 165…
Takatsu, Akiko
2009-06-01
There is an increasing demand to establish a metrological traceability system for in vitro diagnostics and medical devices. Pure substance-type reference materials are playing key roles in metrological traceability, because they form the basis for many traceability chains in chemistry. The National Metrology Institute of Japan (NMIJ), in the National Institute of Advanced Industrial Science and Technology (AIST), has been developing purity-certified reference materials (CRMs) in this field, such as cholesterol, creatinine, and urea. In the New Energy and Industrial Technology Development Organization (NEDO) project, entitled: "Research and Development to Promote the Creation and Utilization of an Intellectual Infrastructure: Development of Reference Materials for Laboratory Medicine", several pure substance-type CRMs were developed. For a pure protein solution CRM, amino acid analysis and nitrogen determination were chosen as the certification methods. The development and certification processes for the C-reactive protein (CRP) solution CRM were completed, with the recombinant human CRP solution as a candidate material. This CRP solution CRM is now available as NMIJ CRM. For cortisol CRM, a purified candidate material and highly pure primary reference material were prepared. Each impure compound in the materials was identified and quantified. The pure cortisol CRM will be available in 2009. These two CRMs provide a traceability link between routine clinical methods and the SI unit.
Augustine, Robin; Ashkenazi, Dana Levin; Arzi, Roni Sverdlov; Zlobin, Vita; Shofti, Rona; Sosnik, Alejandro
2018-05-01
Nanonization has been extensively investigated to increase the oral bioavailability of hydrophobic drugs in general and of antiretrovirals (ARVs) used in the therapy of the human immunodeficiency virus (HIV) infection in particular. We anticipated that in the case of protease inhibitors, a family of pH-dependent ARVs that display high aqueous solubility under the acid conditions of the stomach and extremely low solubility under the neutral ones of the small intestine, this strategy might fail owing to an uncontrolled dissolution-re-precipitation process that will take place along the gastrointestinal tract. To tackle this biopharmaceutical challenge, in this work we designed, produced and fully characterized a novel Nanoparticle-in-Microparticle Delivery System (NiMDS) comprised of pure nanoparticles of the first-line protease inhibitor darunavir (DRV) and its boosting agent ritonavir (RIT) encapsulated within film-coated microparticles. For this, a clinically relevant combination of pure DRV and RIT nanoparticles was synthesized by a sequential nanoprecipitation/solvent diffusion and evaporation method employing sodium alginate as viscosity stabilizer. Then, the pure nanoparticles were encapsulated within calcium alginate/chitosan microparticles that were film-coated with a series of poly(methacrylate) copolymers with differential solubility in the gastrointestinal tract. This coating ensured full stability under gastric-like pH and sustained drug release under the intestinal one. Pharmacokinetic studies conducted in albino Sprague Dawley rats showed that DRV/RIT-loaded NiMDSs containing 17% w/w drug loading based on dry weight significantly increased the oral bioavailability of DRV by 2.3-fold with respect to both the unprocessed and the nanonized DRV/RIT combinations, which showed statistically similar performance. Moreover, they highlighted the limited advantage of drug nanonization alone to improve the oral pharmacokinetics of protease inhibitors and the potential of our novel delivery approach to improve the oral pharmacokinetics of nanonized poorly water-soluble drugs displaying pH-dependent solubility. Protease inhibitors (PIs) are gold-standard drugs in many ARV cocktails. Darunavir (DRV) is the latest approved PI and it is included in the 20th WHO Model List of Essential Medicines. PIs are poorly water-soluble at intestinal pH and more soluble under gastric conditions. Drug nanonization represents one of the most common nanotechnology strategies to increase the dissolution rate of hydrophobic drugs and thus, their oral bioavailability. For instance, pure drug nanosuspensions became the most clinically relevant nanoformulation. However, according to the physicochemical properties of PIs, nanonization does not appear as a very beneficial strategy due to the fast dissolution rate anticipated under the acid conditions of the stomach and their uncontrolled recrystallization and precipitation in the small intestine, which might result in the formation of particles of unpredictable size and structure (e.g., crystallinity and polymorphism) and consequently, unknown dissolution rate and bioavailability. In this work, we developed a sequential nanoprecipitation method for the production of pure nanoparticles of DRV and its boosting agent ritonavir in a clinically relevant 8:1 wt ratio using alginate as viscosity stabilizer, and used this nanosuspension to produce a novel kind of nanoparticle-in-microparticle delivery system that was fully characterized and whose pharmacokinetics were assessed in rats. The most significant points of the current manuscript are.
Copyright © 2018 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Stefanidis, Gerasimos; Karamanolis, George; Viazis, Nikos; Sgouros, Spiros; Papadopoulou, Efthimia; Ntatsakis, Konstantinos; Mantides, Apostolos; Nastos, Helias
2003-02-01
Whether the type of electrosurgical current used for endoscopic sphincterotomy influences the frequency of postsphincterotomy complications is unknown. One hundred eighty-six patients with choledocholithiasis were prospectively randomized to undergo endoscopic sphincterotomy with pure cutting current (n = 62, Group A), blended current (n = 62, Group B), or pure cutting initially followed by blended current (n = 62, Group C). Serum concentrations of amylase and lipase were evaluated in all patients 12 and 24 hours after sphincterotomy. Clinical pancreatitis was classified as mild, moderate, or severe. Postsphincterotomy bleeding was defined as a decrease in hematocrit of greater than 5%. Serum concentrations of amylase and lipase were greater in Groups B and C at 12 and 24 hours after the procedure, as compared with Group A. Clinical mild pancreatitis occurred in 2 patients in Group A (3.2%), 8 in Group B (12.9%), and in 8 in Group C (12.9%). The differences were statistically significant for Group A compared with either Group B or Group C (p = 0.048). Postsphincterotomy bleeding occurred in 3 patients (1.6%), one in each group. The use of pure cutting electrosurgical current during endoscopic sphincterotomy in patients with choledocholithiasis is associated with a lesser degree of pancreatic enzyme elevation and lower frequency of pancreatitis, whereas bleeding is not increased compared with blended current. Changing from pure cutting to blended current after the first 3 to 5 mm of the incision is associated with an increased rate of complications compared to the use of pure cutting current for the entire sphincterotomy.
Amali, Amin; Mahdi, Parvane; Karimi Yazdi, Alireza; Khorsandi Ashtiyani, Mohammad Taghi; Yazdani, Nasrin; Vakili, Varasteh; Pourbakht, Akram
2014-01-01
Vestibular involvement has long been observed in otosclerotic patients. Among vestibular structures, the saccule has the closest anatomical proximity to the sclerotic foci, so it is the vestibular structure most prone to be affected during the otosclerosis process. The aim of this study was to investigate saccular function in patients suffering from otosclerosis by means of the Vestibular Evoked Myogenic Potential (VEMP). The material consisted of 30 otosclerosis patients and 20 control subjects. All participants underwent audiometric and VEMP testing. Analysis of the test results revealed that the mean values of the Air-Conducted Pure Tone Average (AC-PTA) and Bone-Conducted Pure Tone Average (BC-PTA) in patients were 45.28 ± 15.57 and 19.68 ± 10.91 dB, respectively, and the calculated four-frequency Air Bone Gap (ABG) was 25.64 ± 9.95 dB. The VEMP response was absent in 14 (28.57%) otosclerotic ears. A statistically significant increase in the latency of p13 was found in the affected ears (P=0.004); differences in n23 latency did not reach statistical significance (P=0.112). The difference in p13-n23 amplitude between the two study groups was statistically significant (P=0.009), indicating that the patients with otosclerosis had lower amplitudes. This study suggests that, due to the direct biotoxic effect of the materials released from the otosclerosis foci on saccular receptors, there might be a possibility of vestibular dysfunction in otosclerotic patients.
Mumtaz, Amina; Hussain, Shahid; Yasir, Muhammad
2014-09-01
A simple eco-friendly method has been developed for the determination of hydroxyzine dihydrochloride in pure and pharmaceutical dosage forms. Both a conventional system and a microwave-assisted procedure are used for the development of color. The blue colored complex is measured spectrophotometrically at 750 nm. A peak shift in the FT-IR spectra also indicated the formation of the complex. The reaction obeys Beer's law over the concentration range of 50-250 μg/mL of hydroxyzine dihydrochloride. The precision (intra-day and inter-day RSD) for the drug is not greater than 0.79%, and recoveries were found to be in the range of 99.01-99.99%. The designed method is applicable for the periodic determination of hydroxyzine dihydrochloride in pure and pharmaceutical dosage forms.
Measurement of complete and continuous Wigner functions for discrete atomic systems
NASA Astrophysics Data System (ADS)
Tian, Yali; Wang, Zhihui; Zhang, Pengfei; Li, Gang; Li, Jie; Zhang, Tiancai
2018-01-01
We measure complete and continuous Wigner functions of a two-level cesium atom in both a nearly pure state and highly mixed states. We apply the method [T. Tilma et al., Phys. Rev. Lett. 117, 180401 (2016), 10.1103/PhysRevLett.117.180401] of strictly constructing continuous Wigner functions for qubit or spin systems. We find that the Wigner function of all pure states of a qubit has negative regions and the negativity completely vanishes when the purity of an arbitrary mixed state is less than 2/3 . We experimentally demonstrate these findings using a single cesium atom confined in an optical dipole trap, which undergoes a nearly pure dephasing process. Our method can be applied straightforwardly to multi-atom systems for measuring the Wigner function of their collective spin state.
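A numerical check of the purity-2/3 threshold using a spin-1/2 continuous Wigner kernel, written here, as an assumption, as Delta(n) = (I + sqrt(3) n·sigma)/2 in the spirit of the Tilma et al. construction:

```python
# Minimum of the qubit Wigner function over the sphere versus purity;
# negativity should vanish exactly at purity 2/3. The kernel form is an
# assumption stated in the lead-in, not taken verbatim from the paper.
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def min_wigner(r):
    """Minimum of W over the sphere for rho = (I + r*sigma_z)/2."""
    rho = 0.5 * (I2 + r * sz)  # Bloch vector along z without loss of generality
    thetas = np.linspace(0, np.pi, 400)
    w = [np.real(np.trace(rho @ (0.5 * (I2 + np.sqrt(3) *
         (np.sin(t) * sx + np.cos(t) * sz))))) for t in thetas]
    return min(w)

for r in (1.0, 1 / np.sqrt(3) + 0.01, 1 / np.sqrt(3) - 0.01):
    purity = 0.5 * (1 + r ** 2)
    print(f"purity {purity:.3f}: min W = {min_wigner(r):+.4f}")
# Negativity disappears at |r| = 1/sqrt(3), i.e. purity = 0.5*(1 + 1/3) = 2/3.
```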
Effects of Bluetooth device electromagnetic field on hearing: pilot study.
Balachandran, R; Prepageran, N; Prepagaran, N; Rahmat, O; Zulkiflee, A B; Hufaida, K S
2012-04-01
The Bluetooth wireless headset has been promoted as a 'hands-free' device with low emission of electromagnetic radiation. To evaluate potential changes in hearing function as a consequence of using Bluetooth devices, we assessed changes in pure tone audiometry and distortion product otoacoustic emissions. Prospective study. Thirty adult volunteers were exposed to a Bluetooth headset device (1) on the 'standby' setting for 6 hours and (2) at full power for 10 minutes. Post-exposure hearing was evaluated using pure tone audiometry and distortion product otoacoustic emission testing. There were no statistically significant changes in hearing, as measured above, following either exposure type. Exposure to the electromagnetic field emitted by a Bluetooth headset, as described above, did not decrease hearing thresholds or alter distortion product otoacoustic emissions.
Prashanth, Kudige Nagaraj; Swamy, Nagaraju; Basavaiah, Kanakapura
2016-01-01
Two simple and selective spectrophotometric methods are described for the determination of trifluoperazine dihydrochloride (TFH) as the base form (TFP) in bulk drug and in tablets. The methods are based on the molecular charge-transfer complexation of trifluoperazine base (TFP) with either 2,4,6-trinitrophenol (picric acid; PA) or 2,4-dinitrophenol (DNP). The yellow colored radical anions formed are quantified at 410 nm (PA method) or 415 nm (DNP method). The assay conditions were optimized for both methods. Beer's law is obeyed over the concentration ranges of 1.5-24.0 μg/mL in the PA method and 5.0-80.0 μg/mL in the DNP method, with respective molar absorptivity values of 1.03 x 10^4 and 6.91 x 10^3 L mol-1 cm-1. The reaction stoichiometry in both methods was evaluated by Job's method of continuous variations and was found to be 1:2 (TFP:PA, TFP:DNP). The developed methods were successfully applied to the determination of TFP in pure form and in commercial tablets with good accuracy and precision. Statistical comparison of the results was performed using Student's t-test and the F-ratio at the 95% confidence level, and the results showed no significant difference between the reference and proposed methods with regard to accuracy and precision. Further, the accuracy and reliability of the methods were confirmed by recovery studies via the standard addition technique.
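A toy sketch of Job's method of continuous variations for a 1:2 complex, with idealized noise-free data rather than the paper's measurements:

```python
# Job's method: absorbance vs. mole fraction peaks at the stoichiometric
# ratio. For D + 2A -> DA2, complex formation is limited by min(x, (1-x)/2).
import numpy as np

x = np.linspace(0.05, 0.95, 19)          # mole fraction of drug
absorbance = np.minimum(x, (1 - x) / 2)  # idealized, noise-free signal

x_peak = x[np.argmax(absorbance)]
print(f"peak at x = {x_peak:.2f} -> drug:reagent = "
      f"{x_peak:.2f}:{1 - x_peak:.2f} ~ 1:2")
```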
Characterization of pure and composite resorcinol formaldehyde aerogels doped with silver
NASA Astrophysics Data System (ADS)
Attia, S. M.; Abdelfatah, M. S.; Mossad, M. M.
2017-07-01
A series of resorcinol-formaldehyde (RF) aerogel composites with silver nanoparticles was prepared by the sol-gel method at different silver dopant concentrations. FTIR spectra of the pure and composite RF aerogels show six absorption bands attributed to -OH groups bonded to the benzene ring, stretching of -CH2- bonds and aromatic ring stretching. The FTIR results confirm that the silver particles do not interact with the aerogel network. The UV-visible spectrum of pure silver shows an absorbance peak at 420 nm attributed to the surface plasmon excitation of silver nanospheres. The UV-visible spectra of the pure and composite RF aerogels show a steep decrease of absorption with wavelength beyond 500 nm, giving the samples a reddish-brown color. TEM and SEM images of the pure and composite RF aerogels revealed that the textural arrangement of the RF aerogels can be described as densely packed small nodules.
Abu El-Enin, Mohammed Abu Bakr; Al-Ghaffar Hammouda, Mohammed El-Sayed Abd; El-Sherbiny, Dina Tawfik; El-Wasseef, Dalia Rashad; El-Ashry, Saadia Mahmoud
2016-02-01
A valid, sensitive and rapid spectrofluorimetric method has been developed and validated for the determination of both tadalafil (TAD) and vardenafil (VAR) in their pure forms, in their tablet dosage forms, and spiked in human plasma. The method is based on measurement of the native fluorescence of both drugs in acetonitrile at λem 330 and 470 nm after excitation at 280 and 275 nm for tadalafil and vardenafil, respectively. Linear relationships were obtained over the concentration ranges 4-40 and 10-250 ng/mL, with detection limits of 1 and 3 ng/mL, for tadalafil and vardenafil, respectively. Various experimental parameters affecting the fluorescence intensity were carefully studied and optimized. The developed method was applied successfully to the determination of tadalafil and vardenafil in bulk drugs and tablet dosage forms. Moreover, the high sensitivity of the proposed method permitted their determination in spiked human plasma. The developed method was validated in terms of specificity, linearity, lower limit of quantification (LOQ), lower limit of detection (LOD), precision and accuracy. The mean recoveries of the analytes in pharmaceutical preparations were in agreement with those obtained from the comparison methods, as revealed by statistical analysis of the obtained results using Student's t-test and the variance ratio F-test. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least-squares best-fit method is applied to the three-parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume- or surface-flawed specimens subjected to pure tension, pure bending, or four-point or three-point loading. Several illustrative example problems are provided.
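The report's exact fitting procedure is not reproduced in the abstract; as a rough illustration only, the sketch below performs a least-squares fit of a three-parameter Weibull strength model by linearizing the CDF and scanning the threshold stress for the best straight-line fit. The median-rank probability estimate, the threshold scan and the synthetic strength data are all assumptions made for demonstration.

```python
import numpy as np

def fit_weibull3_lsq(stress):
    """Least-squares fit of F(s) = 1 - exp(-((s - su)/s0)**m), s > su,
    via the linearization ln(-ln(1-F)) = m*ln(s - su) - m*ln(s0),
    scanning the threshold su for the smallest residual."""
    s = np.sort(np.asarray(stress, float))
    n = len(s)
    F = (np.arange(1, n + 1) - 0.5) / n            # median-rank estimate
    y = np.log(-np.log(1.0 - F))
    best = None
    for su in np.linspace(0.0, 0.99 * s[0], 200):  # threshold candidates
        x = np.log(s - su)
        m, c = np.polyfit(x, y, 1)                 # slope m, intercept c
        resid = np.sum((y - (m * x + c)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, su, m, np.exp(-c / m))  # s0 = exp(-c/m)
    _, su, m, s0 = best
    return m, s0, su

rng = np.random.default_rng(0)
data = 120.0 + 80.0 * rng.weibull(7.0, size=30)    # synthetic strengths, MPa
m, s0, su = fit_weibull3_lsq(data)
print(f"shape m = {m:.2f}, scale s0 = {s0:.1f} MPa, threshold su = {su:.1f} MPa")
```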
Non-aqueous solution preparation of doped and undoped LixMnyOz
Boyle, T.J.; Voigt, J.A.
1997-05-20
A method is described for the generation of phase-pure doped and undoped LixMnyOz precursors. The method of this invention uses organic solutions instead of aqueous solutions or non-solution ball milling of dry powders to produce phase-pure precursors. These precursors can be used as cathodes for lithium-polymer electrolyte batteries. Dopants may be homogeneously incorporated to alter the characteristics of the powder.
Bauzá, Antonio; Alkorta, Ibon; Frontera, Antonio; Elguero, José
2013-11-12
In this article, we report a comprehensive theoretical study of halogen, chalcogen, and pnicogen bonding interactions using a large set of pure and hybrid functionals and some ab initio methods. We have observed that the pure and some hybrid functionals largely overestimate the interaction energies when the donor atom is anionic (Cl(-) or Br(-)), especially in the halogen bonding complexes. To evaluate the reliability of the different DFT (BP86, BP86-D3, BLYP, BLYP-D3, B3LYP, B97-D, B97-D3, PBE0, HSE06, APFD, and M06-2X) and ab initio (MP2, RI-MP2, and HF) methods, we have compared the binding energies and equilibrium distances to those obtained using the CCSD(T)/aug-cc-pVTZ level of theory, as reference. The addition of the latest available correction for dispersion (D3) to pure functionals is not recommended for the calculation of halogen, chalcogen, and pnicogen complexes with anions, since it further contributes to the overestimation of the binding energies. In addition, in chalcogen bonding interactions, we have studied how the hybridization of the chalcogen atom influences the interaction energies.
Statistics of Experiments on Cluster Formation and Transport in a Gravitational Field
NASA Technical Reports Server (NTRS)
Izmailov, Alexander F.; Myerson, Allan S.
1993-01-01
Metastable state relaxation in a gravitational field is investigated in the case of non-critical binary solutions. A relaxation description is presented in terms of the time-dependent Ginzburg-Landau formalism for a non-conserved order parameter. A new ansatz for the solution of the corresponding nonlinear stochastic partial differential equation is discussed. It is proved that, for the supersaturated solution under consideration, metastable state relaxation in a gravitational field leads to the formation of solute concentration gradients due to the sedimentation of subcritical solute clusters. A discussion of possible methods for comparing theoretical results with experimental data on solute sedimentation in a gravitational field is presented. It is shown that, in order to describe these experiments, it is necessary to deal both with the value of the solute concentration gradient and with its formation rate. The stochastic nature of the sedimentation process is shown.
The thermodynamic properties of normal liquid helium 3
NASA Astrophysics Data System (ADS)
Modarres, M.; Moshfegh, H. R.
2009-09-01
The thermodynamic properties of normal liquid helium 3 are calculated using the lowest order constrained variational (LOCV) method. The Landau Fermi liquid model and the Fermi-Dirac distribution function are taken as the statistical model for the uncorrelated quantum fluid picture, and the Lennard-Jones and Aziz potentials are used in our truncated cluster expansion (LOCV) to calculate the correlated energy. The single-particle energy is treated variationally through an effective mass. The free energy, pressure, entropy, chemical potential and liquid phase diagram, as well as the helium 3 specific heat, are evaluated, discussed and compared with the corresponding available experimental data. It is found that the critical temperature for the existence of the pure gas phase is about 4.90 K (4.45 K), which is higher than the experimental value of 3.3 K, and that the helium 3 flashing temperature is around 0.61 K (0.50 K) for the Lennard-Jones (Aziz) potential.
Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.
Natarajan, Shreedhar; Jakobsson, Eric
2009-06-12
A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identification. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mapping between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.
[Impact of MP3 player use on hearing: a survey of middle school students].
Xu, Zhan; Li, Zonghua; Chen, Yang; He, Ya; Chunyu, Xiujie; Wang, Fangyuan; Zhang, Pengzhi; Gao, Lei; Qiu, Shuping; Liu, Shunli; Qiao, Li; Qiu, Jianhua
2011-02-01
To understand MP3 player usage and its effects on the hearing of middle school students in Xi'an, and to discuss control strategies. A stratified random cluster sampling method was used to survey 1567 middle school students in Xi'an by questionnaire, ear examination and hearing examination; the data were analysed with the SPSS 13.0 statistical software. 1) The rate of MP3 player ownership among the middle school students was 85.2%. The average daily use time was (1.41 +/- 1.11) h. 2) The pure tone hearing thresholds of the noise group were significantly higher than those of the control group (P<0.01), and the detection rate of hearing loss increased with increasing MP3 use. 3) The detection rate of symptoms also increased with increasing MP3 use. MP3 player use can harm the hearing of middle school students and can result in neurasthenic syndrome.
NASA Astrophysics Data System (ADS)
Song, Yuejun; Huang, Yanhe; Jie, Yang
2017-08-01
Soil and water loss in Pinus massoniana forests is an urgent environmental problem in the red soil region of southern China. Using field monitoring, analogy and statistical analysis, the characteristics of soil and water loss in Pinus massoniana forests of the Quaternary red soil region were analyzed for 30 rainfall events. The results show that the rainfall-runoff-sediment relationship models for the pure Pinus massoniana plot differed slightly from those for the bare control plot, but all were univariate quadratic regression models. The contributions of runoff and sediment differed among rain types, and soil and water loss in the Pinus massoniana forest was most prominent under moderate rain. The merging effect of the sparse Pinus massoniana canopy on raindrops aggravated the degree of soil and water loss to some extent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-01-05
SandiaMCR was developed to identify pure components and their concentrations from spectral data. This software efficiently implements multivariate curve resolution-alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 (for trilinear analysis) algorithms. The alternating least squares methods can be used to determine the composition without, or with incomplete, prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, data selection and compression options for the efficient processing of large data sets. It includes numerous options such as the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
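SandiaMCR itself is not reproduced here; the following minimal sketch only shows the core MCR-ALS idea of alternating least-squares factorization of a spectral data matrix into non-negative concentrations and pure-component spectra. Non-negativity is imposed by simple clipping, a crude stand-in for the constrained solvers a production code would use; all names and data are illustrative.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Factor D (samples x channels) as D ~= C @ S.T with C, S >= 0,
    by alternating least squares with clipping (illustrative only)."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))          # initial spectra
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0, None)   # solve for C
        S = np.clip((np.linalg.pinv(C) @ D).T, 0, None) # solve for S
    return C, S

# synthetic mixture data: two Gaussian 'pure spectra', random concentrations
x = np.linspace(0, 100, 200)
S_true = np.stack([np.exp(-(x - 30)**2 / 50), np.exp(-(x - 65)**2 / 80)], axis=1)
C_true = np.random.default_rng(1).random((40, 2))
D = C_true @ S_true.T + 0.01 * np.random.default_rng(2).standard_normal((40, 200))

C_est, S_est = mcr_als(D, 2)
print("reconstruction error:", np.linalg.norm(D - C_est @ S_est.T))
```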
Solubility of gases and liquids in glassy polymers.
De Angelis, Maria Grazia; Sarti, Giulio C
2011-01-01
This review discusses a macroscopic thermodynamic procedure to calculate the solubility of gases, vapors, and liquids in glassy polymers that is based on the general procedure provided by the nonequilibrium thermodynamics for glassy polymers (NET-GP) method. Several examples are presented using various nonequilibrium (NE) models including lattice fluid (NELF), statistical associating fluid theory (NE-SAFT), and perturbed hard sphere chain (NE-PHSC). Particular applications illustrate the calculation of infinite-dilution solubility coefficients in different glassy polymers and the prediction of solubility isotherms for different gases and vapors in pure polymers as well as in polymer blends. The determination of model parameters is discussed, and the predictive abilities of the models are illustrated. Attention is also given to the solubility of gas mixtures and solubility isotherms in nanocomposite mixed matrices. The fractional free volume determined from solubility data can be used to correlate solute diffusivities in mixed matrices.
[Longterm results of mitral valve replacement (author's transl)].
Erhard, W; Reichmann, M; Delius, W; Sebening, H; Herrmann, G
1977-04-22
210 patients were followed up by the actuarial method for over 5 years after isolated mitral valve replacement or double valve replacement. After isolated valve replacement, the one-month survival including operative mortality was 92+/-2%. Survival after one year was 83+/-3% and after 5 years 66+/-7%. The five-year survival of patients in preoperative class III (according to the NYHA) was 73+/-8% and of class IV 57+/-8% (P less than or equal to 0.1). A comparison of valve replacements for pure mitral stenosis or mitral insufficiency showed no statistically significant differences. In the 37 patients who had a double valve replacement the survival risk was not increased in comparison with those patients who had had a single valve replacement. Age above 45 years and a preoperatively markedly raised pulmonary arteriolar resistance reduced the chances of survival.
NASA Astrophysics Data System (ADS)
Fischer, M.; Noormets, A.; Domec, J. C.; Rosa, R.; Williamson, J.; Boone, J.; Sucre, E.; Trnka, M.; King, J.
2015-12-01
Intercropping bioenergy grasses within traditional pine silvicultural systems provides an opportunity for economic diversification and regional bioenergy production in a way that complements existing land use systems. Bioenergy intercropping in pine plantations does not compete with food production for land, and it is thought that it will increase ecosystem resource-use efficiencies. As the frequency and intensity of drought are expected to increase with the changing climate, maximizing the water-use efficiency of intercropped bioenergy systems will become increasingly important for long-term economic and environmental sustainability. The present study focuses on evapotranspiration (ET) of an experimental pine-switchgrass intercropping system in the Lower Coastal Plain of North Carolina. We measured ET of two pure switchgrass fields, two pure pine stands and two pine-switchgrass intercropping systems using a combined surface renewal (SR) and energy balance (EB) method throughout 2015. SR is based on high-frequency measurement of air temperature at or above the canopy. As previously demonstrated, temperature time series contain identifiable, repeated patterns called "turbulent coherent structures". These coherent structures are considered to be responsible for most of the turbulent transport. Statistical analysis of the coherent structures in temperature time series allows quantification of the sensible heat flux density (H) from the investigated area. Information about H can be combined with measurements of net radiation and soil heat flux density to indirectly obtain ET estimates as the residual of the energy balance equation. Despite recent progress in the SR method, there is no standard methodology, and each available method includes assumptions which require more research. To validate our SR estimates of ET, we used an eddy covariance (EC) system placed temporarily next to each SR station as a comparative measurement of H. The conference contribution will include: i) an evaluation of the SR method against EC; ii) a comparison of different SR calculation procedures, including the application of various thermocouple sizes and measurement heights; iii) quantification of ET for the three investigated ecosystems; iv) analysis of diurnal and seasonal ET variation with respect to weather conditions.
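For reference, the energy-balance residual step described above is straightforward: with the sensible heat flux H estimated from the surface-renewal analysis, the latent heat flux follows as LE = Rn - G - H and ET = LE/λ. The sketch below assumes illustrative half-hourly flux values and a nominal latent heat of vaporization; it is not the authors' processing code.

```python
# Energy-balance residual step used with surface renewal (illustrative).
LAMBDA_V = 2.45e6   # latent heat of vaporization, J/kg (approx., ~20 C)

def et_from_energy_balance(Rn, G, H):
    """Return evapotranspiration (kg/m2/s, i.e. mm/s of water) from net
    radiation Rn, soil heat flux G and sensible heat flux H, all in W/m2:
    LE = Rn - G - H,  ET = LE / lambda_v."""
    LE = Rn - G - H
    return LE / LAMBDA_V

# example half-hour: Rn = 450 W/m2, G = 40 W/m2, H = 180 W/m2 (made up)
et = et_from_energy_balance(450.0, 40.0, 180.0)
print(f"ET = {et * 1800:.3f} mm per 30 min")   # 1 kg/m2 = 1 mm of water
```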
Regression relation for pure quantum states and its implications for efficient computing.
Elsayed, Tarek A; Fine, Boris V
2013-02-15
We obtain a modified version of the Onsager regression relation for the expectation values of quantum-mechanical operators in pure quantum states of isolated many-body quantum systems. We use the insights gained from this relation to show that high-temperature time correlation functions in many-body quantum systems can be controllably computed without complete diagonalization of the Hamiltonians, using instead the direct integration of the Schrödinger equation for randomly sampled pure states. This method is also applicable to quantum quenches and other situations describable by time-dependent many-body Hamiltonians. The method implies exponential reduction of the computer memory requirement in comparison with the complete diagonalization. We illustrate the method by numerically computing infinite-temperature correlation functions for translationally invariant Heisenberg chains of up to 29 spins 1/2. Thereby, we also test the spin diffusion hypothesis and find it in a satisfactory agreement with the numerical results. Both the derivation of the modified regression relation and the justification of the computational method are based on the notion of quantum typicality.
Lin, Hongli; Yang, Xuedong; Wang, Weisheng
2014-08-01
Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance the existing trainee-case ratings data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF, while not inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets, and the experimental results are evaluated in terms of the mean absolute error (MAE) metric. Our experimental results show that the CBCF outperforms the pure CBF and CF methods by 13.33% and 12.17%, respectively, in terms of prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.
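The paper's exact CBF model and weighting scheme are not given in the abstract; the sketch below only illustrates the generic content-boosted CF pattern it describes: a content-based step first fills the missing entries of the trainee-case rating matrix from case-feature similarity, and a user-based CF step then predicts from the densified matrix. All data and parameter choices are illustrative.

```python
import numpy as np

def cbcf_predict(ratings, case_feats, target_user, target_item, k=2):
    """Content-boosted CF sketch: ratings is trainee x case with np.nan
    for unrated cases; case_feats is case x feature."""
    R = ratings.copy()
    n_users = R.shape[0]
    sim_items = case_feats @ case_feats.T              # case-case similarity
    for u in range(n_users):                           # 1) CBF fill-in
        rated = ~np.isnan(R[u])
        for i in np.where(~rated)[0]:
            w = sim_items[i, rated]
            R[u, i] = np.dot(w, ratings[u, rated]) / (np.abs(w).sum() + 1e-9)
    v = R[target_user]                                 # 2) user-based CF
    sims = np.array([np.corrcoef(v, R[u])[0, 1] if u != target_user
                     else -np.inf for u in range(n_users)])
    nbrs = np.argsort(sims)[-k:]                       # k nearest trainees
    w = sims[nbrs]
    return np.dot(w, R[nbrs, target_item]) / (np.abs(w).sum() + 1e-9)

ratings = np.array([[4.0, np.nan, 2.0, 5.0],
                    [4.0, 1.0, np.nan, 5.0],
                    [1.0, 5.0, 4.0, np.nan]])
case_feats = np.array([[1, 0], [0, 1], [0.2, 0.9], [1, 0.1]], float)
print("predicted difficulty:", cbcf_predict(ratings, case_feats, 0, 1))
```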
Penninx, Brenda W J H; Nolen, Willem A; Lamers, Femke; Zitman, Frans G; Smit, Johannes H; Spinhoven, Philip; Cuijpers, Pim; de Jong, Peter J; van Marwijk, Harm W J; van der Meer, Klaas; Verhaak, Peter; Laurant, Miranda G H; de Graaf, Ron; Hoogendijk, Witte J; van der Wee, Nic; Ormel, Johan; van Dyck, Richard; Beekman, Aartjan T F
2011-09-01
Whether the course trajectories of depressive and anxiety disorders differ remains an important question for clinical practice and informs future psychiatric nosology. This longitudinal study compares depressive and anxiety disorders in terms of diagnostic and symptom course trajectories, and examines clinical prognostic factors. Data are from 1209 depressive and/or anxiety patients residing in primary and specialized care settings, participating in the Netherlands Study of Depression and Anxiety. Diagnostic and Life Chart Interviews provided 2-year course information. Course was more favorable for pure depression (n=267, median episode duration = 6 months, 24.5% chronic) than for pure anxiety (n=487, median duration = 16 months, 41.9% chronic). The worst course was observed in the comorbid depression-anxiety group (n=455, median duration > 24 months, 56.8% chronic). Independent predictors of poor diagnostic and symptom trajectory outcomes were severity and duration of the index episode, comorbid depression-anxiety, earlier onset age and older age. With only these factors, a reasonable discriminative ability (C-statistic 0.72-0.77) was reached in predicting 2-year prognosis. The depression and anxiety cases studied were prevalent - not incident - cases; this, however, reflects the actual patient population in primary and specialized care settings. Their differential course trajectories justify separate consideration of pure depression, pure anxiety and comorbid anxiety-depression in clinical practice and psychiatric nosology. Copyright © 2011 Elsevier B.V. All rights reserved.
Growth and characterization of pure and glycine doped cadmium thiourea sulphate (GCTS) crystals
NASA Astrophysics Data System (ADS)
Lawrence, M.; Thomas Joseph Prakash, J.
2012-06-01
Pure and glycine-doped cadmium thiourea sulphate (GCTS) single crystals were grown successfully by the slow evaporation method at room temperature. The concentration of the dopant in the mother solution was 1 mol%. A change in the unit-cell parameters was observed on doping. The Fourier transform infrared spectroscopy study confirms the incorporation of glycine into the CTS crystal. The doped crystals are optically better and more transparent than the pure ones. The dopant increases the hardness value of the material. The grown crystals were also subjected to thermal and NLO studies.
On the detection of other planetary systems by astrometric techniques
NASA Technical Reports Server (NTRS)
Black, D. C.; Scargle, J. D.
1982-01-01
A quantitative method for astrometrically detecting perturbations induced in a star's motion by the presence of a planetary object is described. A periodogram is defined, wherein signals observed from a star show exactly periodic variations, which can be extracted from observational data using purely statistical methods. A detection threshold is defined for the frequency of occurrence of some detectable signal, e.g., the Nyquist frequency. Possible effects of a stellar orbital eccentricity and multiple companions are discussed, noting that assumption of a circular orbit assures the spectral purity of the signal described. The periodogram technique was applied to 12 yr of astrometric data from the U.S. Naval Observatory for three stars with low mass stellar companions. Periodic perturbations were confirmed. A comparison of the accuracy of different astrometric systems shows that the detection accuracy of a system is determined by the measurement accuracy and the number of observations, although the detection efficiency can be maximized by minimizing the number of data points for the case when observational errors are proportional to the square root of the number of data points. It is suggested that a space-based astrometric telescope is best suited to take advantage of the method.
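As an illustration of the periodogram approach (a generic Lomb-Scargle form, not necessarily the paper's exact definition), the sketch below recovers a periodic astrometric perturbation from unevenly sampled synthetic positions; the amplitudes, noise level and observation epochs are made up.

```python
import numpy as np

def periodogram(t, x, freqs):
    """Lomb-Scargle periodogram of unevenly sampled data x(t)."""
    x = x - x.mean()
    P = []
    for w in 2 * np.pi * freqs:
        tau = np.arctan2(np.sum(np.sin(2*w*t)), np.sum(np.cos(2*w*t))) / (2*w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        P.append(0.5 * ((x @ c)**2 / (c @ c) + (x @ s)**2 / (s @ s)))
    return np.array(P)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 12.0, 150))            # ~12 yr of epochs (yr)
x = 0.02 * np.sin(2*np.pi*t/1.9) + 0.01 * rng.standard_normal(t.size)  # arcsec
freqs = np.linspace(0.05, 2.0, 400)               # trial freqs, cycles/yr
P = periodogram(t, x, freqs)
print(f"peak at period {1/freqs[np.argmax(P)]:.2f} yr")   # expect ~1.9 yr
```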
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
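A minimal sketch of the kind of time-domain stochastic jump process described (not the paper's calibrated algorithm): event times are drawn from a Poisson process and event sizes from a power-law distribution, both via the inverse transform method, and accumulated into a stress-like signal at audio rate. All parameter values are illustrative.

```python
import numpy as np

def jump_process(duration, rate, s_min, alpha, fs=44100, seed=0):
    """Stochastic jump process: Poisson event times (inverse-transform
    sampled exponential gaps) with Pareto-distributed jump sizes."""
    rng = np.random.default_rng(seed)
    sig = np.zeros(int(duration * fs))
    t = 0.0
    while True:
        t += -np.log(rng.random()) / rate                # exponential gap
        if t >= duration:
            break
        size = s_min * rng.random() ** (-1.0 / (alpha - 1.0))  # power law
        sig[int(t * fs):] += size                        # stress jump
    return sig

sig = jump_process(duration=1.0, rate=200.0, s_min=1e-3, alpha=2.5)
print(f"{(np.diff(sig) > 0).sum()} events, final level {sig[-1]:.3f}")
```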
Impact of risk attitudes and perception on game theoretic driving interactions and safety.
Arbis, David; Dixit, Vinayak V; Rashidi, Taha Hossein
2016-09-01
This study employs game theory to investigate behavioural norms of interaction between drivers at a signalised intersection. The choice framework incorporates drivers' risk perception as well as their risk attitudes. A laboratory experiment is conducted to study the impact of risk attitudes and perception in crossing behaviour at a signalised intersection. The laboratory experiment uses methods from experimental economics to induce incentives and study revealed behaviour. Conflicting drivers are considered to have symmetric disincentives for crashing, to represent a no-fault car insurance environment. The study is novel as it uses experimental data collection methods to investigate perceived risk. Further, it directly integrates perceived risk of crashing with other active drivers into the modelling structure. A theoretical model of intersection crossing behaviour is also developed in this paper. This study shows that right-of-way entitlements assigned without authoritative penalties to at-fault drivers may still improve perceptions of safety. Further, risk aversion amongst drivers attributes to manoeuvring strategies at or below Nash mixed strategy equilibrium. These findings offer a theoretical explanation for interactive manoeuvres that lead to crashes, as opposed to purely statistical methods which provide correlation but not necessarily explanation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Homogenising time series: beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2011-06-01
In recent decades various homogenisation methods have been developed, but the real effects of their application on time series are still not sufficiently known. The ongoing COST action HOME (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics approach well the characteristics of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. Empirical results show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities often have statistical characteristics similar to those of natural changes caused by climatic variability; thus the pure application of the classic theory, that change-points of observed time series can be found and corrected one-by-one, is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw time series. Some problems around detecting multiple structures of inhomogeneities, as well as time series comparisons within homogenisation procedures, are discussed briefly in the study.
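For concreteness, the sketch below runs the standard normal homogeneity test (SNHT) statistic, one classic single-break detector, on a synthetic difference series. It is not one of the COST HOME benchmark methods; it only illustrates the point above that a large break is easy to locate while a small one is statistically indistinguishable from climatic variability.

```python
import numpy as np

def snht(q):
    """SNHT statistic T(k) = k*mean(z[:k])**2 + (n-k)*mean(z[k:])**2
    on the standardized series; returns the most likely break index."""
    z = (q - q.mean()) / q.std()
    n = len(z)
    T = np.array([k * z[:k].mean()**2 + (n - k) * z[k:].mean()**2
                  for k in range(1, n)])
    return T.argmax() + 1, T.max()

rng = np.random.default_rng(4)
series = rng.standard_normal(100)
series[60:] += 1.5                        # large, detectable break
print("break at index %d, T = %.1f" % snht(series))
series2 = rng.standard_normal(100)
series2[60:] += 0.2                       # small break ~ natural variability
print("break at index %d, T = %.1f" % snht(series2))
```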
USDA-ARS's Scientific Manuscript database
The objective of this research was to evaluate and develop a method for inactivation of Salmonella enterica and Listeria monocytogenes in cantaloupe puree (CP) by high hydrostatic pressure (HHP). Cantaloupe, being the most netted variety of melon, presents a greater risk of pathogen transmission. ...
Experimental and theoretical study of pure and doped crystals: Gd2O2S, Gd2O2S:Eu3+ and Gd2O2S:Tb3+
NASA Astrophysics Data System (ADS)
Wang, Fei; Chen, Xiumin; Liu, Dachun; Yang, Bin; Dai, Yongnian
2012-08-01
Quantum chemical calculations and experimental methods were used to study pure and doped Gd2O2S crystals in this paper. The band structure and DOS diagrams of pure and doped Gd2O2S crystals, calculated using the DFT (Density Functional Theory) method, are presented to explain the luminescent properties of the impurities in the crystals. The crystal-structure calculations were performed with the CASTEP (Cambridge Sequential Total Energy Package) program. The samples showed the characteristic emissions of Tb3+ ions (5D4-7FJ transitions) and Eu3+ ions (5D0-7FJ transitions), which emit pure green and red luminescence, respectively. The experimental excitation spectra of Tb3+- and Eu3+-doped Gd2O2S are in agreement with the DOS diagrams over the explored energy range, which has allowed a better understanding of the different luminescence mechanisms of Tb3+ and Eu3+ in Gd2O2S crystals.
Yang, Yu; Strickland, Zackary; Kapalavavi, Brahmam; Marple, Ronita; Gamsky, Chris
2011-03-15
In this work, chromatographic separation of niacin and niacinamide using pure water as the sole component in the mobile phase has been investigated. The separation and analysis of niacinamide have been optimized using three columns at different temperatures and various flow rates. Our results clearly demonstrate that separation and analysis of niacinamide from skincare products can be achieved using pure water as the eluent at 60°C on a Waters XTerra MS C18 column, a Waters XBridge C18 column, or at 80°C on a Hamilton PRP-1 column. The separation efficiency, quantification quality, and analysis time of this new method are at least comparable with those of the traditional HPLC methods. Compared with traditional HPLC, the major advantage of this newly developed green chromatography technique is the elimination of organic solvents required in the HPLC mobile phase. In addition, the pure water chromatography separations described in this work can be directly applied in industrial plant settings without further modification of the existing HPLC equipment. Copyright © 2011 Elsevier B.V. All rights reserved.
Shang, Longan; Jiang, Min; Ryu, Chul Hee; Chang, Ho Nam; Cho, Soon Haeng; Lee, Jong Won
2003-08-05
In order to see the effect of CO(2) inhibition resulting from the use of pure oxygen, we carried out a comparative fed-batch culture study of polyhydroxybutyric acid (PHB) production by Ralstonia eutropha using air and pure oxygen in 5-L, 30-L, and 300-L fermentors. The final PHB concentrations obtained with pure O(2) were 138.7 g/L in the 5-L fermentor and 131.3 g/L in the 30-L fermentor, which increased 2.9 and 6.2 times, respectively, as compared to those obtained with air. In the 300-L fermentor, the fed-batch culture with air yielded only 8.4 g/L PHB. However, the maximal CO(2) concentrations in the 5-L fermentor increased significantly from 4.1% (air) to 15.0% (pure O(2)), while it was only 1.6% in the 30-L fermentor with air, but reached 14.2% in the case of pure O(2). We used two different experimental methods for evaluating CO(2) inhibition: CO(2) pulse injection and autogenous CO(2) methods. A 10 or 22% (v/v) CO(2) pulse with a duration of 3 or 6 h was introduced in a pure-oxygen culture of R. eutropha to investigate how CO(2) affects the synthesis of biomass and PHB. CO(2) inhibited the cell growth and PHB synthesis significantly. The inhibitory effect became stronger with the increase of the CO(2) concentration and pulse duration. The new proposed autogenous CO(2) method makes it possible to place microbial cells under different CO(2) level environments by varying the gas flow rate. Introduction of O(2) gas at a low flow rate of 0.42 vvm resulted in an increase of CO(2) concentration to 30.2% in the exit gas. The final PHB of 97.2 g/L was obtained, which corresponded to 70% of the PHB production at 1.0 vvm O(2) flow rate. This new method measures the inhibitory effect of CO(2) produced autogenously by cells through the entire fermentation process and can avoid the overestimation of CO(2) inhibition without introducing artificial CO(2) into the fermentor. Copyright 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 83: 312-320, 2003.
2015-01-01
Chemoenzymatic dynamic kinetic resolution (DKR) constitutes a convenient and efficient method to access enantiomerically pure alcohol and amine derivatives. This Perspective highlights the work carried out within this field during the past two decades and pinpoints important avenues for future research. First, the Perspective will summarize the more developed area of alcohol DKR, by delineating the way from the earliest proof-of-concept protocols to the current state-of-the-art systems that allows for the highly efficient and selective preparation of a wide range of enantiomerically pure alcohol derivatives. Thereafter, the Perspective will focus on the more challenging DKR of amines, by presenting the currently available homogeneous and heterogeneous methods and their respective limitations. In these two parts, significant attention will be dedicated to the design of efficient racemization methods as an important means of developing milder DKR protocols. In the final part of the Perspective, a brief overview of the research that has been devoted toward improving enzymes as biocatalysts is presented. PMID:25730714
Bharath, Nagaraj; Sowmya, Nagur Karibasappa; Mehta, Dhoom Singh
2015-01-01
Background: The aim of this study was to evaluate the antibacterial activity of pure green coffee bean extract on the periodontopathogenic bacteria Porphyromonas gingivalis (Pg), Prevotella intermedia (Pi), Fusobacterium nucleatum (Fn) and Aggregatibacter actinomycetemcomitans (Aa). Materials and Methods: Minimum inhibitory concentrations (MICs) and minimum bactericidal concentrations (MBCs) were used to assess the antibacterial effect of pure green coffee bean extract against the periodontopathogenic bacteria by the microdilution method and culture method, respectively. Results: The MIC values for Pg, Pi and Aa were 0.2 μg/mL, whereas Fn was sensitive at a concentration of 3.125 μg/mL. The MBC values mirrored the MIC values. Conclusion: The antimicrobial activity of pure green coffee bean extract against Pg, Pi, Fn and Aa suggests that it could be recommended as an adjunct to mechanical therapy in the management of periodontal disease. PMID:26097349
NASA Astrophysics Data System (ADS)
Latha, B.; Kumaresan, P.; Nithiyanantham, S.; Sampathkumar, K.
2017-08-01
In the present work, a systematic study has been carried out on the growth of pure and coumarin-doped tetrafluorophthalate (TFP) crystals. Powder X-ray diffraction studies were performed, and the lattice parameters of the pure and doped crystals were computed by the least-squares method. FT-IR, UV-Vis, thermal, micro-hardness and dielectric studies were also carried out for the pure and doped crystals. The experimentally observed FT-IR and FT-Raman bands were assigned to the various normal modes of the molecule. The stability and charge delocalization of the molecule were likewise studied by natural bond orbital (NBO) analysis. The HOMO-LUMO energies describe the charge transfer occurring within the molecule. The molecular electrostatic potential was analyzed, and electronic properties such as excitation energies, oscillator strengths, wavelengths and HOMO-LUMO energies were obtained by the time-dependent DFT (TD-DFT) approach. The SHG of the pure and doped TFP crystals was examined using a Q-switched Nd:YAG laser.
2D dark-count-rate modeling of PureB single-photon avalanche diodes in a TCAD environment
NASA Astrophysics Data System (ADS)
Knežević, Tihomir; Nanver, Lis K.; Suligoj, Tomislav
2018-02-01
PureB silicon photodiodes have nm-shallow p+n junctions with which photons/electrons with penetration depths of a few nanometers can be detected. PureB Single-Photon Avalanche Diodes (SPADs) were fabricated and analysed by 2D numerical modeling as an extension to TCAD software. The very shallow p+ anode has a high perimeter curvature that enhances the electric field. In SPADs, noise is quantified by the dark count rate (DCR), a measure of the number of false counts triggered by unwanted processes in the non-illuminated device. Just as for desired events, the probability of a dark count increases with increasing electric field, and the perimeter conditions are critical. In this work, the DCR was studied by two 2D methods of analysis: the "quasi-2D" (Q-2D) method, where vertical 1D cross-sections were assumed for calculating the electron/hole avalanche probabilities, and the "ionization-integral 2D" (II-2D) method, where cross-sections were placed where the maximum ionization integrals were calculated. The Q-2D method gave satisfactory results in structures where the peripheral regions had a small contribution to the DCR, such as in devices with conventional deep-junction guard rings (GRs). Otherwise, the II-2D method proved to be much more precise. The results show that the DCR simulation methods are useful for optimizing the compromise between fill-factor and p-/n-doping profile design in SPAD devices. For the experimentally investigated PureB SPADs, excellent agreement between the measured and simulated DCR was achieved. This shows that although an implicit GR is attractively compact, the very shallow pn-junction gives a risk of having such a low breakdown voltage at the perimeter that the DCR of the device may be negatively impacted.
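The ionization-integral criterion at the heart of the II-2D method can be illustrated in one dimension: along a cross-section with field profile E(x), the electron ionization integral of α_n(E(x)) dx approaches 1 at breakdown, and cross-sections are placed where the integral is maximal. The Chynoweth-law coefficients and the field profile below are illustrative assumptions, not calibrated device values.

```python
import numpy as np

# Chynoweth-law coefficients (illustrative, silicon-like; not calibrated)
A_N, B_N = 7.0e5, 1.23e6                      # 1/cm and V/cm

def ionization_integral(x, E):
    """Trapezoidal integral of alpha_n(E(x)) dx with alpha = A*exp(-B/E)."""
    alpha = A_N * np.exp(-B_N / np.maximum(E, 1.0))
    return np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(x))

x = np.linspace(0.0, 1e-4, 2000)              # 1 um cross-section, in cm
for E_peak in (3e5, 5e5, 7e5):                # peak fields, V/cm
    E = E_peak * np.exp(-((x - 2e-5) / 2e-5) ** 2)   # bell-shaped profile
    print(f"E_peak = {E_peak:.0e} V/cm  ->  I = {ionization_integral(x, E):.3f}")
# cross-sections are placed where I is maximal; I -> 1 marks avalanche
# breakdown, and higher I goes with a higher avalanche (and dark-count) rate
```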
Koike, Mari; Hummel, Susan K; Ball, John D; Okabe, Toru
2012-06-01
Although pure titanium is known to have good biocompatibility, a titanium alloy with better strength is needed for fabricating clinically acceptable, partial removable dental prosthesis (RDP) frameworks. The mechanical properties of an experimental Ti-5Al-5Cu alloy cast with a 2-step investment technique were examined for RDP framework applications. Patterns for tests for various properties and denture frameworks for a preliminary trial casting were invested with a 2-step coating method using 2 types of mold materials: a less reactive spinel compound (Al(2)O(3)·MgO) and a less expensive SiO(2)-based material. The yield and tensile strength (n=5), modulus of elasticity (n=5), elongation (n=5), and hardness (n=8) of the cast Ti-5Al-5Cu alloy were determined. The external appearance and internal porosities of the preliminary trial castings of denture frameworks (n=2) were examined with a conventional dental radiographic unit. Cast Ti-6Al-4V alloy and commercially pure titanium (CP Ti) were used as controls. The data for the mechanical properties were statistically analyzed with 1-way ANOVA (α=.05). The yield strength of the cast Ti-5Al-5Cu alloy was 851 MPa and the hardness was 356 HV. These properties were comparable to those of the cast Ti-6Al-4V and were higher than those of CP Ti (P<.05). One of the acrylic resin-retention areas of the Ti-5Al-5Cu frameworks was found to have been incompletely cast. The cast biocompatible experimental Ti-5Al-5Cu alloy exhibited high strength when cast with a 2-step coating method. With a dedicated study to determine the effect of sprue design on the quality of castings, biocompatible Ti-5Al-5Cu RDP frameworks for a clinical trial can be produced. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Method for rapid, controllable growth and thickness, of epitaxial silicon films
Wang, Qi [Littleton, CO; Stradins, Paul [Golden, CO; Teplin, Charles [Boulder, CO; Branz, Howard M [Boulder, CO
2009-10-13
A method of producing epitaxial silicon films on a c-Si wafer substrate using hot wire chemical vapor deposition by controlling the rate of silicon deposition in a temperature range that spans the transition from a monohydride to a hydrogen free silicon surface in a vacuum, to obtain phase-pure epitaxial silicon film of increased thickness is disclosed. The method includes placing a c-Si substrate in a HWCVD reactor chamber. The method also includes supplying a gas containing silicon at a sufficient rate into the reaction chamber to interact with the substrate to deposit a layer containing silicon thereon at a predefined growth rate to obtain phase-pure epitaxial silicon film of increased thickness.
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
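A minimal synthesis sketch of such a process (all parameter values are arbitrary): one-sided exponential pulses arrive at Poisson-distributed times, and a purely additive observation-noise variant is formed afterwards; the dynamical-noise variant of the paper would instead enter the stochastic differential equation itself. For this model the mean equals γ<a>, with intermittency parameter γ = rate × τ_d.

```python
import numpy as np

def filtered_poisson(duration, rate, tau_d, dt=1e-3, seed=1):
    """Filtered Poisson process: exponential pulses (decay time tau_d,
    exponentially distributed amplitudes) at Poisson arrival times."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration, dt)
    sig = np.zeros_like(t)
    n_events = rng.poisson(rate * duration)
    for tk, a in zip(rng.uniform(0, duration, n_events),
                     rng.exponential(1.0, n_events)):
        m = t >= tk
        sig[m] += a * np.exp(-(t[m] - tk) / tau_d)   # one-sided pulse
    return t, sig

t, s = filtered_poisson(duration=100.0, rate=5.0, tau_d=0.1)
rng = np.random.default_rng(2)
s_add = s + 0.1 * rng.standard_normal(s.size)        # additive-noise variant
print(f"mean = {s.mean():.3f}, rms = {s.std():.3f} (gamma = rate*tau_d = 0.5)")
```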
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.; Smith, Richard N.; Marsh, Steven P.; Kuklinski, Robert
A key element of mushy zone modeling is the description of the microscopic evolution of the lengthscales within the mushy zone and the influence of macroscopic transport processes. This paper describes some recent progress in developing a mean-field statistical theory of phase coarsening in adiabatic mushy zones. The main theoretical predictions are temporal scaling laws indicating that the average lengthscale increases as t^(1/3), a self-similar distribution of mushy zone lengthscales based on spherical solid particle shapes, and kinetic rate constants which provide the dependences of the coarsening process on material parameters and the volume fraction of the solid phase. High-precision thermal decay experiments are described which verify aspects of the theory in pure-material mushy zones held under adiabatic conditions. The microscopic coarsening theory is then integrated within a macroscopic heat transfer model of one-dimensional alloy solidification, using the Double Integral Method. The method demonstrates an ability to predict the influence of macroscopic heat transfer on the evolution of primary and secondary dendrite arm spacings in Al-Cu alloys. Finally, some suggestions are made for future experimental and theoretical studies required in developing comprehensive solidification processing models.
Rixen, M.; Ferreira-Coelho, E.; Signell, R.
2008-01-01
Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models is discussed by analyzing the associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
NASA Astrophysics Data System (ADS)
Cornaglia, Matteo; Mouchiroud, Laurent; Marette, Alexis; Narasimhan, Shreya; Lehnert, Thomas; Jovaisaite, Virginija; Auwerx, Johan; Gijs, Martin A. M.
2015-05-01
Studies of the real-time dynamics of embryonic development require a gentle embryo handling method, the possibility of long-term live imaging during the complete embryogenesis, as well as of parallelization providing a population’s statistics, while keeping single embryo resolution. We describe an automated approach that fully accomplishes these requirements for embryos of Caenorhabditis elegans, one of the most employed model organisms in biomedical research. We developed a microfluidic platform which makes use of pure passive hydrodynamics to run on-chip worm cultures, from which we obtain synchronized embryo populations, and to immobilize these embryos in incubator microarrays for long-term high-resolution optical imaging. We successfully employ our platform to investigate morphogenesis and mitochondrial biogenesis during the full embryonic development and elucidate the role of the mitochondrial unfolded protein response (UPRmt) within C. elegans embryogenesis. Our method can be generally used for protein expression and developmental studies at the embryonic level, but can also provide clues to understand the aging process and age-related diseases in particular.
Karbasi, Saeed; Khorasani, Saied Nouri; Ebrahimi, Somayeh; Khalili, Shahla; Fekrat, Farnoosh; Sadeghi, Davoud
2016-01-01
Background: Poly(hydroxybutyrate) (PHB) is a biodegradable and biocompatible polymer with good mechanical properties. This polymer could be a promising material for scaffolds if some features are improved. Materials and Methods: In the present work, new PHB/chitosan blend scaffolds were prepared as a three-dimensional substrate for cartilage tissue engineering. Chitosan in different weight percentages was added to PHB and dissolved in trifluoroacetic acid. The statistical Taguchi method was employed in the design of experiments. Results: The Fourier-transform infrared spectroscopy test revealed that the crystallization of PHB in these blends is suppressed with increasing amounts of chitosan. Scanning electron microscopy images showed a thin and rough top layer with a nodular structure, supported by a porous sub-layer, at the surface of the scaffolds. The in vitro degradation rate of the scaffolds was higher than that of pure PHB scaffolds. The maximum degradation rate was seen for the scaffold with 90 wt% NaCl and 40 wt% chitosan. Conclusions: The obtained results suggest that these newly developed PHB/chitosan blend scaffolds may serve as a three-dimensional substrate in cartilage tissue engineering. PMID:28028517
NASA Astrophysics Data System (ADS)
Bisen, Supriya; Mishra, Ashutosh; Jarabana, Kanaka M.
2016-05-01
In this work, barium titanate (BaTiO3) powders were synthesized via a sol-gel auto-combustion method using citric acid as a chelating agent. We study the ferroelectric and dielectric properties of pure and doped BaTiO3 at different dopant concentrations. To determine the phase and structure, the powders calcined at 900°C were characterized by X-ray diffraction, which shows that the tetragonal phase is dominant for both pure and doped BTO; the data were fitted by Rietveld refinement. Electric and dielectric properties were characterized by P-E hysteresis and dielectric measurements; the P-E loops were recorded with a ferroelectric loop tracer at different applied voltages. The temperature-dependent dielectric constant was recorded as a function of frequency on a Hewlett-Packard 4192A LF impedance analyzer.
Automated frequency analysis of synchronous and diffuse sleep spindles.
Huupponen, Eero; Saastamoinen, Antti; Niemi, Jukka; Virkkala, Jussi; Hasan, Joel; Värri, Alpo; Himanen, Sari-Leena
2005-01-01
Sleep spindles have different properties at different localizations in the cortex. The first main objective was to develop an amplitude-independent multi-channel spindle detection method. Second, the method was applied to study the anteroposterior frequency differences of pure synchronous (visible bilaterally, either frontopolarly or centrally) and diffuse (visible bilaterally both frontopolarly and centrally) sleep spindles. A previously presented spindle detector based on the fuzzy reasoning principle and a level detector were combined to form a multi-channel spindle detector. The spindle detector had a 76.17% true-positive rate and a 0.93% false-positive rate. Pure central spindles were faster and pure frontal spindles were slower than diffuse spindles measured simultaneously from both locations. The study of the frequency relations of spindles might give new information about thalamocortical sleep spindle generating mechanisms. Copyright (c) 2005 S. Karger AG, Basel.
Pujeri, Sudhakar S.; Khader, Addagadde M. A.; Seetharamappa, Jaldappagari
2012-01-01
A simple, rapid and stability-indicating reversed-phase liquid chromatographic method was developed for the assay of varenicline tartrate (VRT) in the presence of its degradation products generated from forced decomposition studies. The HPLC separation was achieved on an Inertsil C18 column (250 mm × 4.6 mm i.d., 5 μm particle size) employing a mobile phase consisting of ammonium acetate buffer containing trifluoroacetic acid (0.02 M; pH 4) and acetonitrile in gradient mode at a flow rate of 1.0 mL min⁻¹. The UV detector was operated at 237 nm while the column temperature was maintained at 40 °C. The developed method was validated as per ICH guidelines with respect to specificity, linearity, precision, accuracy, robustness and limit of quantification. The method was found to be simple, specific, precise and accurate. The selectivity of the proposed method was validated by subjecting the stock solution of VRT to acidic, basic, photolytic, oxidative and thermal degradation. The calibration curve was linear over the concentration range 0.1–192 μg mL⁻¹ (R² = 0.9994). The peaks of the degradation products did not interfere with that of pure VRT. The utility of the developed method was examined by analyzing tablets containing VRT. The results of the analysis were subjected to statistical analysis. PMID:22396908
Malavi, Derick Nyabera; Muzhingi, Tawanda; Abong', George Ooko
2018-01-01
Limited information exists on the status of hygiene and probable sources of microbial contamination in Orange Fleshed Sweet Potato (OFSP) puree processing. The current study is aimed at determining the level of compliance to Good Manufacturing Practices (GMPs), hygiene, and microbial quality in OFSP puree processing plant in Kenya. Intensive observation and interviews using a structured GMPs checklist, environmental sampling, and microbial analysis by standard microbiological methods were used in data collection. The results indicated low level of compliance to GMPs with an overall compliance score of 58%. Microbial counts on food equipment surfaces, installations, and personnel hands and in packaged OFSP puree were above the recommended microbial safety and quality legal limits. Steaming significantly (P < 0.05) reduced microbial load in OFSP cooked roots but the counts significantly (P < 0.05) increased in the puree due to postprocessing contamination. Total counts, yeasts and molds, Enterobacteriaceae, total coliforms, and E. coli and S. aureus counts in OFSP puree were 8.0, 4.0, 6.6, 5.8, 4.8, and 5.9 log10 cfu/g, respectively. In conclusion, equipment surfaces, personnel hands, and processing water were major sources of contamination in OFSP puree processing and handling. Plant hygiene inspection, environmental monitoring, and food safety trainings are recommended to improve hygiene, microbial quality, and safety of OFSP puree.
Eapen, Valsamma; Robertson, Mary M
2015-01-01
This study addressed several questions relating to the core features of Tourette syndrome (TS) including in particular coprolalia (involuntary utterance of obscene words) and copropraxia (involuntary and inappropriate rude gesturing). A cohort of 400 TS patients was investigated. We observed that coprolalia occurred in 39% of the full cohort of 400 patients and copropraxia occurred in 20% of the cohort. Those with coprolalia had significantly higher Yale Global Tic Severity Scale (YGTSS) and Diagnostic Confidence Index (DCI) total scores and a significantly higher proportion also experienced copropraxia and echolalia. A subgroup of 222 TS patients with full comorbidity data available were also compared based on whether they had pure-TS (motor and vocal tics only) or associated comorbidities and co-existent psychopathologies (TS-plus). Pure-TS and TS-plus groups were compared across a number of characteristics including TS severity, associated clinical features, and family history. In this subgroup, 13.5% had pure-TS, while the remainder had comorbidities and psychopathologies consistent with TS-plus. Thirty-nine percent of the TS-plus group displayed coprolalia, compared to (0%) of the pure-TS group and the difference in proportions was statistically significant. The only other significant difference found between the two groups was that pure-TS was associated with no family history of obsessive compulsive disorder which is an interesting finding that may suggest that additional genes or environmental factors may be at play when TS is associated with comorbidities. Finally, differences between individuals with simple versus complex vocal/motor tics were evaluated. Results indicated that individuals with complex motor/vocal tics were significantly more likely to report premonitory urges/sensations than individuals with simple tics and TS. The implications of these findings for the assessment and understanding of TS are discussed. PMID:26089672
The Taguchi Method Application to Improve the Quality of a Sustainable Process
NASA Astrophysics Data System (ADS)
Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.
2018-06-01
The Taguchi method has long been used to improve the quality of the processes and products under analysis. This research addresses an unusual situation, namely the modeling of certain technical parameters in a process intended to be sustainable, improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of combining agricultural sustainability principles with application of the Taguchi method. The experimental method used in this practical study combines engineering techniques with statistical experimental modeling to achieve rapid improvement of quality costs, in effect seeking optimization of existing processes and of the main technical parameters. The paper is a purely technical study promoting an experiment based on the Taguchi method, which is considered effective because it rapidly achieves 70 to 90% of the desired optimization of the technical parameters. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying the Taguchi method allowed the most important influence factors to be studied simultaneously, in different combinations within the same experiment, while also determining each factor's contribution.
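As a generic illustration of the Taguchi workflow the paper applies (an orthogonal array, signal-to-noise ratios, and factor main effects), the sketch below analyzes a made-up experiment with four three-level factors on a standard L9(3^4) array and two replicates; the "larger-is-better" S/N definition is one standard Taguchi choice, not necessarily the one used in the paper.

```python
import numpy as np

# standard L9(3^4) orthogonal array: 9 runs x 4 factors, levels 0..2
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# made-up responses, 2 replicates per run
y = np.array([[12.1, 12.4], [14.0, 13.6], [15.2, 15.5],
              [13.1, 13.3], [15.8, 15.4], [12.9, 13.2],
              [16.1, 16.4], [13.5, 13.2], [14.8, 14.6]])

# larger-is-better signal-to-noise ratio: S/N = -10*log10(mean(1/y^2))
sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))

for f in range(4):                             # main effect of each factor
    effects = [sn[L9[:, f] == lev].mean() for lev in range(3)]
    print(f"factor {f}: level S/N means {np.round(effects, 2)}, "
          f"best level {int(np.argmax(effects))}")
```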
Methods for producing monodispersed particles of barium titanate
Hu, Zhong-Cheng
2001-01-01
The present invention is a low-temperature controlled method for producing high-quality, ultrafine, monodispersed nanocrystalline microsphere powders of barium titanate and other pure or composite oxide materials, with particle sizes ranging from the nanoscale to the micron scale. The method of the subject invention comprises a two-stage process. The first stage produces high-quality monodispersed hydrous titania microsphere particles prepared by homogeneous precipitation via dielectric tuning in alcohol-water mixed solutions of inorganic salts. Titanium tetrachloride is used as an inorganic salt precursor material. The second stage converts the pure hydrous titania microsphere particles into crystalline barium titanate microsphere powders via low-temperature hydrothermal reactions.
Narayan, Reema; Pednekar, Abhyuday; Bhuyan, Dipshikha; Gowda, Chaitra; Koteshwara, K B; Nayak, Usha Yogendra
2017-01-01
The aim of the present work was to tackle the solubility issue of a biopharmaceutics classification system (BCS)-II drug, aceclofenac. Although a number of attempts to increase the aqueous solubility have been made, none of the methods were taken up for scale-up. Hence size reduction technique by a top-down approach using wet milling process was utilized to improve the solubility and, consequently, the dissolution velocity of aceclofenac. The quality of the final product was ensured by Quality by Design approach wherein the effects of critical material attributes and critical process parameters were assessed on the critical quality attributes (CQAs) of nanocrystals. Box-Behnken design was applied to evaluate these effects on critical quality attributes. The optimized nanocrystals had a particle size of 484.7±54.12 nm with a polydispersity index (PDI) of 0.108±0.009. The solid state characterization of the formulation revealed that the crystalline nature of the drug was slightly reduced after the milling process. With the reduced particle size, the solubility of the nanocrystals was found to increase in both water and 0.1 N HCl when compared with that of unmilled pure aceclofenac. These results were further supported by in vitro release studies of nanocrystals where an appreciable dissolution velocity with 100.07%±2.38% release was observed for aceclofenac nanocrystals compared with 47.66%±4.53% release for pure unmilled aceclofenac at the end of 2 h. The in vivo pharmacokinetic data generated showed a statistically significant increase in the C max for aceclofenac nanocrystals of 3.75±0.28 µg/mL (for pure unmilled aceclofenac C max was 1.96±0.17 µg/mL). The results obtained indicated that the developed nanocrystals of aceclofenac were successful in improving the solubility, thus the absorption and bioavailability of the drug. Hence, it may be a viable and cost-effective alternative to the current therapy.
Eldridge, Joshua A.; Repko, Debra
2014-01-01
Abstract The purpose of these studies was to determine if a Büchi Mini Spray Dryer B-290 (Büchi Corporation, New Castle, DE, USA) could be used to prepare blackberry extract powders containing mannitol as a thermoprotectant without extensively degrading anthocyanins and polyphenols in the resulting powders. Three blackberry puree extract samples were each prepared by sonication of puree in 30/70% ethanol/water containing 0.003% HCl. Blackberry puree extract sample 1 (S1) contained no mannitol, while blackberry puree extract sample 2 (S2) contained 3.0:1 (w/w) mannitol:berry extract, and blackberry puree extract sample 3 (S3) contained 6.3:1 (w/w) mannitol:berry extract. The levels of anthocyanins and polyphenols in reconstituted spray-dried powders produced from S1–S3 were compared to solutions of S1–S3 that were held at 4°C as controls. All extract samples could be spray-dried using the Büchi Mini Spray Dryer B-290. S1, with no mannitol, showed a 30.8% decrease in anthocyanins and a 24.1% decrease in polyphenols following spray-drying. However, S2 had a reduction in anthocyanins of only 13.8%, while polyphenols were reduced by only 6.1%. S3, with a ratio of mannitol to berry extract of 6.3:1, exhibited a 12.5% decrease in anthocyanins while the decrease in polyphenols after spray-drying was not statistically significant (P=.16). Collectively, these data indicate that a Büchi Mini Spray Dryer B-290 is a suitable platform for producing stable berry extract powders, and that mannitol is a suitable thermoprotectant that facilitates retention of thermosensitive polyphenolic species in berry extracts during spray-drying. PMID:24892214
Thermodynamics of ideal quantum gas with fractional statistics in D dimensions.
Potter, Geoffrey G; Müller, Gerhard; Karbach, Michael
2007-06-01
We present exact and explicit results for the thermodynamic properties (isochores, isotherms, isobars, response functions, velocity of sound) of a quantum gas in dimensions D ≥ 1 and with fractional exclusion statistics 0 ≤ g ≤ 1 connecting bosons (g=0) and fermions (g=1). In D=1 the results are equivalent to those of the Calogero-Sutherland model. Emphasis is given to the crossover between bosonlike and fermionlike features, caused by aspects of the statistical interaction that mimic long-range attraction and short-range repulsion. A phase transition along the isobar occurs at a nonzero temperature in all dimensions. The T dependence of the velocity of sound is in simple relation to isochores and isobars. The effects of soft container walls are accounted for rigorously for the case of a pure power-law potential.
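For context, the occupation function for fractional (Haldane-Wu) exclusion statistics, the standard framework behind such results (a known formula, not quoted in the abstract), interpolates between the Bose (g=0) and Fermi (g=1) distributions:

$$ n(\epsilon) = \frac{1}{w(\epsilon) + g}, \qquad w(\epsilon)^{g}\,\bigl[1 + w(\epsilon)\bigr]^{1-g} = e^{(\epsilon-\mu)/k_{B}T}. $$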
Entanglement Entropy of Eigenstates of Quantum Chaotic Hamiltonians.
Vidmar, Lev; Rigol, Marcos
2017-12-01
In quantum statistical mechanics, it is of fundamental interest to understand how close the bipartite entanglement entropy of eigenstates of quantum chaotic Hamiltonians is to maximal. For random pure states in the Hilbert space, the average entanglement entropy is known to be nearly maximal, with a deviation that is, at most, a constant. Here we prove that, in a system that is away from half filling and divided into two equal halves, an upper bound for the average entanglement entropy of random pure states with a fixed particle number and normally distributed real coefficients exhibits a deviation from the maximal value that grows with the square root of the volume of the system. Exact numerical results for highly excited eigenstates of a particle number conserving quantum chaotic model indicate that the bound is saturated with increasing system size.
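For comparison, the unconstrained random-pure-state benchmark alluded to here is Page's average entanglement entropy (a standard result, not derived in the abstract): for subsystem dimensions $1 \ll d_A \le d_B$,

$$ \langle S_A \rangle \simeq \ln d_A - \frac{d_A}{2\,d_B}, $$

i.e. maximal up to an at-most-constant deficit, which is the deviation the theorem above contrasts with.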
Non-Born-Oppenheimer calculations of the pure vibrational spectrum of HeH+.
Pavanello, Michele; Bubin, Sergiy; Molski, Marcin; Adamowicz, Ludwik
2005-09-08
Very accurate calculations of the pure vibrational spectrum of the HeH(+) ion are reported. The method used does not assume the Born-Oppenheimer approximation, and the motions of both the electrons and the nuclei are treated on an equal footing. In such an approach the vibrational motion cannot be decoupled from the motion of the electrons, and thus the pure vibrational states are calculated as the states of the system with zero total angular momentum. The wave functions of the states are expanded in terms of explicitly correlated Gaussian basis functions multiplied by even powers of the internuclear distance. The calculations yielded twelve bound states and the corresponding eleven transition energies. These are compared with the pure vibrational transition energies extracted from the experimental rovibrational spectrum.
Charge fluctuations in nanoscale capacitors.
Limmer, David T; Merlet, Céline; Salanne, Mathieu; Chandler, David; Madden, Paul A; van Roij, René; Rotenberg, Benjamin
2013-09-06
The fluctuations of the charge on an electrode contain information on the microscopic correlations within the adjacent fluid and their effect on the electronic properties of the interface. We investigate these fluctuations using molecular dynamics simulations in a constant-potential ensemble with histogram reweighting techniques. This approach offers, in particular, an efficient, accurate, and physically insightful route to the differential capacitance that is broadly applicable. We demonstrate these methods with three different capacitors: pure water between platinum electrodes and a pure as well as a solvent-based organic electrolyte each between graphite electrodes. The total charge distributions with the pure solvent and solvent-based electrolytes are remarkably Gaussian, while in the pure ionic liquid the total charge distribution displays distinct non-Gaussian features, suggesting significant potential-driven changes in the organization of the interfacial fluid.
Radiation effects on β(10.6 μm) of pure and europium-doped KCl
NASA Technical Reports Server (NTRS)
Grimes, H. H.; Maisel, J. E.; Hartford, R. H.
1975-01-01
Changes in the optical absorption coefficient as the result of X-ray and electron bombardment of pure monocrystalline and polycrystalline KCl and of divalent-europium-doped polycrystalline KCl were determined. A constant heat flow calorimetric method was used to measure the optical absorption coefficients. Both 300 kV X-ray irradiation and 2 MeV electron irradiation produced increases in the optical absorption coefficient at room temperature. X-ray irradiation produced more significant changes in pure monocrystalline KCl than equivalent amounts of electron irradiation. Electron irradiation of pure and Eu-doped polycrystalline KCl produced increases in the absorption by as much as a factor of 20 over untreated material. Bleaching of the electron-irradiated doped KCl with 649 nm light produced a further increase.
NASA Astrophysics Data System (ADS)
Upadhyay, Neelam; Jaiswal, Pranita; Jha, Shyam Narayan
2018-02-01
Pure ghee is superior to other fats and oils due to the presence of bioactive lipids and its rich flavor. Adulteration of ghee with cheaper fats and oils is a prevalent fraudulent practice. ATR-FTIR spectroscopy was coupled with chemometrics to detect the presence of pig body fat in pure ghee. Pure mixed ghee was spiked with pig body fat at the 3, 4, 5, 10 and 15% levels. Spectra of the pure samples (ghee and pig body fat) and of the spiked samples were recorded in the mid-IR region from 4000 to 500 cm-1. Wavenumber ranges were selected on the basis of differences in the spectra obtained. Principal component analysis applied to the selected wavenumber ranges at the 5% level of significance yielded separate clusters of the samples. Probable class membership was predicted using the SIMCA approach. Approximately 90% of the samples were classified into their respective classes, and pure ghee and pig body fat were never misclassified. The R2 value was >0.99 for both calibration and validation sets using the partial least squares method. The study concluded that spiking of pig body fat in pure ghee can be detected even at a level of 3%.
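To illustrate the kind of chemometric step described above, here is a minimal PCA sketch on synthetic stand-in data (not the authors' code, spectra, or preprocessing):

```python
# Minimal sketch: PCA on ATR-FTIR-like spectra to separate pure ghee,
# pig body fat, and spiked samples. `spectra` is a hypothetical
# (n_samples, n_wavenumbers) absorbance matrix standing in for real data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.normal(size=(30, 1751))          # stand-in for 4000-500 cm^-1 data
labels = ["ghee"] * 10 + ["fat"] * 10 + ["spiked"] * 10

X = spectra - spectra.mean(axis=0)             # mean-center each wavenumber
scores = PCA(n_components=2).fit_transform(X)  # project onto first two PCs

for lab in set(labels):
    idx = [i for i, l in enumerate(labels) if l == lab]
    print(lab, scores[idx].mean(axis=0))       # cluster centroids in PC space
```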
NASA Astrophysics Data System (ADS)
Gupta, Jhalak; Ahmad, Arham S.
2018-05-01
Nanocrystallites of pure and Fe-doped nickel oxide (NiO) were synthesized by the cost-effective co-precipitation method using nickel nitrate as the initial precursor. The synthesized nickel oxide nanoparticles were characterized by X-ray diffraction (XRD), photoluminescence spectroscopy (PL) and an LCR meter. The crystallite size of the synthesized pure nickel oxide nanoparticles, obtained from XRD using the Debye-Scherrer formula, was found to be 21.8 nm, and the size decreases with increasing dopant concentration. The optical properties were analyzed by PL and the dielectric properties with the LCR meter.
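The crystallite-size estimate quoted above presumably follows the standard Scherrer relation (assumed form; the abstract does not write it out):

$$ D = \frac{K\lambda}{\beta\cos\theta}, $$

where $D$ is the crystallite size, $K \approx 0.9$ a shape factor, $\lambda$ the X-ray wavelength, $\beta$ the peak full width at half maximum in radians, and $\theta$ the Bragg angle.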
Growth and characterization of pure and Cadmium chloride doped KDP Crystals grown by gel medium
NASA Astrophysics Data System (ADS)
Kalaivani, M. S.; Asaithambi, T.
2016-10-01
Crystal growth technology provides an important basis for many industrial branches. Crystals are the unrecognized pillars of modern technology: without them there would be no electronics industry, no photonics industry, and no fiber-optic communications. Single crystals play a major role and form the strongest base for the fast-growing fields of engineering, science and technology. Crystal growth is an interdisciplinary subject covering physics, chemistry, materials science, chemical engineering, metallurgy, crystallography, mineralogy, etc. In the past few decades there has been keen interest in crystal growth processes, particularly in view of the increasing demand for materials for technological applications. Optically good-quality pure and metal-doped KDP crystals have been grown by the gel method at room temperature and their characterization has been studied. The gel method is a simple technique that can be used to synthesize crystals of low solubility. Potassium dihydrogen orthophosphate, KH2PO4 (KDP), continues to be an interesting material both academically and industrially. KDP is a representative of hydrogen-bonded materials, possessing very good electro-optic and nonlinear optical properties in addition to interesting electrical properties. Owing to these properties, we attempted to grow pure and cadmium chloride-doped KDP crystals at various concentrations (0.002, 0.004, 0.006, 0.008 and 0.010) using the gel method. The grown crystals were collected after 20 days; good-quality, well-shaped crystals were obtained. The dc electrical properties (resistance, capacitance and dielectric constant) of the pure and cadmium chloride-doped crystals were measured at frequencies of 1 kHz and 100 Hz over a temperature range of 40 °C to 130 °C using a simple two-probe setup with a Q-band digital LCR meter available in our lab. The electrical conductivity increases with increasing temperature. The dielectric constants of the metal-doped KDP crystals were slightly decreased compared to pure KDP crystals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Robert Y., E-mail: rx-tang@laurentian.ca; McDonald, Nancy, E-mail: mcdnancye@gmail.com; Laamanen, Curtis, E-mail: cx-laamanen@laurentian.ca
Purpose: To develop a method to estimate the mean fractional volume of fat (ν̄_fat) within a region of interest (ROI) of a tissue sample for wide-angle x-ray scatter (WAXS) applications. A scatter signal from the ROI was obtained, and use of ν̄_fat in a WAXS fat-subtraction model provided a way to estimate the differential linear scattering coefficient μ_s of the remaining fatless tissue. Methods: The efficacy of the method was tested using animal tissue from a local butcher shop. Formalin-fixed samples, 5 mm in diameter and 4 mm thick, were prepared. The two main tissue types were fat and meat (fibrous). Pure as well as composite samples consisting of a mixture of the two tissue types were analyzed. For the latter samples, ν_fat for the tissue columns of interest was extracted from corresponding pixels in CCD digital x-ray images using a calibration curve. The means ν̄_fat were then calculated for use in a WAXS fat-subtraction model. For the WAXS measurements, the samples were interrogated with a 2.7 mm diameter 50 kV beam, and the 6° scattered photons were detected with a CdTe detector subtending a solid angle of 7.75 × 10⁻⁵ sr. Using the scatter spectrum, an estimate of the incident spectrum, and a scatter model, μ_s was determined for the tissue in the ROI. For the composite samples, a WAXS fat-subtraction model was used to estimate the μ_s of the fibrous tissue in the ROI. This signal was compared to the μ_s of fibrous tissue obtained using a pure fibrous sample. Results: For chicken and beef composites, ν̄_fat = 0.33 ± 0.05 and 0.32 ± 0.05, respectively. The subtraction of these fat components from the WAXS composite signals provided estimates of μ_s for chicken and beef fibrous tissue. The differences between the estimates and the μ_s of fibrous tissue obtained with a pure sample were calculated as a function of the momentum transfer x. A t-test showed that the mean of the differences did not vary from zero in a statistically significant way, thereby validating the methods. Conclusions: The methodology to estimate ν̄_fat in a ROI of a tissue sample via CCD x-ray imaging was quantitatively accurate. The WAXS fat-subtraction model allowed the μ_s of fibrous tissue to be obtained from a ROI which had some fat. The fat-estimation method coupled with the WAXS models can be used to compare μ_s coefficients of fibroglandular and cancerous breast tissue.
ERIC Educational Resources Information Center
Kalindi, Sylvia Chanda; McBride, Catherine; Tong, Xiuhong; Wong, Natalie Lok Lee; Chung, Kien Hoa Kevin; Lee, Chia-Ying
2015-01-01
To examine cognitive correlates of dyslexia in Chinese and reading difficulties in English as a foreign language, a total of 14 Chinese dyslexic children (DG), 16 poor readers of English (PE), and 17 poor readers of both Chinese and English (PB) were compared to a control sample (C) of 17 children, drawn from a statistically representative sample…
NASA Astrophysics Data System (ADS)
Stapp, Henry P.
2011-11-01
The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.
Measurements of multi-scalar mixing in a turbulent coaxial jet
NASA Astrophysics Data System (ADS)
Hewes, Alais; Mydlarski, Laurent
2017-11-01
There are relatively few studies of turbulent multi-scalar mixing, despite the occurrence of this phenomenon in common processes (e.g. chemically reacting flows, oceanic mixing). In the present work, we simultaneously measure the evolution of two passive scalars (temperature and helium concentration) and velocity in a coaxial jet. Such a flow is particularly relevant, as coaxial jets are regularly employed in applications of turbulent non-premixed combustion, which relies on multi-scalar mixing. The coaxial jet used in the current experiment is based on the work of Cai et al. (J. Fluid Mech., 2011), and consists of a vertically oriented central jet of helium and air, surrounded by an annular flow of (unheated) pure air, emanating into a slow co-flow of (pure) heated air. The simultaneous two-scalar and velocity measurements are made using a 3-wire hot-wire anemometry probe. The first two wires of this probe form an interference (or Way-Libby) probe, and measure velocity and concentration. The third wire, a hot-wire operating at a low overheat ratio, measures temperature. The 3-wire probe is used to obtain concurrent velocity, concentration, and temperature statistics to characterize the mixing process by way of single and multivariable/joint statistics. Supported by the Natural Sciences and Engineering Research Council of Canada (Grant 217184).
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare the thresholds estimated using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt ( www.divat.fr ). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze data from an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
The influence of dopants on the nucleation of semiconductor nanocrystals from homogeneous solution.
Bryan, J Daniel; Schwartz, Dana A; Gamelin, Daniel R
2005-09-01
The influence of Co2+ ions on the homogeneous nucleation of ZnO is examined. Using electronic absorption spectroscopy as a dopant-specific in-situ spectroscopic probe, Co2+ ions are found to be quantitatively excluded from the ZnO critical nuclei but incorporated nearly statistically in the subsequent growth layers, resulting in crystallites with pure ZnO cores and Zn(1-x)Co(x)O shells. Strong inhibition of ZnO nucleation by Co2+ ions is also observed. These results are explained using the classical nucleation model. Statistical analysis of nucleation inhibition data allows estimation of the critical nucleus size as 25 ± 4 Zn2+ ions. Bulk calorimetric data allow the activation barrier for ZnO nucleation containing a single Co2+ impurity to be estimated as 5.75 kcal/mol cluster greater than that of pure ZnO, corresponding to a 1.5 × 10⁴-fold reduction in the ZnO nucleation rate constant upon introduction of a single Co2+ impurity. These data and analysis offer a rare view into the role of composition in homogeneous nucleation processes, and specifically address recent experiments targeting formation of semiconductor quantum dots containing single magnetic impurity ions at their precise centers.
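In the classical nucleation picture invoked here (standard form, not spelled out in the abstract), the free energy of an n-ion cluster and the nucleation rate J behave as

$$ \Delta G(n) = -n\,\Delta\mu + \gamma\,A(n), \qquad J \propto \exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right), $$

where Δμ is the supersaturation driving force per ion, γ the surface energy, A(n) the cluster surface area, and ΔG* the barrier at the critical size. As a consistency check on the numbers above: a barrier increase of 5.75 kcal/mol at T ≈ 298 K (RT ≈ 0.593 kcal/mol) gives exp(5.75/0.593) ≈ 1.6 × 10⁴, matching the quoted ~1.5 × 10⁴-fold reduction in the nucleation rate constant.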
A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals
NASA Technical Reports Server (NTRS)
Jones, Michael G.
1998-01-01
The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.
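The abstract does not name the three extraction methods compared; as a generic illustration of one common way to recover a tone from broadband noise, here is a minimal quadrature-demodulation sketch (synthetic data, illustrative only, not the report's procedure):

```python
# Minimal sketch: quadrature (lock-in style) extraction of a pure tone's
# amplitude from a signal buried in broadband noise.
import numpy as np

fs, f0, dur = 50_000.0, 1_000.0, 2.0           # sample rate, tone freq, seconds
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
signal = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.normal(scale=1.0, size=t.size)

# Demodulate with in-phase and quadrature references, then average:
i_comp = np.mean(signal * np.sin(2 * np.pi * f0 * t))
q_comp = np.mean(signal * np.cos(2 * np.pi * f0 * t))
amplitude = 2.0 * np.hypot(i_comp, q_comp)     # recovers ~0.1 despite the noise
print(f"estimated tone amplitude: {amplitude:.3f}")
```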
Renuka, N; Ramesh Babu, R; Vijayan, N; Vasanthakumar, Geetha; Krishna, Anuj; Ramamurthi, K
2015-02-25
In the present work, pure and metal-substituted L-prolinium trichloroacetate (LPTCA) single crystals were grown by the slow evaporation method. The grown crystals were subjected to single-crystal X-ray diffraction (XRD), powder X-ray diffraction, FTIR, UV-Visible-NIR, hardness, photoluminescence and dielectric studies. The dopant concentration in the crystals was measured by inductively coupled plasma (ICP) analysis. Single-crystal X-ray diffraction studies of the pure and metal-substituted LPTCA revealed that the grown crystals belong to the trigonal system. Ni(2+) and Co(2+) doping slightly altered the lattice parameters of LPTCA without affecting the basic structure of the crystal. FTIR spectral analysis confirms the presence of various functional groups in the grown crystals. The mechanical behavior of the pure and doped crystals was analyzed by the Vickers microhardness test. The optical transmittance, dielectric and photoluminescence properties of the pure and doped crystals were analyzed. Copyright © 2014 Elsevier B.V. All rights reserved.
Ciocca, L.; Donati, D.; Ragazzini, S.; Dozza, B.; Rossi, F.; Fantini, M.; Spadari, A.; Romagnoli, N.; Landi, E.; Tampieri, A.; Piattelli, A.; Iezzi, G.; Scotti, R.
2013-01-01
Purpose. This study evaluated the efficacy of a regenerative approach using mesenchymal stem cells (MSCs) and CAD-CAM customized pure and porous hydroxyapatite (HA) scaffolds to replace the temporomandibular joint (TMJ) condyle. Methods. Pure HA scaffolds with a 70% total porosity volume were prototyped using CAD-CAM technology to replace the two temporomandibular condyles (left and right) of the same animal. MSCs were derived from the aspirated iliac crest bone marrow, and platelets were obtained from the venous blood of the sheep. Custom-made surgical guides were created by direct metal laser sintering and were used to export the virtual planning of the bone cut lines into the surgical environment. Sheep were sacrificed 4 months postoperatively. The HA scaffolds were explanted, histological specimens were prepared, and histomorphometric analysis was performed. Results. Analysis of the porosity reduction for apposition of newly formed bone showed a statistically significant difference in bone formation between condyles loaded with MSC and condyles without (P < 0.05). The bone ingrowth (BI) relative values of split-mouth comparison (right versus left side) showed a significant difference between condyles with and without MSCs (P < 0.05). Analysis of the test and control sides in the same animal using a split-mouth study design was performed; the condyle with MSCs showed greater bone formation. Conclusion. The split-mouth design confirmed an increment of bone regeneration into the HA scaffold of up to 797% upon application of MSCs. PMID:24073409
Detecting measurement outliers: remeasure efficiently
NASA Astrophysics Data System (ADS)
Ullrich, Albrecht
2010-09-01
Shrinking structures, advanced optical proximity correction (OPC) and complex measurement strategies continually challenge critical dimension (CD) metrology tools and recipe creation processes. One important quality-ensuring task is the control of measurement outlier behavior. Outliers can trigger false-positive alarms for specification violations, impacting cycle time or potentially yield. A constantly high level of outliers not only degrades cycle time but also puts unnecessary stress on tool operators, eventually leading to human errors. At tool level the sources of outliers are natural variations (e.g. beam current), drifts, contrast conditions, focus determination or pattern recognition issues, etc. Some of these can result from suboptimal or even wrong recipe settings, such as focus position or measurement box size. Such outliers, created by an automatic recipe creation process faced with increasingly complicated structures, manifest themselves as systematic variation of measurements rather than as 'pure' tool variation. I analyzed several statistical methods to detect outliers, ranging from classical outlier tests for extrema and robust metrics like the interquartile range (IQR) to methods evaluating the distributions of different populations of measurement sites, like the Cochran test. The latter especially suits the detection of systematic effects. The next level of outlier detection combines the measurement results with additional information about the mask and the manufacturing process. The methods were reviewed for measured variations assumed to be normally distributed with zero mean, but also in the presence of a statistically significant spatial process signature. I arrive at the conclusion that intelligent outlier detection can greatly influence the efficiency and cycle time of CD metrology. In combination with process information such as target, typical platform variation and signature, one can tailor the detection to the needs of the photomask at hand. By monitoring the outlier behavior carefully, weaknesses of the automatic recipe creation process can be spotted.
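As a concrete illustration of the IQR approach mentioned above, here is a minimal sketch of the classical 1.5×IQR fence (a generic rule, not the paper's implementation):

```python
# Minimal sketch: flagging CD measurement outliers with the 1.5*IQR fence.
import numpy as np

def iqr_outliers(values: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Return a boolean mask marking values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

cd = np.array([50.1, 49.8, 50.3, 50.0, 55.9, 49.9, 50.2])  # hypothetical CDs, nm
print(iqr_outliers(cd))  # the 55.9 nm site is flagged
```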
NASA Astrophysics Data System (ADS)
Mareeswaran, S.; Asaithambi, T.
2016-10-01
Nowadays, crystals are the pillars of modern technology. Crystals are applied in various fields such as fiber-optic communications, the electronics industry, the photonics industry, etc. Crystal growth is an interesting and innovative field within physics, chemistry, materials science, metallurgy, chemical engineering, mineralogy and crystallography. In recent decades, optically good-quality pure and metal-doped KDP crystals have been grown by the gel growth method at room temperature and their characterization has been studied. The gel method is one of the simplest of the various crystal growth methods. Potassium dihydrogen phosphate, KH2PO4 (KDP), continues to be an interesting material both academically and technologically. KDP is a representative of hydrogen-bonded materials, possessing very good electrical and nonlinear optical properties in addition to interesting electro-optic properties. We attempted to grow pure and titanium oxide-doped KDP crystals at various doping concentrations (0.002, 0.004, 0.006, 0.008 and 0.010) using the gel method. The grown crystals were collected after 20 days; good-quality, well-shaped crystals were obtained. The dc electrical properties (resistance, capacitance and dielectric constant) of the grown crystals were measured at two frequencies (1 kHz and 100 Hz) over a temperature range of 50 °C to 120 °C using a simple two-probe setup with a Q-band digital LCR meter available in our lab. The electrical conductivity increases with increasing temperature. The dielectric constant of the titanium oxide-doped KDP crystal was slightly decreased compared with pure KDP crystals. The results are discussed in detail.
Henriques, C. A. O.; Freitas, E. D. C.; Azevedo, C. D. R.; ...
2017-09-12
Xe-CO2 mixtures are important alternatives to pure xenon in Time Projection Chambers (TPC) based on secondary scintillation (electroluminescence) signal amplification, with applications in the important field of rare event detection such as directional dark matter, double electron capture and double beta decay detection. The addition of CO2 to pure xenon at the level of 0.05-0.1% can significantly reduce the scale of electron diffusion, from 10 mm/√m to 2.5 mm/√m, with high impact on the discrimination efficiency of events through pattern recognition of the topology of primary ionization trails. We have measured the electroluminescence (EL) yield of Xe-CO2 mixtures with sub-percent CO2 concentrations. We demonstrate that the EL production is still high in these mixtures, 70% and 35% relative to that produced in pure xenon, for CO2 concentrations around 0.05% and 0.1%, respectively. In conclusion, the contribution of the statistical fluctuations in EL production to the energy resolution increases with increasing CO2 concentration, being smaller than the contribution of the Fano factor for concentrations below 0.1% CO2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisen, Supriya; Mishra, Ashutosh; Jarabana, Kanaka M.
2016-05-23
In this work, barium titanate (BaTiO3) powders were synthesized via the sol-gel auto-combustion method using citric acid as a chelating agent. We study the ferroelectric and dielectric properties of pure and doped BaTiO3 at different dopant concentrations. To determine the phase and structure, the powders calcined at 900°C were characterized by X-ray diffraction, which shows that the tetragonal phase is dominant for pure and doped BTO; the data were fitted by Rietveld refinement. Electric and dielectric properties were characterized by P-E hysteresis and dielectric measurements. In the P-E measurements, a ferroelectric loop tracer was applied at different voltages. The temperature-dependent dielectric constant was recorded as a function of frequency on a Hewlett-Packard 4192A LF impedance analyzer (5 Hz-13 MHz).
Bharath, Nagaraj; Sowmya, Nagur Karibasappa; Mehta, Dhoom Singh
2015-01-01
The aim of this study was to evaluate the antibacterial activity of pure green coffee bean extract on the periodontopathogenic bacteria Porphyromonas gingivalis (Pg), Prevotella intermedia (Pi), Fusobacterium nucleatum (Fn) and Aggregatibacter actinomycetemcomitans (Aa). Minimum inhibitory concentrations (MIC) and minimum bactericidal concentrations (MBC) were used to assess the antibacterial effect of pure green coffee bean extract against these bacteria by the microdilution method and the culture method, respectively. MIC values for Pg, Pi and Aa were 0.2 μg/ml, whereas Fn was sensitive at a concentration of 3.125 μg/ml. MBC values mirrored the MIC values. The antimicrobial activity of pure green coffee bean extract against Pg, Pi, Fn and Aa suggests that it could be recommended as an adjunct to mechanical therapy in the management of periodontal disease.
Jiang, Zhenzuo; Liu, Yanan; Zhu, Yan; Yang, Jing; Sun, Lili; Chai, Xin; Wang, Yuefei
2016-09-01
Human milk, infant formula, pure milk and fermented milk, as food products or dietary supplements, provide a range of nutrients required by both infants and adults. Recently, a growing body of evidence has revealed the beneficial roles of short-chain fatty acids (SCFAs), a subset of fatty acids produced from the fermentation of dietary fibers by gut microbiota. The objective of this study was to establish a chromatographic fingerprint technique to investigate SCFAs in human milk and dairy products by gas chromatography coupled with mass spectrometry. Principal component analysis was used to assess differences between milk types. Human milk, infant formula, pure milk and fermented milk were grouped independently, mainly because of differences in formic acid, acetic acid, propionic acid and hexanoic acid levels. This method will be important for the assessment of SCFAs in human milk and various dairy products.
Binary gas mixture adsorption-induced deformation of microporous carbons by Monte Carlo simulation.
Cornette, Valeria; de Oliveira, J C Alexandre; Yelpo, Víctor; Azevedo, Diana; López, Raúl H
2018-07-15
Considering the thermodynamic grand potential for more than one adsorbate in an isothermal system, we generalize the model of adsorption-induced deformation of microporous carbons developed by Kowalczyk et al. [1]. We report a comprehensive study of the effects of adsorption-induced deformation of carbonaceous amorphous porous materials due to adsorption of carbon dioxide, methane and their mixtures. The adsorption process is simulated by using the Grand Canonical Monte Carlo (GCMC) method and the calculations are then used to analyze experimental isotherms for the pure gases and mixtures with different molar fraction in the gas phase. The pore size distribution determined from an experimental isotherm is used for predicting the adsorption-induced deformation of both pure gases and their mixtures. The volumetric strain (ε) predictions from the GCMC method are compared against relevant experiments with good agreement found in the cases of pure gases. Copyright © 2018 Elsevier Inc. All rights reserved.
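For orientation, GCMC simulations of this kind are built on standard particle insertion/deletion moves; a minimal sketch in the ideal-gas limit (no adsorbate interactions or pore walls, unlike the paper's model) is:

```python
# Minimal sketch of GCMC insertion/deletion moves, ideal-gas limit.
# For a non-interacting gas, P(N) is Poisson with mean z*V, which the
# acceptance rules below reproduce via detailed balance.
import numpy as np

rng = np.random.default_rng(0)
z_V = 50.0       # activity * volume, z = exp(mu/kT)/Lambda^3 (assumed value)
N = 0            # current particle count
samples = []

for step in range(200_000):
    if rng.random() < 0.5:                       # attempt insertion
        if rng.random() < min(1.0, z_V / (N + 1)):
            N += 1
    elif N > 0:                                  # attempt deletion
        if rng.random() < min(1.0, N / z_V):
            N -= 1
    if step > 50_000:                            # discard equilibration steps
        samples.append(N)

print("mean N:", np.mean(samples))  # approaches z_V = 50
```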
Statistics of a neuron model driven by asymmetric colored noise.
Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin
2015-02-01
Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (the time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
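As a companion to the exact results, a minimal simulation sketch of the model (perfect integrate-and-fire plus asymmetric two-state noise; parameter values are illustrative, not the paper's):

```python
# Minimal sketch: perfect integrate-and-fire neuron driven by asymmetric
# dichotomous noise; ISI statistics estimated by Euler simulation.
import numpy as np

rng = np.random.default_rng(2)
dt, v_th = 1e-3, 1.0
mu = 1.0                        # deterministic drift toward threshold
a_plus, a_minus = 0.8, -0.5     # asymmetric dichotomous noise amplitudes
k_plus, k_minus = 5.0, 8.0      # switching rates out of the +/- states

v, state, t, t_last = 0.0, +1, 0.0, 0.0
isis = []
while len(isis) < 500:
    rate = k_plus if state > 0 else k_minus
    if rng.random() < rate * dt:            # Markov two-state switching
        state = -state
    v += (mu + (a_plus if state > 0 else a_minus)) * dt
    t += dt
    if v >= v_th:                           # threshold crossing = spike
        isis.append(t - t_last)
        t_last, v = t, 0.0

isis = np.array(isis)
print(f"mean ISI = {isis.mean():.3f}, CV = {isis.std()/isis.mean():.3f}")
```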
Polarization-dependent optical reflection ultrasonic detection
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyi; Huang, Zhiyu; Wang, Guohe; Li, Wenzhao; Li, Changhui
2017-03-01
Although ultrasound transducers based on commercial piezoelectric materials have been widely used, they generally have limited bandwidth centered at the resonant frequency. Several pure-optical ultrasonic detection methods have recently gained increasing interest due to their wide bandwidth and high sensitivity. However, most of them require customized components (such as micro-rings, SPR sensors, or Fabry-Perot films), which limits their broad implementation. In this study, we present a simple pure-optical ultrasound detection method called Polarization-dependent Reflection Ultrasonic Detection (PRUD). It detects the intensity difference between two polarization components of a probe beam modulated by ultrasound waves. PRUD measures the two components with a balanced detector, which effectively suppresses much of the unwanted noise. We achieved a sensitivity (noise-equivalent pressure) of 1.7 kPa, and this can be further improved. In addition, like many other pure-optical ultrasonic detection methods, PRUD has a flat and broad bandwidth from almost zero to over 100 MHz. Besides theoretical analysis, we performed a phantom study by imaging a tungsten filament to demonstrate the performance of PRUD. We believe this simple and economical method will attract both researchers and engineers in the optical and ultrasound fields.
Vajna, Balázs; Farkas, Attila; Pataki, Hajnalka; Zsigmond, Zsolt; Igricz, Tamás; Marosi, György
2012-01-27
Chemical imaging is a rapidly emerging analytical method in pharmaceutical technology. Due to the numerous chemometric solutions available, characterization of pharmaceutical samples with unknown components present has also become possible. This study compares the performance of current state-of-the-art curve resolution methods (multivariate curve resolution-alternating least squares, positive matrix factorization, simplex identification via split augmented Lagrangian and self-modelling mixture analysis) in the estimation of pure component spectra from Raman maps of differently manufactured pharmaceutical tablets. The batches of different technologies differ in the homogeneity level of the active ingredient, thus, the curve resolution methods are tested under different conditions. An empirical approach is shown to determine the number of components present in a sample. The chemometric algorithms are compared regarding the number of detected components, the quality of the resolved spectra and the accuracy of scores (spectral concentrations) compared to those calculated with classical least squares, using the true pure component (reference) spectra. It is demonstrated that using appropriate multivariate methods, Raman chemical imaging can be a useful tool in the non-invasive characterization of unknown (e.g. illegal or counterfeit) pharmaceutical products. Copyright © 2011 Elsevier B.V. All rights reserved.
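To make the curve-resolution idea concrete, here is a bare-bones non-negative alternating-least-squares sketch on synthetic data (an MCR-ALS-style loop for illustration, not any of the benchmarked implementations):

```python
# Minimal sketch: resolve pure-component spectra S and concentration maps C
# from an unfolded spectral map D ~ C @ S.T by projected alternating LS.
import numpy as np

def mcr_als(D: np.ndarray, n_comp: int, n_iter: int = 200):
    rng = np.random.default_rng(0)
    S = np.abs(rng.normal(size=(D.shape[1], n_comp)))   # initial spectra guess
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
    return C, S

# Hypothetical data: 400 map pixels x 600 wavenumbers, 3 components.
rng = np.random.default_rng(1)
C_true = rng.dirichlet(np.ones(3), size=400)
S_true = np.abs(rng.normal(size=(600, 3)))
D = C_true @ S_true.T + rng.normal(scale=0.01, size=(400, 600))
C_est, S_est = mcr_als(D, n_comp=3)
print(C_est.shape, S_est.shape)   # (400, 3), (600, 3)
```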
NASA Astrophysics Data System (ADS)
Smith, A.; Siegel, Edward Carl-Ludwig
2011-03-01
Numbers: primality/indivisibility/non-factorization versus compositeness/divisibility/ factorization, often in tandem but not always, provocatively close analogy to nuclear-physics: (2 + 1)=(fusion)=3; (3+1)=(fission)=4[=2 x 2]; (4+1)=(fusion)=5; (5 +1)=(fission)=6[=2 x 3]; (6 + 1)=(fusion)=7; (7+1)=(fission)=8[= 2 x 4 = 2 x 2 x 2]; (8 + 1) =(non: fission nor fusion)= 9[=3 x 3]; then ONLY composites' Islands of fusion-INstability: 8, 9, 10; then 14, 15, 16, ... Could inter-digit Feshbach-resonances exist??? Possible applications to: quantum-information/ computing non-Shor factorization, millennium-problem Riemann-hypotheses proof as Goodkin BEC intersection with graph-theory "short-cut" method: Rayleigh(1870)-Polya(1922)-"Anderson"(1958)-localization, Goldbach-conjecture, financial auditing/accounting as quantum-statistical-physics; ...abound!!! Watkins [www.secamlocal.ex.ac.uk/people/staff/mrwatkin/] "Number-Theory in Physics" many interconnections: "pure"-maths number-theory to physics including Siegel [AMS Joint Mtg.(2002)-Abs.# 973-60-124] inversion of statistics on-average digits' Newcomb(1881)-Weyl(14-16)-Benford(38)-law to reveal both the quantum and BEQS (digits = bosons = digits:"spinEless-boZos"). 1881 1885 1901 1905 1925 < 1927, altering quantum-theory history!!!
Qualitatively Assessing Randomness in SVD Results
NASA Astrophysics Data System (ADS)
Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.
2012-12-01
Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application of SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of identifying random co-variance relationships. On the other hand, a climate region that is too small may miss climate signals that have more than a statistical relationship to a hydrologic response variable. The proposed research seeks to develop a qualitative method for identifying random co-variability relationships between two datasets. The research takes heterogeneous correlation maps from several past results and compares them with correlation maps produced using purely random and quasi-random climate data. The comparison identifies a methodology to determine whether a particular region on a correlation map can be explained by a physical mechanism or is simply statistical chance.
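For readers unfamiliar with the technique, here is a minimal sketch of SVD applied to the cross-covariance between two anomaly fields (generic illustration with random stand-in data, not the study's datasets):

```python
# Minimal sketch: SVD of the cross-covariance matrix between two spatially
# distributed anomaly fields, yielding paired co-varying modes.
import numpy as np

rng = np.random.default_rng(3)
n_time, n_grid_a, n_grid_b = 120, 50, 40       # hypothetical dimensions
A = rng.normal(size=(n_time, n_grid_a))        # e.g. SST anomalies
B = rng.normal(size=(n_time, n_grid_b))        # e.g. streamflow anomalies

A -= A.mean(axis=0)                            # remove the time mean
B -= B.mean(axis=0)
C = A.T @ B / (n_time - 1)                     # cross-covariance matrix
U, s, Vt = np.linalg.svd(C, full_matrices=False)

scf = s**2 / np.sum(s**2)                      # squared covariance fraction
print("leading mode explains %.1f%% of squared covariance" % (100 * scf[0]))
```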
Vasconcelos, Karla Anacleto de; Frota, Silvana Maria Monte Coelho; Ruffino-Netto, Antonio; Kritski, Afrânio Lineu
2018-04-01
To investigate early detection of amikacin-induced ototoxicity in a population treated for multidrug-resistant tuberculosis (MDR-TB), by means of three different tests: pure-tone audiometry (PTA); high-frequency audiometry (HFA); and distortion-product otoacoustic emission (DPOAE) testing. This was a longitudinal prospective cohort study involving patients aged 18-69 years with a diagnosis of MDR-TB who had to receive amikacin for six months as part of their antituberculosis drug regimen for the first time. Hearing was assessed before treatment initiation and at two and six months after treatment initiation. Sequential statistics were used to analyze the results. We included 61 patients, but the final population consisted of 10 patients (7 men and 3 women) because of sequential analysis. Comparison of the test results obtained at two and six months after treatment initiation with those obtained at baseline revealed that HFA at two months and PTA at six months detected hearing threshold shifts consistent with ototoxicity. However, DPOAE testing did not detect such shifts. The statistical method used in this study makes it possible to conclude that, over the six-month period, amikacin-associated hearing threshold shifts were detected by HFA and PTA, and that DPOAE testing was not efficient in detecting such shifts.
Additive interaction between heterogeneous environmental ...
BACKGROUND: Environmental exposures often occur in tandem; however, epidemiological research often focuses on singular exposures. Statistical interactions among broad, well-characterized environmental domains have not yet been evaluated in association with health. We address this gap by conducting a county-level cross-sectional analysis of interactions between Environmental Quality Index (EQI) domain indices on preterm birth in the United States from 2000-2005. METHODS: The EQI, a county-level index for the 2000-2005 time period, was constructed from five domain-specific indices (air, water, land, built and sociodemographic) using principal component analyses. County-level preterm birth rates (n=3141) were estimated using live births from the National Center for Health Statistics. Linear regression was used to estimate prevalence differences (PD) and 95% confidence intervals (CI) comparing worse environmental quality to better quality for each model for (a) each individual domain main effect, (b) the interaction contrast and (c) the two main effects plus the interaction effect (i.e. the "net effect") to show departure from additive interaction for all U.S. counties. Analyses were also performed for subgroupings by four urban/rural strata. RESULTS: We found the suggestion of antagonistic interactions but no synergism, along with several purely additive (i.e., no interaction) associations. In the non-stratified model, we observed antagonistic interactions.
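To illustrate the interaction-contrast idea on an additive scale, here is a minimal regression sketch with a product term (hypothetical variables and simulated data, not the study's EQI dataset):

```python
# Minimal sketch: estimating an additive interaction between two binary
# environmental domain indicators on a county-level preterm birth rate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 3141                                        # counties
df = pd.DataFrame({
    "air_poor": rng.integers(0, 2, n),          # 1 = worse air quality
    "water_poor": rng.integers(0, 2, n),        # 1 = worse water quality
})
df["ptb_rate"] = (0.10 + 0.01 * df.air_poor + 0.008 * df.water_poor
                  - 0.004 * df.air_poor * df.water_poor   # built-in antagonism
                  + rng.normal(scale=0.02, size=n))

fit = smf.ols("ptb_rate ~ air_poor * water_poor", data=df).fit()
print(fit.params)   # the product term estimates the interaction contrast
```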
NASA Astrophysics Data System (ADS)
Luo, JunYan; Yan, Yiying; Huang, Yixiao; Yu, Li; He, Xiao-Ling; Jiao, HuJun
2017-01-01
We investigate the noise correlations of spin and charge currents through an electron spin resonance (ESR)-pumped quantum dot, which is tunnel coupled to three electrodes maintained at an equivalent chemical potential. A recursive scheme is employed with inclusion of the spin degrees of freedom to account for the spin-resolved counting statistics in the presence of non-Markovian effects due to coupling with a dissipative heat bath. For symmetric spin-up and spin-down tunneling rates, an ESR-induced spin flip mechanism generates a pure spin current without an accompanying net charge current. The stochastic tunneling of spin carriers, however, produces universal shot noises of both charge and spin currents, revealing the effective charge and spin units of quasiparticles in transport. In the case of very asymmetric tunneling rates for opposite spins, an anomalous relationship between noise autocorrelations and cross correlations is revealed, where super-Poissonian autocorrelation is observed in spite of a negative cross correlation. Remarkably, with strong dissipation strength, non-Markovian memory effects give rise to a positive cross correlation of the charge current in the absence of a super-Poissonian autocorrelation. These unique noise features may offer essential methods for exploiting internal spin dynamics and various quasiparticle tunneling processes in mesoscopic transport.
Wang, Fang-Xu; Yuan, Jian-Chao; Kang, Li-Ping; Pang, Xu; Yan, Ren-Yi; Zhao, Yang; Zhang, Jie; Sun, Xin-Guang; Ma, Bai-Ping
2016-09-10
An ultra-high-performance liquid chromatography quadrupole time-of-flight tandem mass spectrometry approach coupled with multivariate statistical analysis was established and applied to rapidly distinguish the chemical differences between the fibrous root and rhizome of Anemarrhena asphodeloides. The datasets of tR-m/z pairs, ion intensity and sample code were processed by principal component analysis and orthogonal partial least squares discriminant analysis. Chemical markers could be identified based on their exact mass data, fragmentation characteristics, and retention times. New compounds among the chemical markers could be rapidly isolated under the guidance of this approach, and their definitive structures further elucidated by NMR spectra. Using this approach, twenty-four markers were identified online, including nine new saponins, five of which (new steroidal saponins) were obtained in pure form. The study validated the proposed approach as a suitable method for identifying chemical differences between various medicinal parts, in order to expand the usable medicinal parts and increase the utilization rate of resources. Copyright © 2016 Elsevier B.V. All rights reserved.
Hybrid modeling as a QbD/PAT tool in process development: an industrial E. coli case study.
von Stosch, Moritz; Hamelink, Jan-Martijn; Oliveira, Rui
2016-05-01
Process understanding is emphasized in the process analytical technology initiative and the quality by design paradigm as essential for manufacturing biopharmaceutical products with consistently high quality. A typical approach to developing process understanding is to combine design of experiments with statistical data analysis. Hybrid semi-parametric modeling is investigated here as an alternative to pure statistical data analysis. The hybrid model framework provides flexibility to select model complexity based on available data and knowledge. Here, a parametric dynamic bioreactor model is integrated with a nonparametric artificial neural network that describes biomass and product formation rates as a function of varied fed-batch fermentation conditions for high cell density heterologous protein production with E. coli. Our model can accurately describe biomass growth and product formation across variations in induction temperature, pH and feed rates. The model indicates that while the product expression rate is a function of early induction phase conditions, it is negatively impacted as productivity increases. This could correspond to physiological changes due to cytoplasmic product accumulation. Due to the dynamic nature of the model, rational process timing decisions can be made, and the impact of temporal variations in process parameters on product formation and process performance can be assessed, which is central for process understanding.
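To show the structure of such a hybrid semi-parametric model, here is a minimal sketch (illustrative only, with untrained toy weights and normalized inputs; the paper's model and parameters are not reproduced): a mass-balance ODE supplies the parametric part, while a tiny neural network supplies the specific growth rate.

```python
# Minimal sketch: hybrid model = parametric mass balance dX/dt = mu * X,
# with the rate mu coming from a small (untrained) neural network.
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # toy network weights
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def mu_nn(temp, ph, feed):
    """Nonparametric part: specific growth rate from process conditions."""
    h = np.tanh(W1 @ np.array([temp, ph, feed]) + b1)
    z = float(W2 @ h + b2)
    return 0.2 / (1.0 + np.exp(-z))              # bounded rate, 0..0.2 per hour

def simulate(x0=0.1, dt=0.1, hours=40.0, temp=0.0, ph=0.0, feed=0.5):
    """Parametric part: forward-Euler integration of the biomass balance."""
    x = x0
    for _ in range(int(hours / dt)):
        x += mu_nn(temp, ph, feed) * x * dt
    return x

print(f"biomass after 40 h: {simulate():.2f}")   # under the assumed conditions
```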
Bigras, Gilbert
2012-06-01
Color deconvolution relies on the determination of unitary optical density vectors (OD(3D)) derived from pure constituent stains initially defined as intensity vectors in RGB space. OD(3D) can be defined in polar coordinates (phi, theta, radius); since the radius is always equal to one, it can be ignored. Easier handling of unitary optical density 2D vectors (OD(2D)) is shown. OD(2D) pure stains used in anatomical pathology were assessed as centroid values (phi, theta) with a measure of variance: inertia, based on arc lengths between the centroid value and sampled points. These variables were plotted on a stereographic projection plane. To assess pure-stain OD(2D), different methods of sampling RGB pixels were tested and compared: (1) direct sampling of nuclei from preparations using (a) composite H&E and (b) hematoxylin only; and (2) for any pure-stain RGB image, different associated 8-bit masks (saturation, brightness and RGB average) used for sampling. The behaviors of phi, theta and inertia were obtained by moving a threshold in the 8-bit mask histograms. The stability of phi and theta was tested against variable light intensity during image acquisition and by using two different image acquisition systems. The more saturated the RGB pixels are, the more stable the phi, theta and inertia values obtained. Different commercial hematoxylins have distinct OD(2D) characteristics. The ultraView DAB stain shows high inertia and is angularly closer to the usual counterstains than the ultraView Red stain, which also has lower inertia; superior accuracy is expected from the latter stain. Phi and theta OD(2D) values are sensitive to light intensity variation, to the imaging system used and to the objectives used. An ImageJ plugin was designed to plot and interactively modify OD(2D) values with instant update of the color deconvolution, allowing heuristic segmentation. Utilization of polar OD(2D) eases the statistical characterization of OD(3D) vectors: conditions of optimal sampling were demonstrated and various factors influencing OD(2D) stability were explored. These findings are not restricted to anatomical pathology but apply to bright-field microscopy and subtractive color applications in general.
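For readers new to the underlying operation, here is a minimal color-deconvolution sketch in the standard Ruifrok-Johnston style (not the plugin itself; the stain vectors are commonly quoted values used here as assumptions):

```python
# Minimal sketch: convert RGB to optical density, OD = -log10(I / I0),
# then unmix two stains plus a residual channel with the inverse of a
# unit stain matrix.
import numpy as np

def rgb_to_od(rgb, i0=255.0):
    """Per-channel optical density; clip avoids log of zero intensity."""
    return -np.log10(np.clip(rgb, 1, None) / i0)

# Assumed unit OD vectors (rows): hematoxylin, DAB, and a residual channel.
stains = np.array([
    [0.650, 0.704, 0.286],
    [0.268, 0.570, 0.776],
    [0.711, 0.423, 0.561],
])
stains /= np.linalg.norm(stains, axis=1, keepdims=True)
unmix = np.linalg.inv(stains)

pixel = np.array([120.0, 90.0, 160.0])          # one hypothetical RGB pixel
amounts = rgb_to_od(pixel) @ unmix              # stain amounts at this pixel
print(amounts)
```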
Statistical analysis of EGFR structures' performance in virtual screening
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Xiang; Dong, Zigang
2015-11-01
In this work the ability of EGFR structures to distinguish true inhibitors from decoys in docking and MM-PBSA is assessed by statistical procedures. The docking performance depends critically on the receptor conformation and bound state. The enrichment of known inhibitors correlates well with the differences between EGFR structures rather than with the properties of the bound ligand. The optimal structures for virtual screening can be selected based purely on the complex information, and a mixed combination of distinct EGFR conformations is recommended for ensemble docking. In MM-PBSA, a variety of EGFR structures perform equally well in the scoring and ranking of known inhibitors, indicating that the choice of receptor structure has little effect on the screening.
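As a generic illustration of the kind of enrichment statistic used in such assessments (not the paper's exact procedure; all data below are synthetic), an early enrichment factor computed from docking scores:

```python
import numpy as np

def enrichment_factor(scores, is_active, top_frac=0.05):
    """EF at a given fraction of the ranked list: the active rate among
    the top-scored compounds divided by the overall active rate.
    Assumption: lower score = better docking result."""
    order = np.argsort(scores)
    n_top = max(1, int(len(scores) * top_frac))
    top_actives = np.sum(np.asarray(is_active)[order[:n_top]])
    return (top_actives / n_top) / np.mean(is_active)

rng = np.random.default_rng(0)
scores = rng.normal(size=1000)                      # synthetic docking scores
actives = scores + rng.normal(scale=2.0, size=1000) < -1.0
print(enrichment_factor(scores, actives))
```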
Rossell, David
2016-01-01
Big Data brings unprecedented power to address scientific, economic and societal issues, but also amplifies the possibility of certain pitfalls. These include using purely data-driven approaches that disregard understanding the phenomenon under study, aiming at a dynamically moving target, ignoring critical data collection issues, summarizing or preprocessing the data inadequately and mistaking noise for signal. We review some success stories and illustrate how statistical principles can help obtain more reliable information from data. We also touch upon current challenges that require active methodological research, such as strategies for efficient computation, integration of heterogeneous data, extending the underlying theory to increasingly complex questions and, perhaps most importantly, training a new generation of scientists to develop and deploy these strategies. PMID:27722040
Pinning time statistics for vortex lines in disordered environments.
Dobramysl, Ulrich; Pleimling, Michel; Täuber, Uwe C
2014-12-01
We study the pinning dynamics of magnetic flux (vortex) lines in a disordered type-II superconductor. Using numerical simulations of a directed elastic line model, we extract the pinning time distributions of vortex line segments. We compare different model implementations for the disorder in the surrounding medium: discrete, localized pinning potential wells that are either attractive and repulsive or purely attractive, and whose strengths are drawn from a Gaussian distribution; as well as continuous Gaussian random potential landscapes. We find that both schemes yield power-law distributions in the pinned phase as predicted by extreme-event statistics, yet they differ significantly in their effective scaling exponents and their short-time behavior.
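A common way to summarize such pinning-time data (a generic sketch, not the authors' estimator) is a maximum-likelihood fit of the power-law tail exponent:

```python
import numpy as np

def hill_exponent(times, t_min):
    """Maximum-likelihood (Hill) estimate of alpha for a power-law tail
    p(t) ~ t^(-alpha) above a chosen cutoff t_min."""
    tail = np.asarray(times, dtype=float)
    tail = tail[tail >= t_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / t_min))

rng = np.random.default_rng(1)
samples = rng.pareto(1.5, size=5000) + 1.0   # synthetic data, true alpha = 2.5
print(hill_exponent(samples, t_min=1.0))
```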
ERIC Educational Resources Information Center
Stiles, Derek J.; Bentler, Ruth A.; McGregor, Karla K.
2012-01-01
Purpose: To determine whether a clinically obtainable measure of audibility, the aided Speech Intelligibility Index (SII; American National Standards Institute, 2007), is more sensitive than the pure-tone average (PTA) at predicting the lexical abilities of children who wear hearing aids (CHA). Method: School-age CHA and age-matched children with…
Isolation of high purity americium metal via distillation
NASA Astrophysics Data System (ADS)
Squires, Leah N.; King, James A.; Fielding, Randall S.; Lessing, Paul
2018-03-01
Pure americium metal is a crucial component for the fabrication of transmutation fuels. Unfortunately, americium in pure metal form is not available; however, a number of mixed metals and mixed oxides that include americium are available. In this manuscript a method is described to obtain high purity americium metal from a mixture of americium and neptunium metals with lead impurity via distillation.
Effect of dissociation on thermodynamic properties of pure diatomic gases
NASA Technical Reports Server (NTRS)
Woolley, Harold W
1955-01-01
A graphical method is described by which the enthalpy, entropy, and compressibility factor for the equilibrium mixture of atoms and diatomic molecules for pure gaseous elements may be obtained and shown for any dissociating element for which the necessary data exist. Results are given for hydrogen, oxygen, and nitrogen. The effect of dissociation on the heat capacity is discussed briefly.
Chen, Pin; Toubal, Malika; Carlier, Julien; Harmand, Souad; Nongaillard, Bertrand; Bigerelle, Maxence
2016-09-27
Evaporation of droplets of three pure liquids (water, 1-butanol, and ethanol) and four binary solutions (a 5 wt % 1-butanol-water solution and 5, 25, and 50 wt % ethanol-water solutions) deposited on hydrophobic silicon was investigated. A drop shape analyzer was used to measure the contact angle, diameter, and volume of the droplets. An infrared camera was used for infrared thermal mapping of the droplet surface. An acoustic high-frequency echography technique was, for the first time, applied to track the alcohol concentration in a binary-solution droplet. Evaporation of pure alcohol droplets was carried out at different values of relative humidity (RH); the evaporation behavior of pure ethanol in particular was notably influenced by the ambient humidity, owing to its hygroscopicity. Evaporation of droplets of water and binary solutions was performed at a temperature of 22 °C and a mean humidity of approximately 50%. The alcohol exhaustion times in the droplets estimated by the acoustic and visual methods were similar for the water-1-butanol mixture; for the water-ethanol mixture, however, the acoustic estimate was longer than the visual one because of residual ethanol at the bottom of the droplet.
Vitali, Rachel V.; Cain, Stephen M.; Zaferiou, Antonia M.; Ojeda, Lauro V.; Perkins, Noel C.
2017-01-01
Three-dimensional rotations across the human knee serve as important markers of knee health and performance in multiple contexts including human mobility, worker safety and health, athletic performance, and warfighter performance. While knee rotations can be estimated using optical motion capture, that method is largely limited to the laboratory and small capture volumes. These limitations may be overcome by deploying wearable inertial measurement units (IMUs). The objective of this study is to present a new IMU-based method for estimating 3D knee rotations and to benchmark the accuracy of the results using an instrumented mechanical linkage. The method employs data from shank- and thigh-mounted IMUs and a vector constraint for the medial-lateral axis of the knee during periods when the knee joint functions predominantly as a hinge. The method is carefully validated using data from high precision optical encoders in a mechanism that replicates 3D knee rotations spanning (1) pure flexion/extension, (2) pure internal/external rotation, (3) pure abduction/adduction, and (4) combinations of all three rotations. Regardless of the movement type, the IMU-derived estimates of 3D knee rotations replicate the truth data with high confidence (RMS error < 4° and correlation coefficient r≥0.94). PMID:28846613
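One ingredient of such IMU methods, reduced to a bare-bones sketch (assumptions: gyroscope rates in rad/s and a known medial-lateral hinge axis expressed in each sensor frame): projecting the relative angular velocity onto the hinge axis and integrating yields the flexion/extension angle.

```python
import numpy as np

def flexion_angle(gyro_thigh, gyro_shank, axis_thigh, axis_shank, dt):
    """Integrate the hinge-axis component of the relative angular rate.
    gyro_*: (N, 3) angular rates; axis_*: unit hinge axis in each frame."""
    rate = gyro_shank @ axis_shank - gyro_thigh @ axis_thigh
    return np.cumsum(rate) * dt           # angle in radians over time

# Synthetic check: a 1 Hz flexion oscillation about a fixed z axis.
t = np.arange(0.0, 2.0, 0.01)
gyro_shank = np.column_stack([0 * t, 0 * t, np.cos(2 * np.pi * t)])
gyro_thigh = np.zeros_like(gyro_shank)
axis = np.array([0.0, 0.0, 1.0])
angle = flexion_angle(gyro_thigh, gyro_shank, axis, axis, dt=0.01)
print(angle.max())   # ~ 1/(2*pi) rad amplitude
```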
Sterilization by pure oxygen plasma and by oxygen-hydrogen peroxide plasma: an efficacy study.
Boscariol, M R; Moreira, A J; Mansano, R D; Kikuchi, I S; Pinto, T J A
2008-04-02
Plasma is an innovative sterilization method characterized by low toxicity to operators and patients and by operation at temperatures close to room temperature. The use of different parameters for this method of sterilization and the corresponding results were analyzed in this study. A low-pressure inductive discharge was used to study the plasma sterilization processes. Oxygen and a mixture of oxygen and hydrogen peroxide were used as plasma source gases. The efficacy of the processes using different combinations of parameters such as plasma-generation method, type of gas, pressure, gas flow rate, temperature, power, and exposure time was evaluated. Two phases were developed for the processes, one using pure oxygen and the other a mixture of gases. Bacillus subtilis var. niger ATCC 9372 (Bacillus atrophaeus) spores inoculated on glass coverslips were used as biological indicators to evaluate the efficacy of the processes. All cycles were carried out in triplicate for different sublethal exposure times to calculate the D value by the enumeration method. The pour-plate technique was used to quantify the spores. D values of between 3 and 8 min were obtained. The best results were achieved at high power levels (350 and 400 W) using pure oxygen, showing that plasma sterilization is a promising alternative to other sterilization methods.
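The D value from the enumeration method is the exposure time producing a one-log reduction in survivors; a minimal sketch with invented counts (not the study's data):

```python
import numpy as np

# Sublethal exposure times (min) and surviving spore counts (CFU), synthetic.
t = np.array([0.0, 2.0, 4.0, 6.0])
survivors = np.array([1.0e6, 4.0e5, 1.5e5, 6.0e4])

# Fit log10(survivors) vs. time; D is the time for one log10 reduction.
slope, intercept = np.polyfit(t, np.log10(survivors), 1)
D = -1.0 / slope
print(f"D value: {D:.1f} min")
```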
Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks
Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.
2015-01-01
Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406
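To illustrate the contrast the study draws (a toy sketch with invented parameters, not the published ABM), compare purely random sprout-site selection with a rule-informed choice biased by a local stimulus field:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 50
vegf = np.linspace(0.0, 1.0, n_cells)   # hypothetical exogenous cue field

# Purely stochastic: every endothelial cell is an equally likely tip cell.
random_pick = rng.integers(n_cells)

# Rule-informed: selection probability weighted by the local cue, with
# lateral inhibition suppressing the chosen cell's immediate neighbors.
p = vegf / vegf.sum()
tip = rng.choice(n_cells, p=p)
inhibited = {max(0, tip - 1), min(n_cells - 1, tip + 1)}
print(random_pick, tip, inhibited)
```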
NASA Astrophysics Data System (ADS)
Abdel-Ghany, Maha F.; Hussein, Lobna A.; Ayad, Miriam F.; Youssef, Menatallah M.
2017-01-01
New, simple, accurate and sensitive UV spectrophotometric and chemometric methods have been developed and validated for the determination of Entacapone (ENT), Levodopa (LD) and Carbidopa (CD) in ternary mixture. Method A is a derivative ratio spectra zero-crossing spectrophotometric method which allows the determination of ENT in the presence of both LD and CD by measuring the peak amplitude at 249.9 nm in the range of 1-20 μg mL(-1). Method B is a double divisor-first derivative of ratio spectra method, used for determination of ENT, LD and CD at 245, 239 and 293 nm, respectively. Method C is a mean centering of ratio spectra method which allows their determination at 241, 241.6 and 257.1 nm, respectively. Methods B and C could successfully determine the studied drugs in concentration ranges of 1-20 μg mL(-1) for ENT and 10-90 μg mL(-1) for both LD and CD. Methods D and E are principal component regression and partial least-squares, respectively, used for the simultaneous determination of the studied drugs using seventeen mixtures as a calibration set and eight mixtures as a validation set. The developed methods have the advantage of simultaneous determination of the cited components without any pre-treatment. All the results were statistically compared with the reported methods, where no significant difference was observed. The developed methods were satisfactorily applied to the analysis of the investigated drugs in their pure form and in pharmaceutical dosage forms.
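For the chemometric part (methods D and E), a minimal partial least-squares sketch along these lines, using scikit-learn and synthetic spectra (the component count, spectra and concentration design are assumptions, not the paper's data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_mix, n_wl = 17, 120                          # 17 calibration mixtures
conc = np.column_stack([rng.uniform(1, 20, n_mix),     # ENT
                        rng.uniform(10, 90, n_mix),    # LD
                        rng.uniform(10, 90, n_mix)])   # CD
pure = rng.random((3, n_wl))                   # stand-in pure-component spectra
spectra = conc @ pure + rng.normal(scale=0.01, size=(n_mix, n_wl))

pls = PLSRegression(n_components=3).fit(spectra, conc)
test = np.array([[10.0, 50.0, 50.0]]) @ pure   # an unseen mixture
print(pls.predict(test))                       # recovered concentrations
```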
Scott, Jennifer L; Dawkins, Sarah; Quinn, Michael G; Sanderson, Kristy; Elliott, Kate-Ellen J; Stirling, Christine; Schüz, Ben; Robinson, Andrew
2016-08-01
Face-to-face delivery of CBT is not always optimal or practical for informal dementia carers (DCs). Technology-based formats of CBT delivery (TB-CBT) have been developed with the aim of improving client engagement and accessibility and lowering delivery costs, and they offer potential benefits for DCs. However, research on TB-CBT for DCs has maintained a heavy reliance on therapist involvement, and the efficacy of pure TB-CBT interventions for DCs is not currently established. Methods: A systematic review of trials of pure TB-CBT interventions for DCs from 1995 onwards was conducted. PsycINFO, Cochrane Reviews, Scopus and MedLine databases were searched using key terms related to CBT, carers and dementia. Four hundred and forty-two articles were identified, and inclusion/exclusion criteria were applied; studies were only retained if quantitative data were available and there was no active therapist contact. Four articles were retained: two randomized and two waitlist-controlled trials. Methodological and reporting quality was assessed. Meta-analyses were conducted for the outcome measure of caregiver depression. Meta-analysis revealed small but significant post-intervention effects of pure TB-CBT interventions for depression, equivalent to face-to-face interventions. However, there is no evidence regarding the long-term efficacy of pure TB-CBT for DCs. The systematic review further identified critical methodological and reporting shortcomings pertaining to these trials. Conclusions: Pure TB-CBT interventions may offer a convenient, economical method for delivering psychological interventions to DCs. Future research needs to investigate their long-term efficacy and consider potential moderating and mediating factors underpinning the mechanisms of effect of these programs. This will help to provide more targeted interventions to this underserved population.
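The depression meta-analysis pools per-trial effects; a generic inverse-variance fixed-effect sketch (invented effect sizes, four trials to mirror the review's count):

```python
import numpy as np

# Hypothetical standardized mean differences and their variances.
d = np.array([-0.30, -0.15, -0.25, -0.10])
var = np.array([0.04, 0.06, 0.05, 0.08])

w = 1.0 / var                           # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled d = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI half-width)")
```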
Wang, Weidong; Bai, Liwen; Yang, Chenguang; Fan, Kangqi; Xie, Yong; Li, Minglin
2018-01-31
Based on density functional theory (DFT), the electronic properties of O-doped pure and sulfur vacancy-defect monolayer WS₂ are investigated using the first-principles method. For O-doped pure monolayer WS₂, four supercell sizes (2 × 2 × 1, 3 × 3 × 1, 4 × 4 × 1 and 5 × 5 × 1) are considered to probe the effects of O doping concentration on the electronic structure. For the 2 × 2 × 1 supercell with 12.5% O doping concentration, the band gap of O-doped pure WS₂ is reduced by 8.9% and becomes indirect. The band gaps in the 3 × 3 × 1 and 4 × 4 × 1 supercells are both opened to some extent, for 5.55% and 3.13% O doping concentrations respectively, while the band gap in the 5 × 5 × 1 supercell with 2.0% O doping concentration is quite close to that of pure monolayer WS₂. Then, two typical point defects, the sulfur single vacancy (V_S) and the sulfur divacancy (V_2S), are introduced to probe the influence of O doping on the electronic properties of WS₂ monolayers. The DFT calculations show that O doping can broaden the band gap of monolayer WS₂ with the V_S defect to a certain degree, but narrows the band gap of monolayer WS₂ with the V_2S defect. Doping O into either pure or sulfur vacancy-defect monolayer WS₂ does not change the band gap significantly; however, it can still be regarded as a potential method for slightly tuning the electronic properties of monolayer WS₂.
Wu, Xiu-Jun; Zhang, Meng-Liang; Cui, Xiang-Yong; Gao, Feng; He, Qun; Li, Xiao-Jiao; Zhang, Ji-Wen; Fawcett, J Paul; Gu, Jing-Kai
2012-01-06
Escin Ia and isoescin Ia have traditionally been used clinically as the chief active ingredients of escin, a major triterpene saponin isolated from horse chestnut (Aesculus hippocastanum) seeds, for the treatment of chronic venous insufficiency, hemorrhoids, inflammation and edema. The aims were to establish a sensitive LC-MS/MS method, to investigate the pharmacokinetic properties of escin Ia and isoescin Ia in rats, and to compare the pharmacokinetics of sodium escinate with those of pure escin Ia and isoescin Ia. The absolute bioavailability of escin Ia and isoescin Ia and their bidirectional interconversion in vivo had also scarcely been reported. Wistar rats were administered an intravenous (i.v.) dose (1.7 mg/kg) of sodium escinate (corresponding to 0.5 mg/kg of escin Ia and 0.5 mg/kg of isoescin Ia, respectively) and an i.v. dose (0.5 mg/kg) or oral dose (4 mg/kg) of pure escin Ia or isoescin Ia, respectively. At different time points, the concentrations of escin Ia and isoescin Ia in rat plasma were determined by the LC-MS/MS method. Main pharmacokinetic parameters including t(1/2), MRT, CL, V(d), AUC and F were estimated by non-compartmental analysis using the TopFit 2.0 software package (Thomae GmbH, Germany), and statistical analysis was performed using Student's t-test with P<0.05 as the level of significance. After administration of sodium escinate, the t(1/2) and MRT values for both escin Ia and isoescin Ia were larger than the corresponding values for the compounds given alone. Absorption of escin Ia and isoescin Ia was very low, with F values both <0.25%. Escin Ia and isoescin Ia were each found to form the other isomer in vivo, with the conversion of escin Ia to isoescin Ia being much more extensive than that from isoescin Ia to escin Ia. Comparison of the pharmacokinetics of escin Ia and isoescin Ia given alone and together in rats suggests that administration of herbal preparations of escin for clinical use may provide a longer duration of action than administration of single isomers. The interconversion of escin Ia and isoescin Ia when given alone indicates that administration of one isomer leads to exposure to the other. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Background controlled QTL mapping in pure-line genetic populations derived from four-way crosses
Zhang, S; Meng, L; Wang, J; Zhang, L
2017-01-01
Pure lines derived from multiple parents are becoming more important because of the increased genetic diversity, the possibility to conduct replicated phenotyping trials in multiple environments and potentially high mapping resolution of quantitative trait loci (QTL). In this study, we proposed a new mapping method for QTL detection in pure-line populations derived from four-way crosses, which is able to control the background genetic variation through a two-stage mapping strategy. First, orthogonal variables were created for each marker and used in an inclusive linear model, so as to completely absorb the genetic variation in the mapping population. Second, inclusive composite interval mapping approach was implemented for one-dimensional scanning, during which the inclusive linear model was employed to control the background variation. Simulation studies using different genetic models demonstrated that the new method is efficient when considering high detection power, low false discovery rate and high accuracy in estimating quantitative trait loci locations and effects. For illustration, the proposed method was applied in a reported wheat four-way recombinant inbred line population. PMID:28722705
Discrete-continuous variable structural synthesis using dual methods
NASA Technical Reports Server (NTRS)
Schmit, L. A.; Fleury, C.
1980-01-01
Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.
Alternative Chemical Cleaning Methods for High Level Waste Tanks: Simulant Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudisill, T.; King, W.; Hay, M.
Solubility testing with simulated High Level Waste tank heel solids has been conducted in order to evaluate two alternative chemical cleaning technologies for the dissolution of sludge residuals remaining in the tanks after the exhaustion of mechanical cleaning and sludge washing efforts. Tests were conducted with non-radioactive pure-phase metal reagents, binary mixtures of reagents, and a Savannah River Site PUREX heel simulant to determine the effectiveness of an optimized, dilute oxalic/nitric acid cleaning reagent and pure, dilute nitric acid toward dissolving the bulk non-radioactive waste components. A focus of this testing was on minimization of oxalic acid additions during tank cleaning. For comparison purposes, separate samples were also contacted with pure, concentrated oxalic acid, which is the current baseline chemical cleaning reagent. In a separate study, solubility tests were conducted with radioactive tank heel simulants using acidic and caustic permanganate-based methods focused on the "targeted" dissolution of actinide species known to be drivers for Savannah River Site tank closure Performance Assessments. Permanganate-based cleaning methods were evaluated prior to and after oxalic acid contact.
Caffeine: a potential complexing agent for solubility and dissolution enhancement of celecoxib.
Shakeel, Faiyaz; Faisal, Mohammed S
2010-01-01
Complexation of caffeine with the drug celecoxib was used in the present investigation to enhance its solubility as well as its in vitro dissolution. Caffeine was extracted from tea leaves using the sublimation method. A molecular complex (1:1) of caffeine-celecoxib was prepared using the solubility method. The solubility of celecoxib in distilled water and in the caffeine complex was determined using an HPLC method at a wavelength of 250 nm. Dissolution studies of pure celecoxib, a marketed capsule (Celebrex), and the complex were performed in distilled water using USP dissolution apparatus I for pure celecoxib and the complex and apparatus II for the capsule. The highest solubility (48.32 mg/mL) as well as percent dissolution (90.54%) of celecoxib was obtained with the caffeine-celecoxib complex. The results for solubility and dissolution were highly significant as compared to pure celecoxib and the marketed capsule (p < 0.01). These results suggest that caffeine is a promising complexing agent for solubility as well as dissolution enhancement of the poorly soluble drug celecoxib.
NASA Astrophysics Data System (ADS)
Yoo, Jinwon; Choi, Yujun; Cho, Young-Wook; Han, Sang-Wook; Lee, Sang-Yun; Moon, Sung; Oh, Kyunghwan; Kim, Yong-Su
2018-07-01
We present a detailed method to prepare and characterize four-dimensional pure quantum states or ququarts using polarization and time-bin modes of a single-photon. In particular, we provide a simple method to generate an arbitrary pure ququart and fully characterize the state with quantum state tomography. We also verify the reliability of the recipe by showing experimental preparation and characterization of 20 ququart states in mutually unbiased bases. As qudits provide superior properties over qubits in many fundamental tests of quantum physics and applications in quantum information processing, the presented method will be useful for photonic quantum information science.
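As a toy illustration of the encoding (not the experimental recipe), a ququart can be formed as the tensor product of polarization and time-bin qubit amplitudes, and the fidelity of a prepared state against its target computed directly:

```python
import numpy as np

# Basis ordering: |H,early>, |H,late>, |V,early>, |V,late>.
def ququart(pol, timebin):
    """Pure ququart from polarization and time-bin qubit amplitudes."""
    return np.kron(np.asarray(pol, complex), np.asarray(timebin, complex))

target = ququart(np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2))
noise = np.random.default_rng(7).normal(size=4)      # synthetic imperfection
prepared = target + 0.05 * noise
prepared /= np.linalg.norm(prepared)

fidelity = abs(np.vdot(target, prepared)) ** 2       # |<target|prepared>|^2
print(f"fidelity = {fidelity:.4f}")
```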
NASA Technical Reports Server (NTRS)
Nicolaescu, I. I.
1974-01-01
Using echo pulse and resonance rod methods, internal friction in pure aluminum was studied as a function of frequency, hardening temperature, time (internal friction relaxation) and impurity content. These studies led to the conclusion that internal friction in these materials depends strongly on the dislocation structure and on elastic interactions between structure defects. It was found experimentally that internal friction relaxation depends on the cooling rate and on the impurity content. Some parameters of the dislocation structure and of the diffusion process were determined. It is shown that the dislocation dependence of internal friction can be used as a method of nondestructive testing of the impurity content of high-purity materials.
Pure Gaussian state generation via dissipation: a quantum stochastic differential equation approach.
Yamamoto, Naoki
2012-11-28
Recently, the complete characterization of a general Gaussian dissipative system having a unique pure steady state was obtained. This result provides a clear guideline for engineering an environment such that the dissipative system has a desired pure steady state such as a cluster state. In this paper, we describe the system in terms of a quantum stochastic differential equation (QSDE) so that the environment channels can be explicitly dealt with. Then, a physical meaning of that characterization, which cannot be seen without the QSDE representation, is clarified; more specifically, the nullifier dynamics of any Gaussian system generating a unique pure steady state is passive. In addition, again based on the QSDE framework, we provide a general and practical method to implement a desired dissipative Gaussian system, which has a structure of quantum state transfer.
Taha, Elham Anwer; Salama, Nahla Nour; Fattah, Laila El-Sayed Abdel
2006-05-01
Two sensitive and selective spectrofluorimetric and spectrophotometric stability-indicating methods have been developed for the determination of some non-steroidal anti-inflammatory oxicam derivatives, namely lornoxicam (Lx), tenoxicam (Tx) and meloxicam (Mx), after their complete alkaline hydrolysis. The methods are based on derivatization of the alkaline hydrolysis products with 7-chloro-4-nitrobenz-2-oxa-1,3-diazole (NBD-Cl). The products showed an absorption maximum at 460 nm for the three studied drugs and a fluorescence emission peak at 535 nm in methanol. The color was stable for at least 48 h. The optimum conditions of the reaction were investigated and it was found that the reaction proceeds quantitatively at pH 8 after heating in a boiling water bath for 30 min. The methods were found to be linear in the ranges of 1-10 microg ml(-1) for Lx and Tx and 0.5-4.0 microg ml(-1) for Mx for the spectrophotometric method, and 0.05-1.0 microg ml(-1) for Lx and Tx and 0.025-0.4 microg ml(-1) for Mx for the spectrofluorimetric method. The validity of the methods was assessed according to USP guidelines. Statistical analysis of the results revealed high accuracy and good precision. The suggested procedures could be used for the determination of the above-mentioned drugs in pure form and in dosage forms, as well as in the presence of their degradation products.
NASA Astrophysics Data System (ADS)
Salem, A. A.; Barsoum, B. N.; Izake, E. L.
2004-03-01
New spectrophotometric and fluorimetric methods have been developed to determine diazepam, bromazepam and clonazepam (1,4-benzodiazepines) in pure forms, pharmaceutical preparations and biological fluids. The new methods are based on measuring absorption or emission spectra in methanolic potassium hydroxide solution. The fluorimetric methods proved selective with low detection limits, whereas the photometric methods showed relatively high detection limits. The developed methods were successfully applied to drug determination in pharmaceutical preparations and urine samples. The photometric methods gave linear calibration graphs in the ranges of 2.85-28.5, 0.316-3.16, and 0.316-3.16 μg ml(-1) with detection limits of 1.27, 0.08 and 0.13 μg ml(-1) for diazepam, bromazepam and clonazepam, respectively. Corresponding average errors of 2.60, 5.26 and 3.93 and relative standard deviations (R.S.D.s) of 2.79, 2.12 and 2.83, respectively, were obtained. The fluorimetric methods gave linear calibration graphs in the ranges of 0.03-0.34, 0.03-0.32 and 0.03-0.38 μg ml(-1) with detection limits of 7.13, 5.67 and 16.47 ng ml(-1) for diazepam, bromazepam and clonazepam, respectively. Corresponding average errors of 0.29, 4.33 and 5.42 and R.S.D.s of 1.27, 1.96 and 1.14 were obtained, respectively. Student's t-test and the F-test were used for statistical comparison, and satisfactory results were obtained.
Hassan, Wafaa El-Sayed
2008-08-01
Three rapid, simple, reproducible and sensitive extractive colorimetric methods (A-C) for assaying dothiepin hydrochloride (I) and risperidone (II) in bulk samples and in dosage forms were investigated. Methods A and B are based on the formation of ion-pair complexes with methyl orange (A) and orange G (B), whereas method C depends on ternary complex formation between cobalt thiocyanate and the studied drug I or II. The optimum reaction conditions were investigated, and it was observed that the calibration curves resulting from the measurements of absorbance-concentration relations of the extracted complexes were linear over the concentration ranges 0.1-12 microg ml(-1) for method A, 0.5-11 microg ml(-1) for method B, and 3.2-80 microg ml(-1) for method C, with relative standard deviations (RSD) of 1.17 and 1.28 for drugs I and II, respectively. The molar absorptivity, Sandell sensitivity, Ringbom optimum concentration ranges, and detection and quantification limits for all complexes were calculated and evaluated at maximum wavelengths of 423, 498, and 625 nm for methods A, B, and C, respectively. The interference from excipients commonly present in dosage forms and from common degradation products was studied. The proposed methods are highly specific for the determination of drugs I and II in their dosage forms, applying the standard additions technique without any interference from common excipients. The proposed methods have been compared statistically to the reference methods and found to be simple, accurate (t-test) and reproducible (F-value).
Phase formation and UV luminescence of Gd3+ doped perovskite-type YScO3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimizu, Yuhei; Ueda, Kazushige, E-mail: kueda@che.kyutech.ac.jp
Synthesis of pure and Gd3+ doped perovskite-type YScO3 was attempted by a polymerized complex (PC) method and a solid state reaction (SSR) method. Crystalline phases and UV luminescence of the samples were examined with varying heating temperatures. The perovskite-type single phase was not simply formed in the SSR method, as reported in the literature, and two cubic C-type phases of the starting oxide materials remained, forming only slightly mixed solid solutions. UV luminescence of Gd3+ doped samples increased with an increase in heating temperature and in the volume of the perovskite-type phase. In contrast, in the PC method a non-crystalline precursor was crystallized to a single C-type phase at 800 °C, forming a completely mixed solid solution. The perovskite-type YScO3 phase then formed at 1200 °C and its single phase was obtained at 1400 °C. It was revealed that high homogeneity of the cations was essential to generate the single perovskite phase of YScO3. Because Gd3+ ions were also dissolved into the single C-type phase in Gd3+ doped samples, intense UV luminescence was observed above 800 °C in both the C-type phase and the perovskite-type phase. - Graphical abstract: A pure perovskite-type YScO3 phase was successfully synthesized by a polymerized complex (PC) method. The perovskite-type YScO3 was generated through a solid solution of C-type (Y0.5Sc0.5)2O3 with a drastic change of morphology. The PC method enabled preparation of the single phase of perovskite-type YScO3 at lower temperature and in shorter heating time. Gd3+ doped perovskite-type YScO3 was found to show a strong, sharp UV emission at 314 nm. - Highlights: • Pure YScO3 phase was successfully synthesized by the polymerized complex (PC) method. • The pure perovskite-type YScO3 phase was generated from a pure C-type (Y0.5Sc0.5)2O3 phase. • YScO3 was obtained at lower temperature and in shorter heating time by the PC method. • Perovskite-type YScO3:Gd3+ was found to show a strong, sharp UV emission at 314 nm.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Molecular vibrational energy flow
NASA Astrophysics Data System (ADS)
Gruebele, M.; Bigwood, R.
This article reviews some recent work in molecular vibrational energy flow (IVR), with emphasis on our own computational and experimental studies. We consider the problem in various representations, and use these to develop a family of simple models which combine specific molecular properties (e.g. size, vibrational frequencies) with statistical properties of the potential energy surface and wavefunctions. This marriage of molecular detail and statistical simplification captures trends of IVR mechanisms and survival probabilities beyond the abilities of purely statistical models or the computational limitations of full ab initio approaches. Of particular interest is IVR in the intermediate time regime, where heavy-atom skeletal modes take over the IVR process from hydrogenic motions even upon X-H bond excitation. Experiments and calculations on prototype heavy-atom systems show that intermediate-time IVR differs in many aspects from the early stages of hydrogenic-mode IVR. As a result, IVR can be coherently frozen, with potential applications to selective chemistry.
Loop models, modular invariance, and three-dimensional bosonization
NASA Astrophysics Data System (ADS)
Goldman, Hart; Fradkin, Eduardo
2018-05-01
We consider a family of quantum loop models in 2+1 spacetime dimensions with marginally long-ranged and statistical interactions mediated by a U(1) gauge field, both purely in 2+1 dimensions and on a surface in a (3+1)-dimensional bulk system. In the absence of fractional spin, these theories have been shown to be self-dual under particle-vortex duality and shifts of the statistical angle of the loops by 2π, which form a subgroup of the modular group, PSL(2, Z). We show that careful consideration of fractional spin in these theories completely breaks their statistical periodicity and describe how this occurs, resolving a disagreement with the conformal field theories they appear to approach at criticality. We show explicitly that incorporation of fractional spin leads to loop model dualities which parallel the recent web of (2+1)-dimensional field theory dualities, providing a nontrivial check on its validity.
NASA Astrophysics Data System (ADS)
Brizzi, S.; Sandri, L.; Funiciello, F.; Corbi, F.; Piromallo, C.; Heuret, A.
2018-03-01
The observed maximum magnitude of subduction megathrust earthquakes is highly variable worldwide. One key question is which conditions, if any, favor the occurrence of giant earthquakes (Mw ≥ 8.5). Here we carry out a multivariate statistical study in order to investigate the factors affecting the maximum magnitude of subduction megathrust earthquakes. We find that the trench-parallel extent of subduction zones and the thickness of trench sediments provide the largest discriminating capability between subduction zones that have experienced giant earthquakes and those having significantly lower maximum magnitude. Monte Carlo simulations show that the observed spatial distribution of giant earthquakes cannot be explained by pure chance to a statistically significant level. We suggest that the combination of a long subduction zone with thick trench sediments likely promotes a great lateral rupture propagation, characteristic of almost all giant earthquakes.
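A sketch of the kind of Monte Carlo test described, on synthetic data (the real study uses observed subduction-zone properties): is the mean trench-parallel length of zones hosting giant earthquakes larger than random label assignment would produce?

```python
import numpy as np

rng = np.random.default_rng(3)
length = rng.uniform(500, 7000, size=40)             # trench-parallel extents (km)
giant = length + rng.normal(0, 800, size=40) > 4000  # toy "giant quake" labels

obs = length[giant].mean() - length[~giant].mean()
null = []
for _ in range(10000):
    perm = rng.permutation(giant)                    # reshuffle the labels
    null.append(length[perm].mean() - length[~perm].mean())
p = np.mean(np.asarray(null) >= obs)
print(f"observed difference = {obs:.0f} km, one-sided p = {p:.4f}")
```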
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darryl P. Butt; Brian Jaques
Research conducted for this NERI project has advanced the understanding and feasibility of nitride nuclear fuel processing. In order to perform this research, necessary laboratory infrastructure was developed, including basic facilities and experimental equipment. Notable accomplishments from this project include: the synthesis of uranium, dysprosium, and cerium nitrides using a novel, low-cost mechanical method at room temperature; the synthesis of phase-pure UN, DyN, and CeN using thermal methods; and the sintering of UN and (Ux,Dy1-x)N (0.7 ≤ x ≤ 1) pellets from phase-pure powder that was synthesized in the Advanced Materials Laboratory at Boise State University.
Synthesis of LiMn1.9Ti0.09Si0.01O4 by self-propagating combustion method
NASA Astrophysics Data System (ADS)
Abdullah, Amzar Ahlami; Kamarulzaman, Norlida; Badar, Nurhanna; Aziz, Nor Diyana Abdul
2017-09-01
Cathode materials have been an essential area of research for many decades. In this work, a novel spinel cathode, LiMn1.9Ti0.09Si0.01O4, was prepared via a combustion method using citric acid as a reductant. The objective was to obtain a pure, single-phase, cubic-structured material. The precursors obtained were annealed at 600, 700 and 800 °C for 24 hours. The annealed materials were characterized by thermal profiling and X-ray diffraction. A pure, single-phase material was obtained.
Average fidelity between random quantum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5
2005-03-01
We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
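For concreteness, a numerical sketch of the mean fidelity for Hilbert-Schmidt-distributed mixed states (the standard Ginibre construction; the dimension and sample size here are arbitrary choices):

```python
import numpy as np
from scipy.linalg import sqrtm

def random_hs_state(n, rng):
    """Random density matrix from the Hilbert-Schmidt measure:
    rho = G G^dag / tr(G G^dag) with G a complex Ginibre matrix."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

rng = np.random.default_rng(5)
n = 4
vals = [fidelity(random_hs_state(n, rng), random_hs_state(n, rng))
        for _ in range(200)]
print(f"mean fidelity (N={n}): {np.mean(vals):.3f}")
```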
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
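The statistical core is Monte Carlo estimation of a hitting probability with a Bayesian posterior; a sketch under a Beta(1, 1) prior (the sampled program and target event are invented stand-ins):

```python
import numpy as np
from scipy.stats import beta

def path_hits_event(rng):
    """Stand-in for executing one sampled program path and checking
    whether it reaches the target event (e.g., an assert violation)."""
    x, y = rng.uniform(0, 100, size=2)
    return x > 90 and y > 50            # hypothetical guard conditions

rng = np.random.default_rng(11)
n = 2000
hits = sum(path_hits_event(rng) for _ in range(n))
post = beta(1 + hits, 1 + n - hits)     # Beta(1,1) prior, binomial likelihood
lo, hi = post.interval(0.95)
print(f"P(event) ~ {post.mean():.4f}, 95% credible interval [{lo:.4f}, {hi:.4f}]")
```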
Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu
2010-01-01
Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, there are very limited applicable implementation procedures that can be followed by a laboratory chemist in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC.
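A compact sketch of the regression-based checks such a linearity assessment involves (synthetic calibration data; the design and any acceptance criteria here are illustrative, not values mandated by the guidelines):

```python
import numpy as np
from scipy import stats

conc = np.repeat([1.0, 2.0, 4.0, 8.0, 16.0], 3)    # replicated design
rng = np.random.default_rng(2)
signal = 0.95 * conc + 0.1 + rng.normal(0, 0.05, conc.size)

res = stats.linregress(conc, signal)
residuals = signal - (res.slope * conc + res.intercept)

print(f"slope={res.slope:.3f}, intercept={res.intercept:.3f}, r^2={res.rvalue**2:.4f}")
# Residuals should scatter randomly around zero across the range; any
# trend or curvature argues against a linear calibration model.
print("max |residual|:", np.abs(residuals).max())
```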
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
Chemical potential of quasi-equilibrium magnon gas driven by pure spin current.
Demidov, V E; Urazhdin, S; Divinskiy, B; Bessonov, V D; Rinkevich, A B; Ustinov, V V; Demokritov, S O
2017-11-17
Pure spin currents provide the possibility to control the magnetization state of conducting and insulating magnetic materials. They allow one to increase or reduce the density of magnons, and achieve coherent dynamic states of magnetization reminiscent of the Bose-Einstein condensation. However, until now there was no direct evidence that the state of the magnon gas subjected to spin current can be treated thermodynamically. Here, we show experimentally that the spin current generated by the spin-Hall effect drives the magnon gas into a quasi-equilibrium state that can be described by the Bose-Einstein statistics. The magnon population function is characterized either by an increased effective chemical potential or by a reduced effective temperature, depending on the spin current polarization. In the former case, the chemical potential can closely approach, at large driving currents, the lowest-energy magnon state, indicating the possibility of spin current-driven Bose-Einstein condensation.
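The quoted description amounts to fitting the measured magnon populations with a Bose-Einstein occupation whose effective chemical potential or temperature shifts with the drive; a one-function model sketch (symbols and units are generic assumptions):

```python
import numpy as np

def bose_einstein(e, mu, kT):
    """Bose-Einstein occupation n(e) = 1 / (exp((e - mu)/kT) - 1).
    The spin current shifts the effective mu (or kT) in such fits."""
    return 1.0 / np.expm1((e - mu) / kT)

e = np.linspace(1.0, 5.0, 5)     # magnon energies, arbitrary units
print(bose_einstein(e, mu=0.9, kT=1.0))
```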
On-chip low loss heralded source of pure single photons.
Spring, Justin B; Salter, Patrick S; Metcalf, Benjamin J; Humphreys, Peter C; Moore, Merritt; Thomas-Peter, Nicholas; Barbieri, Marco; Jin, Xian-Min; Langford, Nathan K; Kolthammer, W Steven; Booth, Martin J; Walmsley, Ian A
2013-06-03
A key obstacle to the experimental realization of many photonic quantum-enhanced technologies is the lack of low-loss sources of single photons in pure quantum states. We demonstrate a promising solution: generation of heralded single photons in a silica photonic chip by spontaneous four-wave mixing. A heralding efficiency of 40%, corresponding to a preparation efficiency of 80% accounting for detector performance, is achieved due to efficient coupling of the low-loss source to optical fibers. A single photon purity of 0.86 is measured from the source number statistics without narrow spectral filtering, and confirmed by direct measurement of the joint spectral intensity. We calculate that similar high-heralded-purity output can be obtained from visible to telecom spectral regions using this approach. On-chip silica sources can have immediate application in a wide range of single-photon quantum optics applications which employ silica photonics.
Acidity of fine sulfate particles at Great Smoky Mountains National Park
DOE Office of Scientific and Technical Information (OSTI.GOV)
Day, D.; Malm, W.C.; Kreidenweis, S.
1995-12-31
The acidity of ambient particles is of interest from the perspectives of human health, visibility, and ecology. This paper reports on the acidity of fine (< 2.5 μm) particles measured during August 1994 at the Look Rock observation tower in Great Smoky Mountains National Park. This site is located at latitude 35° 37' 56", longitude 83° 56' 32", at an elevation of 808 m above sea level. All samples were collected using the IMPROVE (Interagency Monitoring of Protected Visual Environments) sampler. The sampling periods included: (1) 4-hour samples collected three times daily with starting times of 8:00 AM, 12:00 noon, and 4:00 PM; (2) 12-hour samples collected twice daily with starting times of 8:00 AM and 8:00 PM (all times reported are eastern daylight saving time). The IMPROVE sampler collecting 4-hour samples employed a citric acid/glycerol-coated annular denuder to remove ammonia gas, while the 12-hour sampler did not use a citric acid denuder. The intensive monitoring effort, conducted during August 1994, showed that: (1) the fine aerosol mass is generally dominated by sulfate and its associated water; (2) there was no statistically significant difference in average sulfate concentration between the 12-hour samples, nor was there a statistically significant difference in average sulfate concentration between the 4-hour samples; (3) the aerosol is highly acidic, ranging from almost pure sulfuric acid to pure ammonium bisulfate, with an average molar ammonium ion to sulfate ratio of about 0.75, which suggests the ambient sulfate aerosol was a mixture of ammonium bisulfate and sulfuric acid; and (4) there was no statistically significant diurnal variation in particle acidity, nor was there a statistically significant difference in particle acidity between the 4-hour samples.
Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E
2015-01-01
Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles' arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. We also show that including the fusion-time statistics in our model does not produce any significant changes on the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and reflected in the amperometric signals. Our results confirm the conclusions of previous imaging studies performed on single vesicles that vesicles' motion underneath plasma membranes is not purely random, but biased towards the membrane.
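A sketch of the distributional comparison described, on synthetic interspike times (scipy's fitters with the location parameter pinned to zero):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
isi = rng.gamma(shape=2.5, scale=0.4, size=1000)   # synthetic interspike times

# Fit gamma and exponential (Poisson-process) models, location fixed at 0.
a, loc, scale = stats.gamma.fit(isi, floc=0)
loc_e, scale_e = stats.expon.fit(isi, floc=0)

ll_gamma = stats.gamma.logpdf(isi, a, loc, scale).sum()
ll_expon = stats.expon.logpdf(isi, loc_e, scale_e).sum()
print(f"gamma shape = {a:.2f}; log-likelihood gap vs. Poisson = {ll_gamma - ll_expon:.1f}")
```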
Puskás, S; Bessenyei, M; Fekete, I; Hollódy, K; Clemens, B
2010-09-01
Epileptic predisposition means genetically determined, increased seizure susceptibility. Neurophysiological evaluation of this condition is still lacking. In order to investigate "pure epileptic predisposition" (without epilepsy), in this pilot study the authors prospectively recruited ten persons who displayed generalized tonic-clonic seizures precipitated by 24 or more hours of sleep deprivation but were healthy in all other respects. 21-channel EEGs were recorded in the morning, in the waking state, after a night of sufficient sleep in the interictal period. For each person, a total of 120 s of artifact-free EEG was processed to low resolution electromagnetic tomography (LORETA) analysis. LORETA activity (amperes per square meter) was computed for 2394 voxels, 19 active electrodes and 1-Hz very narrow bands from 1 to 25 Hz. The data were compressed into four frequency bands (delta: 0.5-4.0 Hz, theta: 4.5-8.0 Hz, alpha: 8.5-12.0 Hz, beta: 12.5-25.0 Hz) and projected onto the MRI figures of a digitized standard brain atlas. The band-related LORETA results were compared to those of ten age- and sex-matched healthy persons using independent t-tests; differences at p < 0.01 were accepted as statistically significant. A statistically significant decrease of alpha activity was found in widespread medial and lateral parts of the cortex above the level of the basal ganglia. The maximum alpha decrease and a statistically significant beta decrease were found in the left precuneus. Differences that did not reach statistical significance were a delta increase in the medial-basal frontal area and a theta increase in the same area and in the basal temporal area. The significance of the alpha decrease in the patient group remains enigmatic. The beta decrease presumably reflects a non-specific dysfunction of the cortex. The prefrontal delta and theta increases might have biological meaning despite the lack of statistical significance: these findings are topographically similar to those reported in idiopathic generalized epilepsy in previous investigations. Quantitative EEG characteristics of the genetically determined epilepsy predisposition were given in terms of frequency bands and anatomical distribution. Copyright 2010 Elsevier B.V. All rights reserved.
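A simplified sketch of the band-wise group comparison step (synthetic per-subject band activity; the actual pipeline compares LORETA current density voxel by voxel at p < 0.01):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Alpha-band activity for 10 patients and 10 matched controls (synthetic).
patients = rng.normal(loc=0.8, scale=0.2, size=10)
controls = rng.normal(loc=1.0, scale=0.2, size=10)

t, p = stats.ttest_ind(patients, controls)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.01: {p < 0.01}")
```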
Mesoscale Fracture Analysis of Multiphase Cementitious Composites Using Peridynamics
Yaghoobi, Amin; Chorzepa, Mi G.; Kim, S. Sonny; Durham, Stephan A.
2017-01-01
Concrete is a complex heterogeneous material, and thus, it is important to develop numerical modeling methods to enhance the prediction accuracy of the fracture mechanism. In this study, a two-dimensional mesoscale model is developed using a non-ordinary state-based peridynamic (NOSBPD) method. Fracture in a concrete cube specimen subjected to pure tension is studied. The presence of heterogeneous materials consisting of coarse aggregates, interfacial transition zones, air voids and cementitious matrix is characterized as particle points in a two-dimensional mesoscale model. Coarse aggregates and voids are generated using uniform probability distributions, and a statistical study is provided to assess the effect of random distributions of constituent materials. In obtaining the steady-state response, an incremental and iterative solver is adopted for the dynamic relaxation method. Load-displacement curves and damage patterns are compared with available experimental and finite element analysis (FEA) results. Although the proposed model uses much simpler material damage models and discretization schemes, the load-displacement curves show no difference from the FEA results. Furthermore, no mesh refinement is necessary, as fracture is inherently characterized by bond breakages. Finally, a sensitivity study is conducted to understand the effect of aggregate volume fraction and porosity on the load capacity of the proposed mesoscale model. PMID:28772518
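As a hedged illustration of the aggregate-generation step mentioned above, the following take-and-place sketch samples circular aggregates from uniform distributions and rejects overlaps; the specimen size, radius range and target area fraction are invented for the example and are not the authors' values.

```python
# Take-and-place generation of a 2-D aggregate structure from uniform
# distributions; all dimensions are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
L, target = 100.0, 0.30        # specimen edge length and aggregate area fraction
aggregates, area, attempts = [], 0.0, 0

while area / L**2 < target and attempts < 100_000:
    attempts += 1
    r = rng.uniform(2.0, 8.0)                # radius drawn uniformly
    x, y = rng.uniform(r, L - r, size=2)     # keep the particle inside the specimen
    if all((x - ax)**2 + (y - ay)**2 > (r + ar)**2 for ax, ay, ar in aggregates):
        aggregates.append((x, y, r))         # accept non-overlapping particles only
        area += np.pi * r**2

print(f"{len(aggregates)} aggregates placed, {100 * area / L**2:.1f}% area fraction")
```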
Microbiological assay for the analysis of certain macrolides in pharmaceutical dosage forms.
Mahmoudi, A; Fourar, R E-A; Boukhechem, M S; Zarkout, S
2015-08-01
Clarithromycin (CLA) and roxithromycin (ROX) are macrolide antibiotics with an expanded spectrum of activity that are commercially available as tablets. A microbiological assay, applying the cylinder-plate method and using a strain of Micrococcus luteus ATCC 9341 as test organism, has been used and validated for the quantification of the two macrolide drugs, CLA and ROX, in pure form and in pharmaceutical formulations. The validation of the proposed method was carried out for linearity, precision, accuracy and specificity. The linear dynamic ranges were from 0.1 to 0.5 μg/mL for both compounds. A logarithmic calibration curve was obtained for each macrolide (r>0.989) with statistically equal slopes varying from 3.275 to 4.038, and a percentage relative standard deviation in the range of 0.24-0.92%. Moreover, the method was applied successfully for the assay of the studied drugs in pharmaceutical tablet dosage forms. Recovery from standard addition experiments in commercial products was 94.71-96.91% for clarithromycin and 93.94-98.12% for roxithromycin, with a precision (%RSD) of 1.32-2.11%. Accordingly, this microbiological assay can be used for routine quality control analysis of the titled drugs in tablet formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
Farhadi, Khalil; Bochani, Shayesteh; Hatami, Mehdi; Molaei, Rahim; Pirkharrati, Hossein
2014-07-01
In this research, a new solid-phase microextraction fiber based on carbon ceramic composites with copper nanoparticles, followed by gas chromatography with flame ionization detection, was applied for the extraction and determination of some nitro explosive compounds in soil samples. This work also provides an overview of trends related to the synthesis of solid-phase microextraction sorbents and their applications in the preconcentration and determination of nitro explosives. The sorbents were prepared by mixing copper nanoparticles with a ceramic composite produced from a mixture of methyltrimethoxysilane, graphite, methanol, and hydrochloric acid. The prepared sorbents were coated on copper wires by a dip-coating method. The prepared nanocomposites were evaluated statistically and provided better limits of detection than the pure carbon ceramic. The limit of detection of the proposed method was 0.6 μg/g, with a linear response over the concentration range of 2-160 μg/g and a squared correlation coefficient >0.992. The new proposed fiber has been demonstrated to be a suitable, inexpensive, and sensitive candidate for the extraction of nitro explosive compounds in contaminated soil samples. The constructed fiber can be used more than 100 times without the need for surface regeneration. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Hanif, Muhammad Asif; Nawaz, Haq; Naz, Saima; Mukhtar, Rubina; Rashid, Nosheen; Bhatti, Ijaz Ahmad; Saleem, Muhammad
2017-07-01
In this study, Raman spectroscopy along with principal component analysis (PCA) is used for the characterization of pure essential oil (pure EO) isolated from the leaves of hemp (Cannabis sativa L.) as well as its different fractions obtained by a fractional distillation process. Raman spectra of pure hemp essential oil and its different fractions show characteristic key bands of the main volatile terpenes and terpenoids, which significantly differentiate them from each other. These bands provide information about the chemical composition of the sample under investigation and hence can be used as Raman spectral markers for the qualitative monitoring of the pure EO and of different fractions containing different active compounds. PCA differentiates the Raman spectral data into different clusters, and the loadings of the PCA further confirm the biological origin of the different fractions of the essential oil.
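A minimal sketch of the PCA step described above, assuming the Raman spectra are stored row-wise in a NumPy array; the file name and array shapes are hypothetical, not the authors' data.

```python
# PCA on a matrix of Raman spectra (one spectrum per row).
import numpy as np
from sklearn.decomposition import PCA

spectra = np.load("hemp_raman_spectra.npy")   # shape: (n_samples, n_wavenumbers)
pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)           # scores cluster pure EO vs. fractions
loadings = pca.components_                    # wavenumbers with large loadings mark
                                              # the discriminating Raman bands
print("explained variance ratio:", pca.explained_variance_ratio_)
```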
NASA Astrophysics Data System (ADS)
Aggarwal, M. D.; Wang, W. S.; Tambwe, M.
1993-03-01
Pure, Cd2+- and Nd3+-doped benzil (C6H5COCOC6H5) crystals have been grown from the melt using the Czochralski and modified Bridgman-Stockbarger methods. Angle-tuned second harmonic generation of pure benzil from Nd:YAG laser radiation of λ = 1.06 μm with a conversion efficiency η = I(2ω)/I(ω) = 0.4% has been demonstrated. We have used a pulsed Nd:YAG laser to measure the radiation damage threshold as 15.9 MW/cm2 (c-axis) and 23.9 MW/cm2 (a-axis) for a laser pulse width of 10 ns. Under the same conditions, a conversion efficiency of η = I(2ω)/I(ω) = 1.1% has been demonstrated for Nd3+- and Cd2+-doped benzil. The radiation damage threshold of the doped crystals is higher than that of pure benzil.
NASA Astrophysics Data System (ADS)
Turki, Imen; Laignel, Benoit; Kakeh, Nabil; Chevalier, Laetitia; Costa, Stephane
2015-04-01
This research is carried out in the framework of the Surface Water and Ocean Topography (SWOT) program, a partnership between NASA and CNES. Here, a new hybrid model is implemented for filling gaps in and forecasting the hourly sea level variability by combining classical harmonic analysis with advanced statistical methods to reproduce the deterministic and stochastic processes, respectively. After simulating the mean sea level trend and the astronomical tides, the nontidal residual surges are investigated using autoregressive moving average (ARMA) methods in two ways: (1) applying a purely statistical approach and (2) introducing the sea level pressure (SLP) in ARMA as a main physical process driving the residual sea level. The new hybrid model is applied to the western Atlantic and the eastern English Channel. Using the ARMA model and considering the SLP, results show that the hourly sea level observations of the gauges are well reproduced, with a root mean square error (RMSE) ranging between 4.5 and 7 cm for 1 to 30 days of gaps and an explained variance of more than 80%. For larger gaps of months, the RMSE reaches 9 cm. The negative and positive extreme values of sea levels are also well reproduced, with a mean explained variance between 70 and 85%. The statistical behavior of 1-year modeled residual components shows good agreement with observations. The frequency analysis using the discrete wavelet transform illustrates strong correlations between the observed and modeled energy spectra and bands of variability. Accordingly, the proposed model presents a coherent, simple, and easy tool to estimate the total sea level at timescales from days to months. The ARMA model seems even more promising for filling gaps and estimating the sea level at larger scales of years by introducing more physical processes driving its stochastic variability.
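The second (SLP-driven) variant described above can be sketched as an ARMA model with an exogenous regressor; the model order, file name and column names below are assumptions, not the authors' settings.

```python
# ARMA(2,1) model for the non-tidal residual with sea-level pressure (SLP) as
# an exogenous driver, used here to fill a 24-hour gap at the end of a record.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("gauge_hourly.csv", parse_dates=["time"], index_col="time")
residual = df["sea_level"] - df["tide"] - df["mean_trend"]   # non-tidal surge

train = residual.iloc[:-24]                      # withhold the gap hours
model = SARIMAX(train, exog=df[["slp"]].iloc[:-24], order=(2, 0, 1))
fit = model.fit(disp=False)

# SLP during the gap (e.g. from reanalysis) drives the gap-filling forecast.
gap_fill = fit.forecast(steps=24, exog=df[["slp"]].iloc[-24:])
print(gap_fill.head())
```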
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But, is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
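For illustration, the traditional moments-based Log-Pearson Type III fit mentioned above can be sketched as follows; the annual-maximum series is a made-up placeholder, not gaged data.

```python
# Log-Pearson Type III by the method of moments: fit a Pearson III
# distribution to the log10 of the annual maximum flows.
import numpy as np
from scipy import stats

peaks = np.array([1200., 3400., 870., 2100., 5600., 1500., 980., 4300.,
                  2600., 1900.])               # annual maxima (cfs), illustrative
logq = np.log10(peaks)

m = logq.mean()
s = logq.std(ddof=1)
g = stats.skew(logq, bias=False)               # sample skew of the logs
dist = stats.pearson3(skew=g, loc=m, scale=s)  # Pearson III on the logs

for T in (2, 10, 100):                         # return periods in years
    q = 10 ** dist.ppf(1 - 1 / T)
    print(f"{T:>4}-yr flood: {q:,.0f} cfs")
```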
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
Ameyapoh, Yaovi; de Souza, Comlan; Traore, Alfred S
2008-09-01
Microbiological and physicochemical qualities of a tomato (Lycopersicon esculentum) puree production line (ripe tomato, washing, cutting, pounding, blanching, straining, bottling and pasteurization) and its preservation in Togo, West Africa, were studied using the HACCP method. Samples generated during the steps described above were analyzed by determining sensory, chemical and microbiological characteristics. Samples were analyzed using MPN for coliform populations and plate count methodology for other bacteria. The microorganisms involved in spoilage of the opened products were moulds of the genera Penicillium, Aspergillus, Fusarium, Geotrichum and Mucor, and gram-positive Bacillus bacteria. The preserved tomato puree exhibited a pH value of 4.3, 90% water content, 0.98 water activity (aw) and an average ascorbic acid level of 27.3 mg/100 g. Results showed that the critical control point (CCP) of this tomato puree processing line is the pasteurization stage. The analysis of selected microbiological and physicochemical parameters during the preservation of bottled tomato puree indicated that this product was stable over 22 months at 29 degrees C. However, the stability of the opened product stored at 29 degrees C did not exceed two months.
The L1 finite element method for pure convection problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1991-01-01
The least squares (L2) finite element method is introduced for 2-D steady state pure convection problems with smooth solutions. It is proven that the L2 method has the same stability estimate as the original equation, i.e., the L2 method has better control of the streamline derivative. Numerical convergence rates are given to show that the L2 method is almost optimal. This L2 method was then used as a framework to develop an iteratively reweighted L2 finite element method to obtain a least absolute residual (L1) solution for problems with discontinuous solutions. This L1 finite element method produces a nonoscillatory, nondiffusive and highly accurate numerical solution that has a sharp discontinuity in one element on both coarse and fine meshes. A robust reweighting strategy was also devised to obtain the L1 solution in a few iterations. A number of examples solved by using triangle and bilinear elements are presented.
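An illustrative iteratively reweighted least-squares (IRLS) loop of the kind referred to above is sketched below, shown for a plain overdetermined linear system rather than a finite element discretization; with weights 1/|r| the weighted L2 objective approximates the L1 (least absolute residual) objective.

```python
# IRLS: repeatedly solve a weighted L2 problem whose weights 1/|r| turn the
# quadratic objective sum(w*r^2) into an approximation of sum(|r|).
import numpy as np

def irls_l1(A, b, iters=20, eps=1e-8):
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # start from the L2 solution
    for _ in range(iters):
        r = b - A @ x
        w = 1.0 / np.maximum(np.abs(r), eps)          # w*r^2 = |r|, suppressing
        Aw = A * np.sqrt(w)[:, None]                  # the influence of outliers
        bw = b * np.sqrt(w)
        x = np.linalg.lstsq(Aw, bw, rcond=None)[0]    # reweighted L2 solve
    return x

A = np.vander(np.linspace(0, 1, 50), 3)
b = A @ np.array([1.0, -2.0, 0.5])
b[::10] += 5.0                                        # sharp outliers ("shocks")
print(irls_l1(A, b))                                  # close to (1, -2, 0.5)
```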
Underprotection of unpredictable statistical lives compared to predictable ones
Evans, Nicholas G.; Cotton-Barratt, Owen
2016-01-01
Existing ethical discussion considers the differences in care for identified versus statistical lives. However, there has been little attention to the different degrees of care that are taken for different kinds of statistical lives. Here we argue that for a given number of statistical lives at stake, different, and usually greater, care will sometimes be taken to protect predictable statistical lives, in which the number of lives that will be lost can be predicted fairly accurately, than to protect unpredictable statistical lives, where the lives are at stake because of a low-probability event, such that most likely no one will be affected by the decision but with low probability some lives will be at stake. One reason for this difference is the statistical challenge of estimating low probabilities, and in particular the tendency of common approaches to underestimate these probabilities. Another is the existence of rational incentives to treat unpredictable risks as if the probabilities were lower than they are. Some of these factors apply outside the purely economic context, to institutions, individuals, and governments. We argue that there is no ethical reason to treat unpredictable statistical lives differently from predictable statistical lives. Moreover, lives that are unpredictable from the perspective of an individual agent may become predictable when aggregated to the level of a societal decision. Underprotection of unpredictable statistical lives is a form of market failure that may need to be corrected by altering regulation, introducing compulsory liability insurance, or other social policies. PMID:27393181
Yao, Hua; Ma, Jinqi
2018-01-01
The present paper investigates the enhancement of the therapeutic effect of paclitaxel (a potent anticancer drug) by increasing its cellular uptake in cancerous cells, with a subsequent reduction in its cytotoxic effects. To fulfill these goals, paclitaxel (PTX)-biotinylated PAMAM dendrimer complexes were prepared using a biotinylation method. The primary parameter of the biotinylated PAMAM with a terminal NH2 group - the degree of biotinylation - was evaluated using the HABA assay. The basic integrity of the complex was studied using DSC. The drug loading (DL) and drug release (DR) parameters of the biotinylated PAMAM dendrimer-PTX complexes were also examined. A cellular uptake study was performed in OVCAR-3 and HEK293T cells using a fluorescence technique. Statistical analysis was also performed to support the experimental data. The results obtained from the HABA assay showed complete biotinylation of the PAMAM dendrimer. The DSC study confirmed the integrity of the complex as compared with the pure drug, the biotinylated complex and their physical mixture. Batch 9 showed the highest DL (12.09%) and DR (70% over 72 h) compared to different concentrations of drug and biotinylated complex. The OVCAR-3 (cancerous) cells were characterized by more intensive cellular uptake of the complexes than the HEK293T (normal) cells. The obtained experimental results were supported by the statistical data. The results obtained from both experimental and statistical evaluation confirmed that the biotinylated PAMAM-NH2 dendrimer-PTX complex not only displays increased cellular uptake but also enhanced release up to 72 h with reduced cytotoxicity.
Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian
2013-03-05
In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the community multiscale air quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOCs emissions. As seen in previous studies, ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50% allowing for the simulation of more stringent reductions in ozone concentrations. Compared to previous rollback methods, this application of modeled sensitivities to ambient ozone concentrations provides a more realistic spatial response of ozone concentrations at monitors inside and outside the urban core and at hours of both high and low ozone concentrations.
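A hedged sketch of the sensitivity-based adjustment idea described above: hourly ozone at a monitor is projected to a fractional emissions reduction with a second-order Taylor expansion in the HDDM sensitivities. The sign convention and all numbers are illustrative assumptions, not CMAQ output.

```python
# Second-order Taylor projection of ozone under an emissions cut of fraction
# alpha, using first- and second-order sensitivities (placeholder values).
import numpy as np

o3 = np.array([62.0, 71.0, 80.0, 75.0])   # observed hourly ozone (ppb)
s1 = np.array([10.0, 14.0, 18.0, 16.0])   # assumed dO3/dalpha (ppb)
s2 = np.array([-4.0, -5.0, -7.0, -6.0])   # assumed d2O3/dalpha2 (ppb)

def adjust(o3, s1, s2, alpha):
    """Ozone after an (alpha*100)% domain-wide NOx/VOC emissions reduction."""
    return o3 - alpha * s1 + 0.5 * alpha**2 * s2

print(adjust(o3, s1, s2, 0.30))   # 30% cut, within the ~50% validity range noted
```

Beyond roughly 50% reductions, the abstract's stepwise approach would re-evaluate the sensitivities along the way rather than extrapolate a single expansion.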
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Grafton, S. B.; Lutze, F. H.
1981-01-01
Dynamic stability derivatives are evaluated on the basis of rolling-flow, curved-flow and snaking tests. Attention is given to the hardware associated with curved-flow, rolling-flow and oscillatory pure-yawing wind-tunnel tests. It is found that the snaking technique, when combined with linear- and forced-oscillation methods, yields an important method for evaluating beta derivatives for current configurations at high angles of attack. Since the rolling flow model is fixed during testing, forced oscillations may be imparted to the model, permitting the measurement of damping and cross-derivatives. These results, when coupled with basic rolling-flow or rotary-balance data, yield a highly accurate mathematical model for studies of incipient spin and spin entry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, Henry P.
2011-05-10
The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.
Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P
2007-08-29
The Central Limit Theorem (CLT) is a statistical principle that states that as the number of repeated samples from any population increases, the variance among sample means will decrease and the means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing and it has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that evolution of group size could result from selection on variances in reproductive fitness. In addition, they may help explain why sociality has evolved in some groups and not others.
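The CLT effect invoked above is easy to illustrate numerically: as the number of foraging bouts summed per colony grows, the relative spread of total food income shrinks roughly as 1/sqrt(n). The lognormal income distribution below is an arbitrary skewed choice, not bee data.

```python
# Coefficient of variation of cumulative income versus number of foraging bouts.
import numpy as np

rng = np.random.default_rng(42)
for n_bouts in (1, 4, 16, 64):
    # 10,000 simulated colonies, each summing n_bouts lognormal foraging returns
    totals = rng.lognormal(0.0, 1.0, size=(10_000, n_bouts)).sum(axis=1)
    cv = totals.std() / totals.mean()
    print(f"bouts = {n_bouts:3d}: CV of total income = {cv:.3f}")
# The CV falls roughly as 1/sqrt(n_bouts): larger colonies face a more
# predictable total food income, the benefit the paper attributes to the CLT.
```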
Experimental Study of Water Transport through Hydrophilic Nanochannels
NASA Astrophysics Data System (ADS)
Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua
2015-11-01
In this paper, we investigate one of the fundamental aspects of nanofluidics: the experimental study of water transport through nanoscale hydrophilic conduits. A new method based on spontaneous filling and a novel hybrid nanochannel design is developed to measure the pure mass flow resistance of single nanofluidic channels/tubes. This method does not require any pressure or flow sensors and does not rely on any theoretical estimations, holding the potential to become a standard for nanofluidic flow characterization. We have used this method to measure the pure mass flow resistance of single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our experimental results quantify the increased mass flow resistance as a function of nanochannel height, showing a 45% increase for a 7 nm channel compared with classical hydrodynamics, and suggest that the increased resistance is possibly due to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. It has been further shown that this method can reliably measure a wide range of pure mass flow resistances of nanoscale conduits, and thus is promising for advancing studies of liquid transport in hydrophobic graphene nanochannels, CNTs, as well as nanoporous media. The work is supported by the American Chemical Society Petroleum Research Fund (ACS PRF # 54118-DNI7) and the Faculty Startup Fund (Boston University, USA).
A multimembership catalogue for 1876 open clusters using UCAC4 data
NASA Astrophysics Data System (ADS)
Sampedro, L.; Dias, W. S.; Alfaro, E. J.; Monteiro, H.; Molino, A.
2017-10-01
The main objective of this work is to determine the cluster members of 1876 open clusters, using positions and proper motions of the astrometric fourth United States Naval Observatory (USNO) CCD Astrograph Catalog (UCAC4). For this purpose, we apply three different methods, all based on a Bayesian approach, but with different formulations: a purely parametric method, another completely non-parametric algorithm and a third, recently developed by Sampedro & Alfaro, using both formulations at different steps of the whole process. The first and second statistical moments of the members' phase-space subspace, obtained after applying the three methods, are compared for every cluster. Although, on average, the three methods yield similar results, there are also specific differences between them, as well as for some particular clusters. The comparison with other published catalogues shows good agreement. We have also estimated, for the first time, the mean proper motion for a sample of 18 clusters. The results are organized in a single catalogue formed by two main files, one with the most relevant information for each cluster, partially including that in UCAC4, and the other showing the individual membership probabilities for each star in the cluster area. The final catalogue, with an interface design that enables an easy interaction with the user, is available in electronic format at the Stellar Systems Group (SSG-IAA) web site (http://ssg.iaa.es/en/content/sampedro-cluster-catalog).
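As a hedged illustration of the parametric formulation, one can model the proper motions in a cluster field as a two-component Gaussian mixture (cluster plus field) and read off per-star membership probabilities; the file name, the two-component assumption and the 0.5 threshold below are ours, not the catalogue pipeline's.

```python
# Two-component Gaussian mixture membership on proper motions.
import numpy as np
from sklearn.mixture import GaussianMixture

pm = np.load("ucac4_field_pm.npy")        # shape (n_stars, 2): pmRA, pmDec (mas/yr)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(pm)

probs = gmm.predict_proba(pm)             # per-star probability of each component
# Identify the cluster as the kinematically tighter clump (smaller covariance).
cluster = np.argmin(gmm.covariances_.trace(axis1=1, axis2=2))
members = probs[:, cluster] > 0.5
print(members.sum(), "probable members out of", len(pm), "stars")
```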
Bradley, Jennifer; West-Sadler, Sarah; Foster, Emma; Sommerville, Jill; Allen, Rachel; Stephen, Alison M; Adamson, Ashley J
2018-01-01
The Diet and Nutrition Survey of Infants and Young Children (DNSIYC) was carried out in 2011 to assess the nutrient intakes of 4 to 18 month old infants in the UK. Prior to the main stage of DNSIYC, pilot work was undertaken to determine the impact of using graduated utensils to estimate portion sizes. The aims were to assess whether the provision of graduated utensils altered either the foods given to infants or the amount consumed by comparing estimated intakes to weighed intakes. Parents completed two 4-day food diaries over a two week period; an estimated diary using graduated utensils and a weighed diary. Two estimated diary formats were tested; half the participants completed estimated diaries in which they recorded the amount of food/drink served and the amount left over, and the other half recorded the amount of food/drink consumed only. Median daily food intake for the estimated and the weighed method were similar; 980g and 928g respectively. There was a small (6.6%) but statistically significant difference in energy intake reported by the estimated and the weighed method; 3189kJ and 2978kJ respectively. There were no statistically significant differences between estimated intakes from the served and left over diaries and weighed intakes (p>0.05). Estimated intakes from the amount consumed diaries were significantly different to weighed intakes (food weight (g) p = 0.02; energy (kJ) p = 0.01). There were no differences in intakes of amorphous (foods which take the shape of the container, e.g. pureed foods, porridge) and discrete food items (individual pieces of food e.g. biscuits, rice cakes) between the two methods. The results suggest that the household measures approach to reporting portion size, with the combined use of the graduated utensils, and recording the amount served and the amount left over in the food diaries, may provide a feasible alternative to weighed intakes.
Revival of pure titanium for dynamically loaded porous implants using additive manufacturing.
Wauthle, Ruben; Ahmadi, Seyed Mohammad; Amin Yavari, Saber; Mulier, Michiel; Zadpoor, Amir Abbas; Weinans, Harrie; Van Humbeeck, Jan; Kruth, Jean-Pierre; Schrooten, Jan
2015-09-01
Additive manufacturing techniques are becoming more and more established as reliable methods for producing porous metal implants, thanks to almost full geometrical and mechanical control of the designed porous biomaterial. Today, Ti6Al4V ELI is still the most widely used material for porous implants, and little or no attention is paid to pure titanium for use in orthopedic or load-bearing implants. Given the special mechanical behavior of cellular structures and the material properties inherent to the additive manufacturing of metals, the aim of this study is to investigate the properties of selective laser melted pure unalloyed titanium porous structures. Therefore, the static and dynamic compressive properties of pure titanium structures are determined and compared to previously reported results for identical structures made from Ti6Al4V ELI and tantalum. The results show that porous Ti6Al4V ELI remains the strongest material for statically loaded applications, whereas pure titanium has a mechanical behavior similar to tantalum and is the material of choice for cyclically loaded porous implants. These findings are considered important for future implant developments, since they announce a potential revival of the use of pure titanium for additively manufactured porous implants. Copyright © 2015 Elsevier B.V. All rights reserved.
Collective behavior of networks with linear (VLSI) integrate-and-fire neurons.
Fusi, S; Mattia, M
1999-04-01
We analyze in detail the statistical properties of the spike emission process of a canonical integrate-and-fire neuron, with a linear integrator and a lower bound for the depolarization, as often used in VLSI implementations (Mead, 1989). The spike statistics of such neurons appear to be qualitatively similar to those of conventional (exponential) integrate-and-fire neurons, which exhibit a wide variety of characteristics observed in cortical recordings. We also show that, contrary to current opinion, the dynamics of a network composed of such neurons has two stable fixed points, even in the purely excitatory network, corresponding to two different states of reverberating activity. The analytical results are compared with numerical simulations and are found to be in good agreement.
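A minimal sketch of such a linear (VLSI-style) integrate-and-fire neuron, with constant drift, additive noise, a hard lower bound on the depolarization and a reset at threshold; all parameters are arbitrary illustrative values, not those of the paper.

```python
# Linear integrate-and-fire neuron: dV = mu*dt + sigma*sqrt(dt)*xi, with V
# clipped at 0 (the lower bound) and reset to 0 after crossing the threshold.
import numpy as np

rng = np.random.default_rng(7)

def lif_linear(mu=0.8, sigma=0.5, theta=1.0, dt=1e-3, t_sim=50.0):
    """Return spike times of the noisy linear integrator over t_sim seconds."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_sim:
        v += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        v = max(v, 0.0)              # hard lower bound on the depolarization
        if v >= theta:               # threshold crossing: emit spike and reset
            spikes.append(t)
            v = 0.0
        t += dt
    return np.array(spikes)

isi = np.diff(lif_linear())
print("mean ISI:", isi.mean(), "s; CV:", isi.std() / isi.mean())
```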
Confinement, holonomy, and correlated instanton-dyon ensemble: SU(2) Yang-Mills theory
NASA Astrophysics Data System (ADS)
Lopez-Ruiz, Miguel Angel; Jiang, Yin; Liao, Jinfeng
2018-03-01
The mechanism of confinement in Yang-Mills theories remains a challenge to our understanding of nonperturbative gauge dynamics. While it is widely perceived that confinement may arise from chromomagnetically charged gauge configurations with nontrivial topology, it is not clear what types of configurations could do that and how, in pure Yang-Mills and QCD-like (nonsupersymmetric) theories. Recently, a promising approach has emerged, based on statistical ensembles of dyons/anti-dyons that are constituents of instanton/anti-instanton solutions with nontrivial holonomy, where the holonomy plays a vital role as an effective "Higgsing" mechanism. We report a thorough numerical investigation of the confinement dynamics in SU(2) Yang-Mills theory by constructing such a statistical ensemble of correlated instanton-dyons.
Defect-phase-dynamics approach to statistical domain-growth problem of clock models
NASA Technical Reports Server (NTRS)
Kawasaki, K.
1985-01-01
The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.
Role of mathematics in cancer research: attitudes and training of Japanese mathematicians.
Kudô, A
1979-10-01
An extensive survey of the attitudes of scientists in Japan towards scientific information was conducted and published in a technical report; this survey is reviewed in the present paper, in the hope that it will furnish findings important for working out a plan to promote the use of mathematical talent in biomedical research. The findings are concordant with the impressions of foreign visitors: (1) pure mathematicians tend to concentrate on mathematics only; (2) applied mathematics and statistics are heavily oriented toward industry; (3) mathematicians and pharmacologists are very different in their attitudes to scientific information. Based on the personal experience of the author, difficulties to be circumvented in utilizing aptitudes for mathematics and/or statistics in biomedical research are discussed.
Synthesis and characterization of Au incorporated Alq3 nanowires
NASA Astrophysics Data System (ADS)
Khan, Mohammad Bilal; Ahmad, Sultan; Parwaz, M.; Rahul; Khan, Zishan H.
2018-05-01
We report the synthesis and characterization of pure and Au-incorporated Alq3 nanowires. These nanowires are synthesized using the thermal vapor transport method. The luminescence intensity of the Au-incorporated Alq3 nanowires is higher than that of pure Alq3 nanowires and is found to increase with increasing Au concentration. Fluorescence quenching is observed when the Au concentration is increased beyond a certain limit.
Baker, John A; Hirst, Jonathan D
2014-01-01
Traditionally, electrostatic interactions are modelled using Ewald techniques, which provide a good approximation, but are poorly suited to GPU architectures. We use the GPU versions of the LAMMPS MD package to implement and assess the Wolf summation method. We compute transport and structural properties of pure carbon dioxide and mixtures of carbon dioxide with either methane or difluoromethane. The diffusion of pure carbon dioxide is indistinguishable when using the Wolf summation method instead of PPPM on GPUs. The optimum value of the potential damping parameter, α, is 0.075. We observe a decrease in accuracy when the system polarity increases, yet the method is robust for mildly polar systems. We anticipate the method can be used for a number of techniques, and applied to a variety of systems. Substitution of PPPM can yield a two-fold decrease in the wall-clock time.
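For reference, a toy implementation of the damped, shifted Wolf pair sum (following the standard Wolf formula, not the LAMMPS implementation) shows where the damping parameter α enters; the charge set and geometry are crude placeholders.

```python
# Wolf electrostatic energy for a small non-periodic charge set:
# damped, shifted pair term minus the self/shift correction.
import numpy as np
from scipy.special import erfc

def wolf_energy(q, pos, alpha=0.075, rc=12.0):
    """E = sum_{i<j} q_i q_j [erfc(a r)/r - erfc(a Rc)/Rc]
           - [erfc(a Rc)/(2 Rc) + a/sqrt(pi)] sum_i q_i^2."""
    e_pair = 0.0
    n = len(q)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if r < rc:
                e_pair += q[i] * q[j] * (erfc(alpha * r) / r
                                         - erfc(alpha * rc) / rc)
    e_self = (erfc(alpha * rc) / (2 * rc) + alpha / np.sqrt(np.pi)) * np.dot(q, q)
    return e_pair - e_self

q = np.array([0.70, -0.35, -0.35])                 # crude CO2-like charges
pos = np.array([[0., 0., 0.], [1.16, 0., 0.], [-1.16, 0., 0.]])
print(wolf_energy(q, pos))
```

Because the sum is strictly pairwise within a cutoff, it maps onto GPU thread-per-pair parallelism far more naturally than the reciprocal-space part of Ewald or PPPM, which is the motivation discussed above.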
NASA Astrophysics Data System (ADS)
Pedreira, W. R.; Sarkis, J. E. S.; da Silva Queiroz, C. A.; Rodrigues, C.; Tomiyoshi, I. A.; Abrão, A.
2003-02-01
Recently, rare-earth elements (REE) have received much attention in the fields of geochemistry and industry. Rapid and accurate determinations of them are increasingly required as industrial demands expand. Sector field inductively coupled plasma mass spectrometry (ICP-SFMS) with high-performance liquid chromatography (HPLC) has been applied to the determination of REE. HR ICP-MS was used as an element-selective detector for HPLC in highly pure materials. The separation of REE with HPLC helped to avoid erroneous analytical results due to spectral interferences. Sixteen elements (Sc, Y and the 14 lanthanides) were determined selectively with the HPLC/ICP-SFMS system using a concentration gradient method. The detection limits with the HPLC/ICP-SFMS system were about 0.5-10 pg mL-1. The percentage recovery ranged from 90% to 100% for the different REE. The %RSD of the method, varying between 2.5% and 4.5% for a set of five (n=5) replicates, was found for the IPEN material and for the certified reference sample. Determination of trace REEs in two highly pure neodymium oxide samples (IPEN and Johnson Matthey Company) was performed. In short, the IPEN materials, which are highly pure (>99.9%), were successfully analyzed without spectral interferences.
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or -determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources, including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
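A simplified, hedged reading of the pure-component calibration idea: regress a target of 1 for the analyte pure-component spectrum and 0 for the nonanalyte spectra, with a Tikhonov (ridge) penalty controlling the trade-off between shrinkage and orthogonality to the nonanalyte space. This sketch is not the authors' exact PCTR formulation.

```python
# Ridge-style regression vector from one pure spectrum plus nonanalyte spectra.
import numpy as np

def pctr_sketch(pure, nonanalyte, lam=1e-2):
    X = np.vstack([pure, nonanalyte])            # rows: spectra
    y = np.zeros(len(X))
    y[0] = 1.0                                   # 1 for the analyte, 0 otherwise
    A = X.T @ X + lam**2 * np.eye(X.shape[1])    # regularized normal equations
    return np.linalg.solve(A, X.T @ y)           # regression vector b

# b then predicts concentration (up to scale) as spectrum @ b.
pure = np.abs(np.sin(np.linspace(0, 3, 200)))    # fake analyte spectrum
blanks = np.abs(np.random.default_rng(3).normal(0.1, 0.02, (20, 200)))
b = pctr_sketch(pure, blanks)
print("pure-spectrum prediction:", pure @ b)     # should be close to 1
```

Sweeping lam traces out the shrinkage-versus-orthogonality trade-off that the abstract describes; small lam fits the targets tightly, large lam shrinks the model.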
Biodegradable polymer for sealing porous PEO layer on pure magnesium: An in vitro degradation study
NASA Astrophysics Data System (ADS)
Alabbasi, Alyaa; Mehjabeen, Afrin; Kannan, M. Bobby; Ye, Qingsong; Blawert, Carsten
2014-05-01
An attempt was made to seal the porous silicate-based plasma electrolytic oxidation (PEO) layer on pure magnesium (Mg) with a biodegradable polymer, poly(L-lactide) (PLLA), to delay the localized degradation of magnesium-based implants in body fluid for better in-service mechanical integrity. Firstly, a silicate-based PEO coating on pure magnesium was produced using a pulsed constant current method. In order to seal the pores in the PEO layer, PLLA was coated using a two-step spin coating method. The performance of the PEO-PLLA Mg was evaluated using electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization. The EIS results showed that the polarization resistance (Rp) of the PEO-PLLA Mg was close to two orders of magnitude higher than that of the PEO Mg. While the corrosion current density (icorr) of the pure Mg was reduced by 65% with the PEO coating, the PEO-PLLA coating reduced the icorr by almost 100%. As expected, the Rp of the PEO-PLLA Mg decreased with increasing exposure time. However, it was noted that the Rp of the PEO-PLLA Mg even after 100 h was six times higher than that of the PEO Mg after 48 h of exposure, and did not show any visible localized attack.
Statistical Correction of Air Temperature Forecasts for City and Road Weather Applications
NASA Astrophysics Data System (ADS)
Mahura, Alexander; Petersen, Claus; Sass, Bent; Gilet, Nicolas
2014-05-01
A method for statistical correction of air/road surface temperature forecasts was developed based on analysis of long-term time series of meteorological observations and forecasts (from the HIgh Resolution Limited Area Model (HIRLAM) and the Road Conditions Model; 3 km horizontal resolution). It was tested for May-Aug 2012 and Oct 2012 - Mar 2013, respectively. The developed method is based mostly on forecasted meteorological parameters, with a minimal inclusion of observations (covering only a pre-history period). Although the first-iteration correction takes relevant temperature observations into account, the further adjustment of air and road temperature forecasts is based purely on forecasted meteorological parameters. The method is model independent, i.e. it can be applied for temperature correction with other types of models having different horizontal resolutions. It is relatively fast due to the application of the singular value decomposition method for the matrix solution to find the coefficients. Moreover, there is always a possibility for additional improvement through extra tuning of the temperature forecasts for some locations (stations), in particular where the MAEs are generally higher than elsewhere (see Gilet et al., 2014). For city weather applications, a new operationalized procedure for statistical correction of the air temperature forecasts has been elaborated and implemented for the HIRLAM-SKA model runs at 00, 06, 12, and 18 UTC, covering forecast lengths up to 48 hours. The procedure includes segments for extraction of observations and forecast data, assigning these to forecast lengths, statistical correction of temperature, one- and multi-day statistical evaluation of model performance, decision-making on using corrections by stations, interpolation, visualisation and storage/backup. Pre-operational air temperature correction runs have been performed for mainland Denmark since mid-April 2013 and have shown good results. Tests also showed that the CPU time required for the operational procedure is relatively short (less than 15 minutes, a large part of which is spent on interpolation). They also showed that, in order to start correcting forecasts, there is no need for long-term pre-historical data (containing forecasts and observations); at least a couple of weeks is sufficient when a new observational station is included and added to the forecast point. For the road weather application, the statistical correction of the road surface temperature forecasts (for the RWM system daily hourly runs covering forecast lengths up to 5 hours ahead) was also operationalized for the Danish road network (about 400 road stations) and has been running in test mode since Sep 2013. The method can also be applied for correction of the dew point temperature and wind speed (as parts of the observations/forecasts at synoptic stations), since both meteorological parameters are part of the proposed system of equations. An evaluation of the method's performance for improving wind speed forecasts is planned as well, along with considering possibilities for wind direction improvements (which is more complex due to the multi-modal distribution of such data).
The method worked for the entire domain of mainland Denmark (tested for 60 synoptic and 395 road stations), and hence it can also be applied to any geographical point within this domain, e.g. through interpolation to about 100 city locations (for the Danish national byvejr forecasts). Moreover, we can assume that the same method can be used in other geographical areas. An evaluation for other domains (with a focus on Greenland and the Nordic countries) is planned. In addition, a similar approach might also be tested for statistical correction of concentrations of chemical species, but such an approach will require additional elaboration and evaluation.
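The coefficient fit via singular value decomposition mentioned above can be sketched as an ordinary least-squares solve; the choice of predictors and the synthetic data below are purely illustrative, not the operational HIRLAM setup.

```python
# Least-squares correction coefficients via SVD (pseudo-inverse).
import numpy as np

def fit_correction(F, obs):
    """F: (n_cases, n_predictors) forecasted predictors; obs: observed T2m."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return Vt.T @ ((U.T @ obs) / s)        # minimizes ||F c - obs||_2

# Columns: bias term, raw T2m forecast, wind speed, dew point (all forecasted).
rng = np.random.default_rng(5)
t_raw = rng.normal(10, 5, 200)
F = np.column_stack([np.ones(200), t_raw, rng.normal(5, 2, 200),
                     t_raw - rng.uniform(0, 3, 200)])
obs = 0.5 + 0.95 * t_raw + rng.normal(0, 0.7, 200)   # synthetic "truth"
print(fit_correction(F, obs))              # corrected T2m = F @ coefficients
```

Because only a short pre-history is needed to fill F and obs, a couple of weeks of matched forecast-observation pairs suffices to start correcting a newly added station, as the abstract notes.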
van de Streek, Jacco; Neumann, Marcus A
2010-10-01
This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
Otoacoustic Emissions before and after Listening to Music on a Personal Player
Trzaskowski, Bartosz; Jędrzejczak, W. Wiktor; Piłka, Edyta; Cieślicka, Magdalena; Skarżyński, Henryk
2014-01-01
Background: The problem of the potential impact of personal music players on the auditory system remains an open question. The purpose of the present study was to investigate, by means of otoacoustic emissions (OAEs), whether listening to music on a personal player affected auditory function. Material/Methods: A group of 20 normally hearing adults was exposed to music played on a personal player. Transient evoked OAEs (TEOAEs) and distortion product OAEs (DPOAEs), as well as pure tone audiometry (PTA) thresholds, were tested at 3 stages: before, immediately after, and the next day following 30 min of exposure to music at 86.6 dBA. Results: We found no statistically significant changes in OAE parameters or PTA thresholds due to listening to the music. Conclusions: These results suggest that exposure to music at levels similar to those used in our study does not disturb cochlear function in a way that can be detected by means of PTA, TEOAE, or DPOAE tests. PMID:25116920
Seidel, Kathrin; Kahl, Johannes; Paoletti, Flavio; Birlouez, Ines; Busscher, Nicolaas; Kretzschmar, Ursula; Särkkä-Tirkkonen, Marjo; Seljåsen, Randi; Sinesio, Fiorella; Torp, Torfinn; Baiamonte, Irene
2015-02-01
The market for processed food is rapidly growing. The industry needs methods for "processing with care", leading to high-quality products, in order to meet consumers' expectations. Processing influences the quality of the finished product through various factors. In carrot baby food, these are the raw material, the pre-processing and storage treatments, as well as the processing conditions. In this study, a quality assessment was performed on baby food made from differently pre-processed raw materials. The experiments were carried out under industrial conditions using fresh, frozen and stored organic carrots as raw material. Statistically significant differences were found for sensory attributes among the three autoclaved puree samples (e.g. overall odour F = 90.72, p < 0.001). Samples processed from frozen carrots showed increased moisture content and a decrease in several chemical constituents. Biocrystallization identified changes between replications of the cooking process. Pre-treatment of the raw material has a significant influence on the final quality of the baby food.
NASA Astrophysics Data System (ADS)
Mu, X. N.; Zhang, H. M.; Cai, H. N.; Fan, Q. B.; Wu, Y.; Fu, Z. J.; Wang, Q. X.
2017-05-01
This study proposes an in-situ reactive method that uses graphene as a reinforcement to fabricate titanium metal matrix composites (TiMMCs) through a powder metallurgy processing route. The volume fraction of graphene nanoplatelets was 1.8 vol%, and pure titanium was used as the matrix. The Archimedes density, hardness, microstructure and mechanical properties of the specimens were compared under different ball milling times (20 min and 2.5 h) and hot pressing temperatures (900°C, 1150°C, and 1300°C). An ultimate tensile strength of 630 MPa, a 27.3% increase compared with pure Ti, was achieved for a ball milling time of 20 min. Elongation increased with increasing temperature. When the ball milling time and hot pressing temperature were increased to 2.5 h and 1300°C, respectively, the ultimate tensile strength of the composites reached 750 MPa, an increase of 51.5% compared with pure Ti.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganesh, P.; Kim, Jeongnim; Park, Changwon
2014-11-03
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Moreover, the highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. Our results demonstrate that the lithium-carbon system requires a simultaneously highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
Pure F-actin networks are distorted and branched by steps in the critical-point drying method.
Resch, Guenter P; Goldie, Kenneth N; Hoenger, Andreas; Small, J Victor
2002-03-01
Elucidation of the ultrastructural organization of actin networks is crucial for understanding the molecular mechanisms underlying actin-based motility. Results obtained from cytoskeletons and actin comets prepared by the critical-point procedure, followed by rotary shadowing, support recent models incorporating actin filament branching as a main feature of lamellipodia and pathogen propulsion. Since actin branches were not evident in earlier images obtained by negative staining, we explored how these differences arise. Accordingly, we have followed the structural fate of dense networks of pure actin filaments subjected to steps of the critical-point drying protocol. The filament networks have been visualized in parallel by both cryo-electron microscopy and negative staining. Our results demonstrate the selective creation of branches and other artificial structures in pure F-actin networks by the critical-point procedure and challenge the reliability of this method for preserving the detailed organization of actin assemblies that drive motility. (c) 2002 Elsevier Science (USA).
Otsuka, Keigo; Inoue, Taiki; Maeda, Etsuo; Kometani, Reo; Chiashi, Shohei; Maruyama, Shigeo
2017-11-28
Ballistic transport and sub-10 nm channel lengths have been achieved in transistors containing one single-walled carbon nanotube (SWNT). To fill the gap between single-tube transistors and high-performance logic circuits for the replacement of silicon, large-area, high-density, and purely semiconducting (s-) SWNT arrays are highly desired. Here we demonstrate the fabrication of multiple transistors along a purely semiconducting SWNT array via an on-chip purification method. Water- and polymer-assisted burning from site-controlled nanogaps is developed for the reliable full-length removal of metallic SWNTs with the damage to s-SWNTs minimized even in high-density arrays. All the transistors with various channel lengths show large on-state current and excellent switching behavior in the off-state. Since our method potentially provides pure s-SWNT arrays over a large area with negligible damage, numerous transistors with arbitrary dimensions could be fabricated using a conventional semiconductor process, leading to SWNT-based logic, high-speed communication, and other next-generation electronic devices.
Spatial distribution of the gamma-ray bursts at very high redshift
NASA Astrophysics Data System (ADS)
Mészáros, Attila
2018-05-01
The author and his collaborators showed as early as 1995-96, purely from analyses of the observations, that gamma-ray bursts (GRBs) can occur out to redshift 20. Since that time, several other statistical studies of the spatial distribution of GRBs have been carried out. Remarkable conclusions were obtained concerning the star-formation rate and the validity of the cosmological principle in the regions of the cosmic dawn. In this contribution these efforts are surveyed.
Statistics of Radial Ship Extent as Seen by a Seeker
2014-06-01
Auckland in pure and applied mathematics and physics, and a Master of Science in physics from the same university with a thesis in applied accelerator...does not demand contributions from two angle bins to one extent bin, unlike the rectangle; this is a very big advantage of the ellipse model. However...waveform that mimics the full length of a ship. This allows more economical use to be made of available false-target generation resources. I wish to
The Role of IQGAP1 in Breast Carcinoma
2012-10-01
and β-tubulin expression was measured as described above. Statistical Analysis—All experiments were repeated independently at least three times...IQGAP1 Binds HER2—In vitro analysis with pure proteins was used to examine a possible interaction between IQGAP1 and HER2. GST alone or GST-HER2 was...incubated with purified IQGAP1, and complexes were isolated with glutathione-Sepharose. Analysis by Western blotting reveals that IQGAP1 binds HER2
NASA Astrophysics Data System (ADS)
Gupta, Jhalak; Ahmed, Arham S.
2018-05-01
Pure and Cr-doped nickel oxide (NiO) nanoparticles were synthesized by a cost-effective co-precipitation method with nickel nitrate as the initial precursor. The synthesized samples were characterized by X-ray diffraction (XRD), UV-Visible spectroscopy (UV-Vis), and an LCR meter for their structural, optical, and dielectric properties, respectively. The crystallite size of the pure nickel oxide nanoparticles, determined from the XRD data using the Debye-Scherrer formula, was found to be 21.7 nm and decreases with increasing Cr concentration. The energy band gaps were determined from the UV-Vis data using the Tauc relation.
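As a concrete illustration of the Debye-Scherrer estimate quoted above, here is a minimal sketch assuming Cu K-alpha radiation and a shape factor K = 0.9; the reflection angle and peak width are invented for illustration and are not the study's raw data:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5406, k=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)), returned in
    the same units as the wavelength (here Angstrom, Cu K-alpha)."""
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle from 2-theta
    beta = math.radians(fwhm_deg)              # peak FWHM in radians
    return k * wavelength / (beta * math.cos(theta))

# Invented example: a NiO (200) reflection near 2-theta = 43.3 deg
# with a 0.40 deg FWHM.
d_angstrom = scherrer_size(43.3, 0.40)
print(f"crystallite size ~ {d_angstrom / 10:.1f} nm")
```

With these plausible inputs the formula returns roughly 21 nm, the same order as the 21.7 nm reported above; note that this simple form ignores instrumental and strain broadening.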
NASA Technical Reports Server (NTRS)
Kobayashi, H.
1978-01-01
Two-dimensional, quasi-three-dimensional, and three-dimensional theories for the prediction of pure tone fan noise due to the interaction of inflow distortion with a subsonic annular blade row were studied with the aid of an unsteady three-dimensional lifting surface theory. The effects of compact and noncompact source distributions on pure tone fan noise in an annular cascade were investigated. Numerical results show that the strip theory and the quasi-three-dimensional theory are reasonably adequate for fan noise prediction. The quasi-three-dimensional method is more accurate for acoustic power and modal structure prediction, with an acoustic power estimation error of about plus or minus 2 dB.
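The tonal mechanism described above, a steady inflow distortion chopped by the rotor, follows simple kinematic rules, and a small sketch may make them concrete. The Tyler-Sofrin-type selection rule m = nB - q is a standard textbook relation, and the blade count, shaft speed, and distortion harmonics below are invented example numbers, not values from this report:

```python
def interaction_modes(n_blades, distortion_harmonics, n_bpf_harmonics=3):
    """Circumferential acoustic mode orders m = n*B - q generated when the
    q-th circumferential harmonic of an inflow distortion interacts with a
    B-bladed rotor (Tyler-Sofrin-type selection rule)."""
    return {n: [n * n_blades - q for q in distortion_harmonics]
            for n in range(1, n_bpf_harmonics + 1)}

# Invented example fan: 24 rotor blades at 3000 rpm, distortion harmonics 1-4.
B, rpm = 24, 3000
bpf_hz = B * rpm / 60.0  # blade-passing frequency of the fundamental tone
print(f"BPF = {bpf_hz:.0f} Hz")           # 1200 Hz
print(interaction_modes(B, range(1, 5)))  # mode orders at each BPF harmonic
```

Only those mode orders that are cut on in the duct radiate to the far field, which is why the source compactness and blade-row geometry studied in the report matter for the predicted acoustic power.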
[Study of the hearing of rock and roll musicians].
Maia, Juliana Rollo Fernandes; Russo, Ieda Chaves Pacheco
2008-01-01
Rock and roll has as one of its main characteristics excessive sound pressure levels. Several studies have demonstrated that the sound levels of rock concerts can range from 100 to 115 dB(A), with peak levels of 150 dB(A). The aim was to study the hearing of rock and roll musicians, analyzing the results of the audiological evaluation and verifying the influence of time of exposure to amplified music. A questionnaire was answered by 23 rock and roll musicians (46 ears), who were also evaluated by means of pure tone audiometry, immittance audiometry, and transient and distortion-product evoked otoacoustic emissions (OAET and OAEPD). Regarding the time of exposure to music, values close to the limit of significance were found at the frequencies of 0.5 and 6 kHz in the pure tone audiometry. A statistically significant difference was also found in the OAET test at the frequency of 2 kHz, and at the frequencies of 0.75, 1, 4 and 6 kHz in the OAEPD test. The results indicate that although hearing loss was not found in the studied population, alterations in the OAE registers already exist, suggesting alteration of cochlear function. Regarding time of exposure, musicians with more than 10 years of practice present statistically significant differences when compared to those with less time of exposure.
Does stapes surgery improve tinnitus in patients with otosclerosis?
Ismi, Onur; Erdogan, Osman; Yesilova, Mesut; Ozcan, Cengiz; Ovla, Didem; Gorur, Kemal
Otosclerosis (OS) is a primary disease of the human temporal bone characterized by conductive hearing loss and tinnitus. The exact pathogenesis of tinnitus in otosclerosis patients is not known, and the factors affecting the tinnitus outcome are still controversial. The aim was to determine the effect of stapedotomy on tinnitus in otosclerosis patients. Fifty-six otosclerosis patients with preoperative tinnitus were enrolled in the study. Pure tone average air-bone gap values, preoperative tinnitus pitch, and air-bone gap closure at the tinnitus frequencies were evaluated for their effect on the postoperative outcome. Low pitch tinnitus had a more favorable outcome than high pitch tinnitus (p=0.002). Postoperative average pure tone air-bone gap values were not related to the postoperative tinnitus outcome (p=0.213). There was no statistically significant association between postoperative air-bone gap closure at the tinnitus frequency and improvement of high pitch tinnitus (p=0.427), whereas there was a statistically significant association between air-bone gap improvement at the tinnitus frequency and low pitch tinnitus recovery (p=0.026). Low pitch tinnitus is more likely to resolve after stapedotomy in patients with otosclerosis; high pitch tinnitus may not resolve even after closure of the air-bone gap at the tinnitus frequencies. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Evaluation of anodic behavior of commercially pure titanium in tungsten inert gas and laser welds.
Orsi, Iara Augusta; Raimundo, Larica B; Bezzon, Osvaldo Luiz; Nóbilo, Mauro Antonio de Arruda; Kuri, Sebastião E; Rovere, Carlos Alberto D; Pagnano, Valeria Oliveira
2011-12-01
This study evaluated the corrosion resistance of tungsten inert gas (TIG) welds in commercially pure titanium (cp Ti) in comparison with laser welds. A total of 15 circular specimens (10 mm diameter, 2 mm thick) were fabricated and divided into a control group of unwelded cp Ti specimens (n = 5) and experimental groups of cp Ti specimens welded with TIG (n = 5) or laser (n = 5). They were polished mechanically, washed with isopropyl alcohol, and air-dried. In the anodic potentiodynamic polarization assay, measurements were taken using a potentiostat/galvanostat with CorrWare software for data acquisition and CorrView for data visualization and treatment. Three curves were obtained for each working electrode. Corrosion potential values were statistically analyzed by Student's t-test. Statistical analysis showed that the corrosion potentials and passive current densities of specimens welded with TIG were similar to those of the control group and lower than those of laser-welded specimens. TIG welding thus provided higher resistance to corrosion than laser welding. Specimens welded with TIG were more resistant to local corrosion initiation and propagation than those welded with laser, indicating a higher rate of formation and growth of the passive film on their surfaces, making it more difficult for corrosion to occur. © 2011 by the American College of Prosthodontists.
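The statistical comparison named above is a plain two-sample Student's t-test, sketched minimally below; the corrosion potential values are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical corrosion potentials (mV vs. a reference electrode) for
# five control and five laser-welded specimens; invented numbers only.
e_corr_control = np.array([-310.0, -295.0, -322.0, -301.0, -315.0])
e_corr_laser   = np.array([-362.0, -348.0, -371.0, -355.0, -360.0])

# Two-sample Student's t-test, as named in the abstract.
t_stat, p_value = stats.ttest_ind(e_corr_control, e_corr_laser)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value here would indicate that the mean corrosion potentials of the two groups differ beyond what sampling variability explains, which is the form of conclusion the abstract reports.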
Tulsani, S G; Chikkanarasaiah, N; Bethur, S
2014-01-01
Biopure MTAD™, a new root canal irrigant, has shown promising results against the most common resistant microorganism, E. faecalis, in permanent teeth. However, there is a lack of studies comparing its antimicrobial effectiveness with NaOCl in primary teeth. The purpose of this study was to compare the in vivo antimicrobial efficacy of NaOCl 2.5% and Biopure MTAD™ against E. faecalis in primary teeth. Forty non-vital, single-rooted primary maxillary anterior teeth of children aged 4-8 years were irrigated with NaOCl 2.5% (n=15), Biopure MTAD™ (n=15), or 0.9% saline (n=10, control group). Paper point samples were collected at baseline (S1) and after chemomechanical preparation (S2) during the pulpectomy procedure. The presence of E. faecalis in S1 and S2 was evaluated using real-time polymerase chain reaction. A statistically significant difference was found in the antimicrobial efficacy of NaOCl 2.5% and BioPure MTAD™ when compared to saline (p<0.05). However, no statistically significant difference was found between the efficacies of the two irrigants. NaOCl 2.5% and BioPure MTAD™ are thus equally efficient against E. faecalis in necrotic primary anterior teeth. MTAD is a promising irrigant; however, further clinical studies are required to establish it as an ideal root canal irrigant in clinical practice.
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates, which allows the statistical variability of the experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function that accounts for the statistical variability of the experimental data set while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
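For context on the Fickian benchmark that the new purely advective model is compared against, here is a minimal sketch of the classical Ogata-Banks breakthrough-curve solution of the 1-D advection-dispersion equation. This is a standard result, not the authors' new model, and all parameter values below are invented:

```python
import numpy as np
from scipy.special import erfc

def ade_breakthrough(t, x, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D Fickian advection-dispersion
    equation for a continuous injection of concentration c0 at x = 0:
    C/C0 = 0.5*[erfc((x - v*t)/(2*sqrt(D*t)))
                + exp(v*x/D)*erfc((x + v*t)/(2*sqrt(D*t)))]"""
    t = np.asarray(t, dtype=float)
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(v * x / D) * erfc((x + v * t) / s))

# Invented parameters: observation point 50 cm from the inlet, pore
# velocity 0.1 cm/s, dispersion coefficient 1 cm^2/s.
t = np.linspace(1.0, 2000.0, 5)
print(ade_breakthrough(t, x=50.0, v=0.1, D=1.0))
```

Evaluating this solution at very small times gives a nonzero concentration at any distance from the inlet, which is precisely the infinite front propagation speed that the abstract cites as an undesirable property of the Fickian operator.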
NASA Astrophysics Data System (ADS)
Genestreti, K. J.; Fuselier, S. A.; Goldstein, J.; Nagai, T.; Eastwood, J. P.
2014-12-01
A statistical characterization of the location and rate of occurrence of magnetic reconnection in the near-Earth magnetotail is performed by analyzing the set of ion diffusion region (DR) observations made by the Cluster and Geotail spacecraft during solar maximum and the declining phase. The occurrence rate is analyzed in terms of its dependence on both XGSM* and YGSM* (where coordinates are in the solar wind aberrated geocentric solar magnetospheric system). Within the limits of the statistics available to this study, we find the purely XGSM*-dependent occurrence rate to be roughly constant over a large portion of the near-Earth magnetotail. In contrast, we find the purely YGSM*-dependent occurrence rate to be biased towards dusk, with a local maximum between 0 RE ≤ YGSM* ≤ 5 RE. The YGSM*-dependent occurrence rate is then used to construct a quasi-2D formulation of the DR occurrence rate, which has explicit dependence on XGSM* and implicit dependence on YGSM*. The quasi-2D occurrence rate is then used to examine the predicted ephemeris of the Magnetospheric MultiScale (MMS) spacecraft. We estimate that, during its near-Earth magnetotail survey phase, MMS will likely observe 11±4 DR events.
Key points:
• The occurrence rate of events is calculated as a function of XGSM and YGSM.
• The occurrence rate is used to estimate the number of events MMS will observe.
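As a rough illustration of how an expected event count such as 11±4 can follow from a binned occurrence rate and a spacecraft ephemeris, consider the sketch below. The abstract does not give its rates or dwell times, so every number here is invented and the actual calculation in the paper may differ:

```python
import math

# Hypothetical DR occurrence rates (events per day of tail dwell time)
# in YGSM* bins, and hypothetical MMS dwell times per bin during the
# survey phase; all values are invented for illustration.
rate_per_day = {(-10, -5): 0.04, (-5, 0): 0.06, (0, 5): 0.09, (5, 10): 0.05}
dwell_days   = {(-10, -5): 30.0, (-5, 0): 45.0, (0, 5): 50.0, (5, 10): 35.0}

# Expected count is the rate-weighted dwell time summed over bins.
expected = sum(rate_per_day[b] * dwell_days[b] for b in rate_per_day)
sigma = math.sqrt(expected)  # Poisson counting uncertainty
print(f"expected DR events: {expected:.0f} +/- {sigma:.0f}")
```

With these invented inputs the sketch yields an estimate of the same form as the abstract's 11±4, an expected count with a square-root counting uncertainty; the duskward bias in the rate shows up directly as a larger contribution from the 0-5 RE bin.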