Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...
The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Goldstein, M. L.
2006-01-01
We study how the variance directions of the magnetic field in the solar wind depend on scale, radial distance, and Alfvénicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (from ≈3:1 up to ≈20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvénic regions over a wide range of heliocentric distances. The fact that non-Alfvénic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that minimizes portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that minimizes the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the optimal portfolio composition differs across the component stocks, and that investors can achieve the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
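For reference, the global minimum-variance weights under this model have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), i.e., minimize wᵀΣw subject to the weights summing to one. A minimal sketch (not the study's code; the synthetic returns below merely stand in for the FBMKLCI weekly series):

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance weights: minimize w' S w s.t. sum(w) = 1.
    Closed form: w = S^{-1} 1 / (1' S^{-1} 1); shorting is allowed."""
    S = np.cov(returns, rowvar=False)   # sample covariance of asset returns
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)        # S^{-1} 1
    return w / w.sum()

rng = np.random.default_rng(0)
weekly = rng.normal(0.001, 0.02, size=(260, 5))  # 5 synthetic assets, 260 weeks
print(min_variance_weights(weekly))
```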
Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging
NASA Astrophysics Data System (ADS)
Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping
2011-03-01
In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler, and Doppler variance images were compared. Blood vessels down to the capillary level could be resolved with the optimized optical coherence color Doppler and Doppler variance method. For in-vivo imaging of human eyes, the bulk-motion-induced bulk phase must be identified and removed before the color Doppler method can be used. It was found that the Doppler variance method is not sensitive to bulk motion and can be used without removing the bulk phase. A novel, simple, and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers. The algorithm is based on the detected OCT signal intensity difference between different layers. A spectrometer-based Fourier domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The 3-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 × 5 mm².
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought-based temporary variance to the Martin Project rule curve. The rule curve variance...
Cosmic Bulk Flow and the Local Motion from Cosmicflows-2
NASA Astrophysics Data System (ADS)
Courtois, Helene M.; Hoffman, Yehuda; Tully, R. Brent
2015-08-01
Full sky surveys of peculiar velocity are arguably the best way to map the large scale structure out to distances of a few times 100 Mpc/h. Using the largest and most accurate catalog of galaxy peculiar velocities ever, Cosmicflows-2, the large scale structure has been reconstructed by means of the Wiener filter and constrained realizations, assuming as a Bayesian prior model the ΛCDM standard model of cosmology. The present paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R = 500 Mpc/h. Our main result is that the estimated bulk flow is consistent with the ΛCDM model with the WMAP-inferred cosmological parameters. At R = 50 (150) Mpc/h the estimated bulk velocity is 250 +/- 21 (239 +/- 38) km/s. The corresponding cosmic variance at these radii is 126 (60) km/s, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ≈ 200 Mpc/h, where the cosmic variance on the individual Supergalactic Cartesian components (of the r.m.s. values) exceeds the variance of the constrained realizations by at least a factor of 2. The SGX and SGY components of the CMB dipole velocity are recovered by the Wiener filter velocity field down to a very few km/s. The SGZ component of the estimated velocity, the one that is most affected by the Zone of Avoidance, is off by 126 km/s (an almost 2-sigma discrepancy). The bulk velocity analysis reported here is virtually unaffected by the Malmquist bias, and very similar results are obtained for the data with and without the bias correction.
30 CFR 56.6802 - Bulk delivery vehicles.
Code of Federal Regulations, 2010 CFR
2010-07-01
... § 56.6802 Bulk delivery vehicles. No welding or cutting shall be performed on a bulk delivery vehicle... cutting on a hollow shaft, the shaft shall be thoroughly cleaned inside and out and vented with a minimum...
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust, high-resolution spectrum estimation technique. Based on the theory of SAR imaging, analysis of the SAR signal model shows that data extrapolation methods can be used to improve the resolution of SAR images. The method is used to extrapolate the effective bandwidth in the phase-history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and with traditional imaging, using both simulated data and actual measured data.
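As background, the minimum variance (Capon) spectrum estimator referred to here evaluates P(f) = 1 / (a(f)ᴴ R⁻¹ a(f)) from a snapshot covariance matrix R. A hedged one-dimensional sketch (illustrative only; the paper applies this within SAR phase-history processing, and the snapshot length and diagonal loading below are arbitrary choices):

```python
import numpy as np

def capon_spectrum(x, m, freqs, loading=1e-3):
    """Minimum variance (Capon) spectrum of a 1-D signal x using
    length-m snapshots: P(f) = 1 / (a(f)^H R^{-1} a(f))."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m + 1)])  # snapshot matrix
    R = X.conj().T @ X / X.shape[0]                       # m x m covariance
    R = R + loading * np.trace(R).real / m * np.eye(m)    # diagonal loading
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(m))         # steering vector
        P[k] = 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))
    return P

t = np.arange(512)
x = np.exp(2j*np.pi*0.10*t) + 0.5*np.exp(2j*np.pi*0.13*t) \
    + 0.1*(np.random.randn(512) + 1j*np.random.randn(512))
P = capon_spectrum(x, m=48, freqs=np.linspace(0.0, 0.5, 501))
```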
NASA Astrophysics Data System (ADS)
Chantara, Somporn; Chunsuk, Nawarut
The chemical composition of 122 rainwater samples collected daily from bulk and wet-only collectors in a sub-urban area of Chiang Mai (Thailand) during August 2005-July 2006 was analyzed and compared to assess the usability of a cheaper and less complex bulk collector versus a sophisticated wet-only collector. Statistical analysis was performed on log-transformed daily rain amount and depositions of major ions for each collector type. An analysis of variance (ANOVA) test revealed that the amounts of rainfall collected from a rain gauge, bulk collector, and wet-only collector showed no significant difference (α = 0.05). The volume-weighted mean electrical conductivity (EC) values of bulk and wet-only samples were 0.69 and 0.65 mS/m, respectively. The average pH of the samples from both types of collectors was 5.5. Scatter plots between log-transformed depositions of specific ions obtained from bulk and wet-only samples showed high correlation (r > 0.91). Means of log-transformed bulk deposition were 14% (Na⁺ and K⁺), 13% (Mg²⁺), 7% (Ca²⁺), 4% (NO₃⁻), 3% (SO₄²⁻ and Cl⁻) and 2% (NH₄⁺) higher than those of wet-only deposition. However, multivariate analysis of variance (MANOVA) revealed that ion depositions obtained from bulk and wet-only collectors were not significantly different (α = 0.05). Therefore, it was concluded that a bulk collector can be used instead of a wet-only collector in a sub-urban area.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter, which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification, is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
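The paper's filter handles multiplicative as well as additive noise, which an ordinary recursive least-squares (RLS) identifier does not; still, the general shape of an on-line parameter identifier can be sketched as follows (generic RLS, not the authors' algorithm; `phi` holds regressor rows and `y` the measurements):

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, delta=1e3):
    """On-line recursive least squares: for each sample, update the
    parameter estimate theta and its covariance-like matrix P."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                       # large initial uncertainty
    for k in range(len(y)):
        h = phi[k]
        K = P @ h / (lam + h @ P @ h)           # gain vector
        theta = theta + K * (y[k] - h @ theta)  # innovation correction
        P = (P - np.outer(K, h @ P)) / lam      # covariance update
    return theta, P
```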
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
Large amplitude MHD waves upstream of the Jovian bow shock
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.
1983-01-01
Observations of large amplitude magnetohydrodynamic (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame, which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40° to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features.
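The minimum variance eigenvalue analysis used here diagonalizes the 3×3 covariance matrix of the field fluctuations; the eigenvector belonging to the smallest eigenvalue is the minimum variance direction, whose angle to the mean field can then be read off. A generic sketch of the standard procedure (not the authors' code):

```python
import numpy as np

def minimum_variance_analysis(B):
    """B: (N, 3) array of magnetic field samples. Returns eigenvalues
    (ascending) and eigenvectors of the fluctuation covariance matrix;
    vecs[:, 0] is the minimum variance direction."""
    dB = B - B.mean(axis=0)
    M = dB.T @ dB / len(B)           # 3x3 variance matrix
    vals, vecs = np.linalg.eigh(M)   # eigenvalues sorted ascending
    return vals, vecs

# Angle between the minimum variance direction and the mean field:
# vals, vecs = minimum_variance_analysis(B)
# b0 = B.mean(axis=0); b0 /= np.linalg.norm(b0)
# theta = np.degrees(np.arccos(abs(vecs[:, 0] @ b0)))
```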
Cosmic bulk flow and the local motion from Cosmicflows-2
NASA Astrophysics Data System (ADS)
Hoffman, Yehuda; Courtois, Hélène M.; Tully, R. Brent
2015-06-01
Full sky surveys of peculiar velocity are arguably the best way to map the large-scale structure (LSS) out to distances of a few × 100 h⁻¹ Mpc. Using the largest and most accurate catalogue of galaxy peculiar velocities ever, Cosmicflows-2, the LSS has been reconstructed by means of the Wiener filter (WF) and constrained realizations (CRs) assuming as a Bayesian prior model the Λ cold dark matter model with the WMAP inferred cosmological parameters. This paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R = 500 h⁻¹ Mpc. The estimated LSS, in general, and the bulk flow, in particular, are determined by the tension between the observational data and the assumed prior model. A pre-requisite for such an analysis is the requirement that the estimated bulk flow is consistent with the prior model. Such a consistency is found here. At R = 50 (150) h⁻¹ Mpc, the estimated bulk velocity is 250 ± 21 (239 ± 38) km s⁻¹. The corresponding cosmic variance at these radii is 126 (60) km s⁻¹, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ≈ 200 h⁻¹ Mpc, where the cosmic variance on the individual supergalactic Cartesian components (of the rms values) exceeds the variance of the CRs by at least a factor of 2. The SGX and SGY components of the cosmic microwave background dipole velocity are recovered by the WF velocity field down to a very few km s⁻¹. The SGZ component of the estimated velocity, the one that is most affected by the zone of avoidance, is off by 126 km s⁻¹ (an almost 2σ discrepancy). The bulk velocity analysis reported here is virtually unaffected by the Malmquist bias and very similar results are obtained for the data with and without the bias correction.
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
Constraining the local variance of H₀ from directional analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bengaly, C.A.P. Jr., E-mail: carlosap@on.br
We evaluate the local variance of the Hubble constant H₀ with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble constant H₀ from standard candles (H₀ = 73.8 ± 2.4 km s⁻¹ Mpc⁻¹) with that of the Planck Cosmic Microwave Background data (H₀ = 67.8 ± 0.9 km s⁻¹ Mpc⁻¹). We obtain that H₀ ranges from 68.9 ± 0.5 km s⁻¹ Mpc⁻¹ to 71.2 ± 0.7 km s⁻¹ Mpc⁻¹ across the celestial sphere (1σ uncertainty), implying a Hubble constant maximal variance of δH₀ = (2.30 ± 0.86) km s⁻¹ Mpc⁻¹ towards the (l, b) = (315°, 27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as previous evaluations of the H₀ variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H₀ value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H₀ determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
Characterization of large price variations in financial markets
NASA Astrophysics Data System (ADS)
Johansen, Anders
2003-06-01
Statistics of drawdowns (the loss from the last local maximum to the next local minimum) play an important role in risk assessment of investment strategies. As they incorporate higher-order (greater than two) correlations, they offer a better measure of real market risks than the variance or other cumulants of daily returns (or returns on some other fixed time scale). Previous results have shown that the vast majority of drawdowns occurring on the major financial markets have a distribution which is well represented by a stretched exponential, while the largest drawdowns occur at a significantly larger rate than predicted by the bulk of the distribution and should thus be characterized as outliers (Eur. Phys. J. B 1 (1998) 141; J. Risk 2001). In the present analysis, the definition of drawdowns is generalized to coarse-grained drawdowns or so-called ε-drawdowns, and a link between such ε-outliers and preceding log-periodic power law bubbles previously identified (Quantitative Finance 1 (2001) 452) is established.
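A pure (ε = 0) drawdown in this sense can be extracted by walking from each local maximum to the following local minimum; a minimal sketch (illustrative only, omitting the paper's ε coarse-graining):

```python
import numpy as np

def drawdowns(prices):
    """Log-loss from each local maximum to the next local minimum."""
    p = np.asarray(prices, dtype=float)
    out, i, n = [], 0, len(p)
    while i < n - 1:
        while i < n - 1 and p[i + 1] >= p[i]:   # climb to a local maximum
            i += 1
        peak = i
        while i < n - 1 and p[i + 1] <= p[i]:   # descend to the local minimum
            i += 1
        if p[peak] > p[i]:
            out.append(np.log(p[peak] / p[i]))
    return np.array(out)

print(drawdowns([1.0, 2.0, 3.0, 2.5, 2.0, 2.2, 2.4, 1.9]))
```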
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
Graham, S L; Barling, K S; Waghela, S; Scott, H M; Thompson, J A
2005-06-10
Environmental factors that enhance either the survivability or dispersion of Salmonella enterica serovar Typhimurium (S. Typhimurium) could result in a spatial pattern of disease risk. The objectives of this study were to: (1) describe herd status based on antibody response to Salmonella Typhimurium as estimated from bulk tank milk samples and (2) to describe the resulting geographical patterns found among Texas dairy herds. Eight hundred and fifty-two bulk milk samples were collected from georeferenced dairy farms and assayed by an indirect enzyme-linked immunosorbent assay (ELISA) using S. Typhimurium lipopolysaccharide (LPS). ELISA signal-to-noise ratios for each bulk tank milk sample were calculated and used for geostatistical analyses. Best-fit parameters for the exponential theoretical variogram included a range of 438.8 km, partial sill of 0.060 and nugget of 0.106. The partial sill is the classical geostatistical term for the variance that can be explained by the herd's location and the nugget is the spatially random component of the variance. We have identified a spatial process in bulk milk tank titers for S. Typhimurium in Texas dairy herds and present a map of the expected smoothed surface. Areas with higher expected titers should be targeted in further studies on controlling Salmonella infection with environmental modifications.
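With the reported nugget, partial sill, and range, the fitted exponential variogram can be evaluated at any lag; a small sketch using those values (conventions differ on whether a quoted range is the e-folding distance or the ~95% effective range, and the factor of 3 below assumes the latter):

```python
import numpy as np

def exponential_variogram(h_km, nugget=0.106, psill=0.060, rng_km=438.8):
    """gamma(h) = nugget + psill * (1 - exp(-3h / range)) for h > 0."""
    h = np.asarray(h_km, dtype=float)
    return np.where(h > 0, nugget + psill * (1.0 - np.exp(-3.0 * h / rng_km)), 0.0)

print(exponential_variogram([50, 150, 438.8, 1000]))
```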
On methods of estimating cosmological bulk flows
NASA Astrophysics Data System (ADS)
Nusser, Adi
2016-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B for either of these definitions, which coincide only for the case of a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in top-hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it will be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard maximum likelihood estimation for B of the entire sample independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
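The standard maximum likelihood estimate mentioned above reduces to a 3×3 linear solve over the radial unit vectors and measurement errors; a schematic sketch of that generic estimator (not the WF or CMV machinery):

```python
import numpy as np

def ml_bulk_flow(rhat, u, sigma):
    """Maximum likelihood bulk flow B minimizing
    sum_i (u_i - B . rhat_i)^2 / sigma_i^2, where u_i are radial
    peculiar velocities and rhat_i unit position vectors (shape (N, 3))."""
    rhat = np.asarray(rhat, dtype=float)
    w = 1.0 / np.asarray(sigma) ** 2
    A = (w[:, None, None] * rhat[:, :, None] * rhat[:, None, :]).sum(axis=0)
    b = ((w * u)[:, None] * rhat).sum(axis=0)
    return np.linalg.solve(A, b)
```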
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds, observed in situ by the spacecraft Wind at the Lagrangian point L1 from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-zero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
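The generalized-log transform itself is a one-liner; a sketch (the parameter c must be estimated from the data, e.g. by the Box-Cox-style procedure the paper develops; the value below is arbitrary):

```python
import numpy as np

def glog(y, c):
    """Generalized-log transform ln(y + sqrt(y^2 + c^2)): behaves like
    ln(2y) for y >> c and is roughly linear near zero, stabilizing the
    variance of additive-plus-multiplicative error data."""
    y = np.asarray(y, dtype=float)
    return np.log(y + np.sqrt(y ** 2 + c ** 2))

print(glog([0.0, 10.0, 100.0, 1e4], c=50.0))  # c chosen only for illustration
```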
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
40 CFR 761.61 - PCB remediation waste.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... Subpart N of this part provides a method for collecting new site characterization data or for assessing... left after cleanup is completed. (i) Bulk PCB remediation waste. Bulk PCB remediation waste includes... similar material of minimum thickness spread over the area where remediation waste was removed or left in...
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.
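For contrast with the direct TAQMV solution, the iterative baseline it is compared against can be sketched as Gauss-Newton weighted least squares over the four unknowns; this sketch omits the altitude pseudo-measurement and is not the paper's algorithm:

```python
import numpy as np

def toa_gauss_newton(sensors, toas, sigmas, x0, c=299792.458, iters=25):
    """Iterative WLS for TOA equations. State x = [px, py, pz, t0];
    positions in km, times in s, c in km/s. Model: toa_i = t0 + |s_i - p|/c."""
    x = np.array(x0, dtype=float)
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)
    for _ in range(iters):
        diff = x[:3] - sensors                   # (M, 3) offsets to sensors
        d = np.linalg.norm(diff, axis=1)         # ranges to each sensor
        r = toas - (x[3] + d / c)                # residuals
        J = np.hstack([diff / (c * d[:, None]), np.ones((len(d), 1))])
        x += np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x
```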
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
Variability and Maintenance of Turbulence in the Very Stable Boundary Layer
NASA Astrophysics Data System (ADS)
Mahrt, Larry
2010-04-01
The relationship of turbulence quantities to mean flow quantities, such as the Richardson number, degenerates substantially for strong stability, at least in those studies that do not place restrictions on minimum turbulence or non-stationarity. This study examines the large variability of the turbulence for very stable conditions by analyzing four months of turbulence data from a site with short grass. Brief comparisons are made with three additional sites, one over short grass on flat terrain and two with tall vegetation in complex terrain. For very stable conditions, any dependence of the turbulence quantities on the mean wind speed or bulk Richardson number becomes masked by large scatter, as found in some previous studies. The large variability of the turbulence quantities is due to random variations and other physical influences not represented by the bulk Richardson number. There is no critical Richardson number above which the turbulence vanishes. For very stable conditions, the record-averaged vertical velocity variance and the drag coefficient increase with the strength of the submeso motions (wave motions, solitary waves, horizontal modes and numerous more complex signatures). The submeso motions are on time scales of minutes and not normally considered part of the mean flow. The generation of turbulence by such unpredictable motions appears to preclude universal similarity theory for predicting the surface stress for very stable conditions. Large variation of the stress direction with respect to the wind direction for the very stable regime is also examined. Needed additional work is noted.
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter with a weighted average, where the scalar weight is inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
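The univariate composite estimate described here is simply inverse-variance weighting, the scalar special case of the Kalman update; a minimal sketch:

```python
def composite_estimate(x1, v1, x2, v2):
    """Combine two prior estimates with weights inversely proportional
    to their variances; the scalar special case of the Kalman update."""
    w = v2 / (v1 + v2)
    return w * x1 + (1 - w) * x2, (v1 * v2) / (v1 + v2)

est, var = composite_estimate(10.0, 4.0, 12.0, 1.0)  # tighter estimate dominates
```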
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are in real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as the risk measure. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions of this method can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
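The historical CVaR that replaces variance as the risk measure is the mean loss in the worst (1 − α) tail; a small sketch of the empirical estimator (illustrative only; the paper embeds it in a GA search over lot-constrained weights):

```python
import numpy as np

def cvar(returns, alpha=0.95):
    """Empirical conditional value at risk: expected loss beyond the
    alpha-quantile of the loss distribution (losses = negated returns)."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)   # value at risk at level alpha
    return losses[losses >= var].mean()

rng = np.random.default_rng(1)
print(cvar(rng.normal(0.0005, 0.01, size=1000)))
```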
Microwave dielectric spectrum of rocks
NASA Technical Reports Server (NTRS)
Ulaby, F. T.; Bengal, T.; East, J.; Dobson, M. C.; Garvin, J.; Evans, D.
1988-01-01
A combination of several measurement techniques was used to investigate the dielectric properties of 80 rock samples in the microwave region. The real part of the dielectric constant, epsilon', was measured in 0.1 GHz steps from 0.5 to 18 GHz, and the imaginary part, epsilon'', was measured at five frequencies extending between 1.6 and 16 GHz. In addition to the dielectric measurements, the bulk density was measured for all the samples and the bulk chemical composition was determined for 56 of the samples. The study shows that epsilon' is frequency-dependent over the 0.5 to 18 GHz range for all rock samples, and that the bulk density rho accounts for about 50 percent of the observed variance of epsilon'. For individual rock types (by genesis), about 90 percent of the observed variance may be explained by the combination of density and the fractional contents of SiO2, Fe2O3, MgO, and TiO2. For the loss factor epsilon'', it was not possible to establish statistically significant relationships between it and the measured properties of the rock samples (density and chemical composition).
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
NASA Astrophysics Data System (ADS)
Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-02-01
In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolution. However, low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, an eigenspace-based minimum variance (EIBMV) beamformer has been employed for second harmonic USI. Tissue Harmonic Imaging (THI) is achieved by the Pulse Inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we have investigated the effects of varying the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point target and cyst phantoms), and proper EIBMV parameters are indicated for THI.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a newer algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
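For orientation, plain DAS and DMAS over already-delayed channel data can be sketched as below; MVB-DMAS replaces the DAS-like inner summations with MV-weighted sums, which this sketch does not implement:

```python
import numpy as np

def das(delayed):
    """Delay-and-sum over pre-delayed channels, shape (n_elements, n_samples)."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: sum of pairwise products of signed square
    roots, computed via sum_{i<j} s_i s_j = ((sum s)^2 - sum s^2) / 2."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum(axis=0)
    return 0.5 * (total ** 2 - (s ** 2).sum(axis=0))
```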
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Some refinements on the comparison of areal sampling methods via simulation
Jeffrey Gove
2017-01-01
The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and can vary even on a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered, one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92 s averages did not give minimum variance directions consistent with a horizontal current sheet.
Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136
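The MVDR weight vector that DM-AIS tunes has the classic closed form w = R⁻¹a / (aᴴR⁻¹a), which passes the look direction with unit gain while minimizing total output power; a generic uniform-linear-array sketch (standard MVDR only, not the DM-AIS optimizer):

```python
import numpy as np

def steering(m, theta, d=0.5):
    """ULA steering vector; m elements, spacing d in wavelengths."""
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

def mvdr_weights(R, a, loading=1e-3):
    """w = R^{-1} a / (a^H R^{-1} a), with diagonal loading for stability."""
    m = len(a)
    Rl = R + loading * np.trace(R).real / m * np.eye(m)
    Ria = np.linalg.solve(Rl, a)
    return Ria / (a.conj() @ Ria)
```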
25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...
A test of source-surface model predictions of heliospheric current sheet inclination
NASA Technical Reports Server (NTRS)
Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.
1994-01-01
The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.
Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado
Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.
2003-01-01
The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r² value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r² values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modelling, and climate change assessments.
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes, and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Infrared Emitters and Photodetectors with InAsSb Bulk Active Region
2013-04-29
(SLS) buffers on GaSb substrates [9]. By that time, 145 meV (λ = 8.6 μm) was reported to be the minimum energy gap for the bulk InAsSb alloys at 77... [Figure 5 caption fragments: (a) the band diagram of the heterostructure with the undoped bulk InAsSb0.2 layer, seen from the substrate side; (b) GaSb substrate thinned to 200 μm.] The shift of the EL energy peak compared to the PL peak at λ ≈ 10 μm is explained by band filling under electrical injection. A sublinear...
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on individuals observed over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Thermodynamics of coupled protein adsorption and stability using hybrid Monte Carlo simulations.
Zhong, Ellen D; Shirts, Michael R
2014-05-06
A better understanding of changes in protein stability upon adsorption can improve the design of protein separation processes. In this study, we examine the coupling of the folding and the adsorption of a model protein, the B1 domain of streptococcal protein G, as a function of surface attraction using a hybrid Monte Carlo (HMC) approach with temperature replica exchange and umbrella sampling. In our HMC implementation, we are able to use a molecular dynamics (MD) time step that is an order of magnitude larger than in a traditional MD simulation protocol and observe a factor of 2 enhancement in the folding and unfolding rate. To demonstrate the convergence of our systems, we measure the travel of our order parameter, the fraction of native contacts, between folded and unfolded states throughout the length of our simulations. Thermodynamic quantities are extracted with minimum statistical variance using multistate reweighting between simulations at different temperatures and harmonic distance restraints from the surface. The resultant free energies, enthalpies, and entropies of the coupled unfolding and adsorption processes are in qualitative agreement with previous experimental and computational observations, including entropic stabilization of the adsorbed, folded state relative to the bulk on surfaces with low attraction.
Bioreactor Landfills State-Of-The Practice Review
Recently approved regulations by the U.S. Environmental Protection Agency (EPA) give approved states the power to grant landfill variance under Subtitle D by allowing these landfills to introduce bulk liquids into the solid waste mass. These types of landfills are called bioreac...
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
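The jackknife procedure invoked above for sampling variances can be sketched generically: recompute the estimator on delete-one subsets and pool the spread. The snippet below is a minimal illustration assuming a scalar estimator on i.i.d. data; it is only the resampling step, not the MINQUE/AUP machinery of the paper.

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Delete-one jackknife estimate of an estimator's sampling variance."""
    n = len(data)
    theta = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 50)
var_hat = np.var(x, ddof=1)                      # estimated variance component
se_jk = np.sqrt(jackknife_variance(x, lambda d: np.var(d, ddof=1)))
print(var_hat, se_jk, var_hat / se_jk)           # t-type significance statistic
```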
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values for the priors. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
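The ANOVA route to r, and the implied minimum number of measurements, follows the standard intraclass-correlation formulas: with genotype variance σ²g and residual variance σ²e, r = σ²g / (σ²g + σ²e), and reaching a target accuracy R requires about n = R(1 − r) / (r(1 − R)) measurements. A minimal sketch under those textbook formulas, with simulated data standing in for the Brazil nut measurements:

```python
import numpy as np

def repeatability_anova(X):
    """ANOVA repeatability from a balanced (genotypes x measurements) table."""
    g, k = X.shape
    msg = k * np.sum((X.mean(axis=1) - X.mean()) ** 2) / (g - 1)  # genotype MS
    mse = np.sum((X - X.mean(axis=1, keepdims=True)) ** 2) / (g * (k - 1))
    var_g = (msg - mse) / k
    return var_g / (var_g + mse)

def min_measurements(r, accuracy=0.85):
    """Measurements needed to reach the target accuracy (R^2) of selection."""
    return int(np.ceil(accuracy * (1 - r) / (r * (1 - accuracy))))

# 75 simulated genotypes measured in 5 years (genotype sd 2, error sd 1).
rng = np.random.default_rng(2)
X = rng.normal(0.0, 2.0, (75, 1)) + rng.normal(0.0, 1.0, (75, 5))
r = repeatability_anova(X)
print(r, min_measurements(r))
```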
The effect of thermal variance on the phenotype of marine turtle offspring.
Horne, C R; Fuller, W J; Godley, B J; Rhodes, K A; Snape, R; Stokes, K L; Broderick, A C
2014-01-01
Temperature can have a profound effect on the phenotype of reptilian offspring, yet the bulk of current research considers the effects of constant incubation temperatures on offspring morphology, with few studies examining the natural thermal variance that occurs in the wild. Over two consecutive nesting seasons, we placed temperature data loggers in 57 naturally incubating clutches of loggerhead sea turtles Caretta caretta and found that greater diel thermal variance during incubation significantly reduced offspring mass, potentially reducing survival of hatchlings during their journey from the nest to offshore waters and beyond. With predicted scenarios of climate change, behavioral plasticity in nest site selection may be key for the survival of ectothermic species, particularly those with temperature-dependent sex determination.
On the design of classifiers for crop inventories
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Takacs, H. C.
1986-01-01
Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
A MAD Explanation for the Correlation between Bulk Lorentz Factor and Minimum Variability Timescale
NASA Astrophysics Data System (ADS)
Lloyd-Ronning, Nicole; Lei, Wei-hua; Xie, Wei
2018-04-01
We offer an explanation for the anti-correlation between the minimum variability timescale (MTS) in the prompt emission light curve of gamma-ray bursts (GRBs) and the estimated bulk Lorentz factor of these GRBs, in the context of a magnetically arrested disk (MAD) model. In particular, we show that previously derived limits on the maximum available energy per baryon in a Blandford-Znajek jet lead to a relationship between the characteristic MAD timescale in GRBs and the maximum bulk Lorentz factor: t_MAD ∝ Γ^(-6), somewhat steeper than (although within the error bars of) the fitted relationship found in the GRB data. Similarly, the MAD model also naturally accounts for the observed anti-correlation between MTS and gamma-ray luminosity L in the GRB data, and we estimate the accretion rates of the GRB disk (given these luminosities) in the context of this model. Both of these correlations (MTS-Γ and MTS-L) are also observed in the AGN data, and we discuss the implications of our results in the context of both GRB and blazar systems.
A second-order bulk boundary-layer model
NASA Technical Reports Server (NTRS)
Randall, David A.; Shao, Qingqiu; Moeng, Chin-Hoh
1992-01-01
Bulk mass-flux models represent the large eddies that are primarily responsible for the turbulent fluxes in the planetary boundary layer as convective circulations, with an associated convective mass flux. In order for such models to be useful, it is necessary to determine the fractional area covered by rising motion in the convective circulations. This fraction can be used as an estimate of the cloud amount, under certain conditions. 'Matching' conditions have been developed that relate the convective mass flux to the ventilation and entrainment mass fluxes. These are based on conservation equations for the scalar means and variances in the entrainment and ventilation layers. Methods are presented to determine both the fractional area covered by rising motion and the convective mass flux. The requirement of variance balance is used to relax the 'well-mixed' assumption. The vertical structures of the mean state and the turbulent fluxes are determined analytically. Several aspects of this simple model's formulation are evaluated using results from large-eddy simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albrecht, Bruce; Fang, Ming; Ghate, Virendra
2016-02-01
Observations from an upward-pointing Doppler cloud radar are used to examine cloud-top entrainment processes and parameterizations in a non-precipitating continental stratocumulus cloud deck maintained by time-varying surface buoyancy fluxes and cloud-top radiative cooling. Radar and ancillary observations of unbroken, non-precipitating stratocumulus clouds were made at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site near Lamont, Oklahoma for a 14-hour period starting 0900 Central Standard Time on 25 March 2005. The vertical velocity variance and energy dissipation rate (EDR) terms in a parameterized turbulence kinetic energy (TKE) budget of the entrainment zone are estimated using the radar vertical velocity and the radar spectrum width observations from the upward-pointing millimeter cloud radar (MMCR) operating at the SGP site. Hourly averages of the vertical velocity variance term in the TKE entrainment formulation correlate strongly (r = 0.72) with the dissipation rate term in the entrainment zone. However, the ratio of the variance term to the dissipation decreases at night due to decoupling of the boundary layer. When the night-time decoupling is accounted for, the correlation between the variance and the EDR terms increases (r = 0.92). To obtain bulk coefficients for the entrainment parameterizations derived from the TKE budget, independent estimates of entrainment were obtained from an inversion-height budget using ARM SGP observations of the local time derivative and the horizontal advection of the cloud-top height. The large-scale vertical velocity at the inversion needed for this budget was taken from ECMWF reanalysis. This budget gives a mean entrainment rate for the observing period of 0.76 ± 0.15 cm/s. This mean value is applied to the TKE budget parameterizations to obtain the bulk coefficients needed in these parameterizations. These bulk coefficients are compared with those from previous studies and are used in the parameterizations to give hourly estimates of the entrainment rates from the radar-derived vertical velocity variance and dissipation rates. Hourly entrainment rates were also estimated from a convective velocity (w*) parameterization that depends on the local surface buoyancy fluxes and the calculated radiative flux divergence, using a bulk coefficient obtained from the mean inversion-height budget. The hourly rates from the cloud turbulence estimates and the w* parameterization, which is independent of the radar observations, are compared with the hourly w_e values from the budget. All show rough agreement with each other and capture the entrainment variability associated with substantial changes in the surface flux and radiative divergence at cloud top. Major uncertainties in the hourly estimates from the height budget and w* are discussed. The results indicate a strong potential for making entrainment rate estimates directly from the radar vertical velocity variance and EDR measurements, a technique that has distinct advantages over other methods for estimating entrainment rates. Calculations based on the EDR alone can provide high temporal resolution (for averaging intervals as small as 10 minutes) of the entrainment processes and do not require an estimate of the boundary-layer depth, which can be difficult to define when the boundary layer is decoupled.
Minimum-variance Brownian motion control of an optically trapped probe.
Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang
2009-10-20
This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 microm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance in the case in which the system is time varying when operating the actively controlled optical trap in a complex environment.
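As a rough illustration of the minimum-variance feedback idea, one can simulate a first-order discrete model of the trapped probe with delayed proportional feedback and search for the gain that minimizes the stationary position variance. All parameter values below are invented for the sketch; they are not the paper's calibrated trap model.

```python
import numpy as np

def stationary_variance(gain, a=0.9, b=0.05, noise=1.0, delay=1,
                        steps=6000, seed=0):
    """Position variance of x[k+1] = a*x[k] + b*u[k] + w[k] under the
    delayed proportional law u[k] = -gain * x[k - delay]."""
    rng = np.random.default_rng(seed)
    x = np.zeros(steps)
    for k in range(delay, steps - 1):
        u = -gain * x[k - delay]              # feedback on delayed measurement
        x[k + 1] = a * x[k] + b * u + rng.normal(0.0, noise)
    return x[steps // 2:].var()               # discard the transient

gains = np.linspace(0.0, 15.0, 61)
variances = [stationary_variance(g) for g in gains]
print("open-loop variance:", stationary_variance(0.0))
print("minimum-variance gain:", gains[int(np.argmin(variances))])
```

An adaptive version would re-estimate a and b online and update the gain accordingly, in the spirit of the adaptive minimum variance control mentioned above.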
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alignment.
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
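For reference, the MVDR weights named above solve min w^H R w subject to w^H d = 1, giving w = R^{-1} d / (d^H R^{-1} d), where R is the correlation matrix and d the steering vector toward the focal point. Below is a minimal numpy sketch with a toy correlation matrix; the diagonal loading is a common regularization choice, not something specified by this paper.

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    """MVDR weights: minimize w^H R w subject to w^H d = 1."""
    M = len(d)
    Rl = R + loading * np.trace(R).real / M * np.eye(M)  # diagonal loading
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example: 8 sources, unit-modulus steering phases, stand-in correlation.
M = 8
d = np.exp(1j * 2 * np.pi * 0.1 * np.arange(M))
R = np.eye(M) + 0.2 * np.outer(d, d.conj())
w = mvdr_weights(R, d)
print(abs(w.conj() @ d))   # ~1: distortionless toward the focal point
```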
RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
Pontes Júnior, V A; Melo, P G S; Pereira, H S; Melo, L C
2016-09-02
Grain yield is strongly influenced by the environment, has polygenic and complex inheritance, and is a key trait in the selection and recommendation of cultivars. Breeding programs should efficiently explore the genetic variability resulting from crosses by selecting the most appropriate method for breeding in segregating populations. The goal of this study was to evaluate and compare the genetic potential of common bean progenies of carioca grain for grain yield, obtained by different breeding methods and evaluated in different environments. Progenies originating from crosses between the lines CNFC 7812 and CNFC 7829 were advanced up to the F7 generation using three breeding methods in segregating populations: population (bulk), bulk within F2 progenies, and single-seed descent (SSD). Fifteen F8 progenies per method, two controls (BRS Estilo and Perola), and the parents were evaluated in a 7 x 7 simple lattice design, with plots of two 4-m rows. The tests were conducted in 10 environments in four states of Brazil and in three growing seasons in 2009 and 2010. Genetic parameters including genetic variance, heritability, variance of interaction, and expected selection gain were estimated. Genetic variability among progenies and the effect of progeny-environment interactions were determined for the three methods. The breeding methods differed significantly due to the effects of sampling procedures on the progenies and due to natural selection, which mainly affected the bulk method. The SSD and bulk methods provided populations with better estimates of genetic parameters and more stable progenies that were less affected by interaction with the environment.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the shortcomings of DAS, providing higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated; the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
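The DMAS algebra that EIBMV-DMAS expands can be written compactly: each delayed channel is sign-preservingly square-rooted, and all pairwise products are summed, so the output retains the dimensionality of the input. A minimal sketch follows, assuming the channel signals have already been delay-aligned; the pairwise sum uses the identity that the sum over i<j of s_i*s_j equals ((Σs)² − Σs²)/2.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum over an already time-aligned (channels, samples) stack."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: signed square root, then all pair products."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum(axis=0)
    return 0.5 * (total ** 2 - (s ** 2).sum(axis=0))  # sum over channel pairs

rng = np.random.default_rng(3)
aligned = np.tile(np.sin(np.linspace(0, 4 * np.pi, 256)), (16, 1))
aligned += 0.3 * rng.normal(size=aligned.shape)       # 16 noisy channels
print(das(aligned).shape, dmas(aligned).shape)
```

EIBMV-DMAS combines this expansion with eigenspace-based minimum variance weighting; the sketch shows only the DMAS part.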
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
High-precision navigation algorithms are essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter is designed to solve this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, yielding a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th-order Runge-Kutta method to calculate particle trajectories in a time-reversed scenario. The test particle magnetic field environment imitates the curved magnetic field in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits, so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside the magnetodisc current sheet or in the lobe regions. We also discuss the code's applicability to comets.
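A 4th-order Runge-Kutta tracer of the kind described is short to write down. The sketch below integrates the magnetic Lorentz force for a proton in a uniform field; a negative time step gives the time-reversed tracing. The field model and numbers are placeholders, not the study's Titan field environment.

```python
import numpy as np

def rk4_step(state, dt, qm, b_field):
    """One RK4 step for a charged particle; state = [x, y, z, vx, vy, vz].

    qm is the charge-to-mass ratio and b_field(r) returns the local
    magnetic field. Use dt < 0 to trace the trajectory backwards in time.
    """
    def deriv(s):
        r, v = s[:3], s[3:]
        return np.concatenate([v, qm * np.cross(v, b_field(r))])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Proton gyrating in a uniform 5 nT field, traced backwards in time.
qm = 1.602e-19 / 1.673e-27                       # proton charge/mass (C/kg)
b = lambda r: np.array([0.0, 0.0, 5e-9])         # uniform field (T)
s = np.array([0.0, 0.0, 0.0, 1e5, 0.0, 0.0])     # r in m, v in m/s
for _ in range(1000):
    s = rk4_step(s, dt=-1e-3, qm=qm, b_field=b)
print(s[:3])
```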
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulences of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfven wave energy into random magnetosonic waves.
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results for identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC = 0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
NASA Technical Reports Server (NTRS)
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
Maxon and roton measurements in nanoconfined 4He
NASA Astrophysics Data System (ADS)
Bryan, M. S.; Sokol, P. E.
2018-05-01
We investigate the behavior of the collective excitations of adsorbed 4He in an ordered hexagonal mesopore, examining the crossover from a thin film to a confined fluid. Here, we present the inelastic scattering results as a function of filling at constant temperature. We find a monotonic transition of the maxon excitation as a function of filling. This has been interpreted as corresponding to an increasing density of the adsorbed helium, which approaches the bulk value as filling increases. The roton minimum exhibits a more complicated behavior that does not monotonically approach bulk values as filling increases. The full pore scattering resembles the bulk liquid accompanied by a layer mode. The maxon and roton scattering, taken together, at intermediate fillings does not correspond to a single bulk liquid dispersion at negative, low, or high pressure.
Crystallite-size dependency of the pressure and temperature response in nanoparticles of magnesia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodenbough, Philip P.; Chan, Siu-Wai
We have carefully measured the hydrostatic compressibility and thermal expansion for a series of magnesia nanoparticles. We found a strong variation in these mechanical properties as crystallite size changed. For decreasing crystallite sizes, the bulk modulus first increased, reached a modest maximum of 165 GPa at an intermediate crystallite size of 14 nm, and then decreased to 77 GPa at 9 nm. Thermal expansion, meanwhile, decreased continuously to 70% of the bulk value at 9 nm. These results are consistent with those for nano-ceria and together provide important insights into the thermal-mechanical structural properties of oxide nanoparticles.
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
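The speckle variance computation itself is a per-pixel interframe variance over a small gate of registered structural frames (n = 4 here). A minimal sketch, with synthetic frames standing in for OCT data:

```python
import numpy as np

def speckle_variance(frames):
    """Per-pixel variance across an (N, rows, cols) stack of registered
    structural frames; moving scatterers decorrelate and light up."""
    return frames.var(axis=0)

rng = np.random.default_rng(4)
static = rng.rayleigh(1.0, (512, 512))            # frozen speckle background
stack = np.repeat(static[None], 4, axis=0)        # n = 4 frame gate
stack[:, 200:220, 100:140] = rng.rayleigh(1.0, (4, 20, 40))  # decorrelating "vessel"
sv = speckle_variance(stack)
print(sv[210, 120] > sv[50, 50])                  # vessel >> static tissue
```

On a GPU this reduces to the same mean-of-squares minus square-of-mean per pixel, which is why the calculation parallelizes so well.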
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
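A sum of first-order (Gauss-Markov) processes with staggered correlation times is a convenient way to approximate a broad power-law spectrum of the kind implied by an Allan variance model. A minimal sketch; the correlation times and amplitudes below are illustrative stand-ins, not values fitted to any real oscillator:

```python
import numpy as np

def markov_series(tau, sigma, dt, n, rng):
    """Discrete first-order Markov (Gauss-Markov) process with
    correlation time tau and stationary standard deviation sigma."""
    phi = np.exp(-dt / tau)
    drive = sigma * np.sqrt(1.0 - phi ** 2) * rng.normal(size=n)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + drive[k]
    return x

rng = np.random.default_rng(5)
dt, n = 1.0, 2 ** 14
taus = [3.0, 30.0, 300.0, 3000.0, 30000.0]      # five staggered time constants
sigmas = [1.0, 0.6, 0.35, 0.2, 0.12]
clock_error = sum(markov_series(t, s, dt, n, rng) for t, s in zip(taus, sigmas))
print(clock_error.std())
```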
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total current acreage estimate.
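The optimal-weighting step is, in the scalar case, ordinary inverse-variance weighting: unbiased estimates of the same quantity are combined with weights proportional to 1/variance, which minimizes the variance of the combination. A small sketch with hypothetical stratum numbers:

```python
import numpy as np

def min_variance_combine(estimates, variances):
    """Minimum-variance unbiased combination of independent unbiased
    estimates: weights proportional to the reciprocal variances."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()
    return np.dot(w, estimates), 1.0 / (1.0 / v).sum()

# Hypothetical stratum acreage: a direct satellite estimate and a
# ratio-model estimate over historical acreage, combined optimally.
est, var = min_variance_combine([1200.0, 1350.0], [90.0 ** 2, 60.0 ** 2])
print(est, var ** 0.5)   # the combination beats either input's std. error
```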
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, to obtain both the high spatial resolution of a beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and in real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG data from two healthy subjects with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of minimum-variance beamformer called weight-normalized linearly-constrained minimum variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis
2006-01-01
The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete-time system is studied, along with the design of compensators to achieve minimum variance control strategies for single-input, single-output systems. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum-order Luenberger observer and is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways: two working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
Motivation, Engagement, and Social Climate: An International Study of Boarding Schools
ERIC Educational Resources Information Center
Martin, Andrew J.; Papworth, Brad; Ginns, Paul; Malmberg, Lars-Erik
2016-01-01
Most educational climate research is conducted among (day school) students who spend the bulk of their young lives outside of school, potentially limiting the amount of climate variance that can be captured. Boarding school students, on the other hand, spend much of their lives at school and thus offer a potentially unique perspective on…
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
Dong, Yu-Ping; Liu, Xue-Yan; Sun, Xin-Chao; Song, Wei; Zheng, Xu-Dong; Li, Rui; Liu, Cong-Qiang
2017-11-01
Moss nitrogen (N) concentrations and natural 15N abundance (δ15N values) have been widely employed to evaluate annual levels and major sources of atmospheric N deposition. However, because different moss species and one-off sampling were often used in previous studies, it remains unclear whether moss N parameters differ with species and sampling time, which has prevented more accurate assessment of N deposition via moss surveys. Here, concentrations and isotopic ratios of bulk carbon (C) and bulk N in natural epilithic mosses (Bryum argenteum, Eurohypnum leptothallum, Haplocladium microphyllum and Hypnum plumaeforme) were measured monthly from August 2006 to August 2007 at Guiyang, SW China. H. plumaeforme had significantly (P < 0.05) lower bulk N concentrations and higher δ13C values than the other species. Moss N concentrations were significantly (P < 0.05) lower in warmer months than in cooler months, while moss δ13C values exhibited the opposite pattern. Variance component analyses showed that species differences contributed more to the variation of moss N concentrations and δ13C values than did sampling time. In contrast, δ15N values did not differ significantly between moss species, and their variance mainly reflected variations in assimilated N sources, with ammonium as the dominant contributor. These results unambiguously reveal the influence of inter-species and intra-annual variations in moss N utilization on N deposition assessment.
NASA Astrophysics Data System (ADS)
Goeritno, Arief; Rasiman, Syofyan
2017-06-01
The performance of the bulk oil circuit breaker (BOCB) at the Bogor Baru substation (the State Electricity Company, PLN), as influenced by its parameters, has been examined. It was found that (1) the dielectric strength of the oil still qualifies it as an insulating and cooling medium, because the average measured value remains above the minimum allowed limit of 80 kV/2.5 cm (32 kV/cm); (2) the simultaneity of the circuit breaker's contacts is still acceptable, so the BOCB can still be operated, because the time difference between the highest and lowest values when the contacts open/close is less than (Δt <) 10 milliseconds (meeting the PLN standards as recommended by Alsthom); and (3) the resistance parameters meet the standards, where (i) the insulation resistance is far above the allowed threshold, the minimum standard being above 2,000 MΩ (ANSI) or 2,000 MΩ (PLN), and (ii) the contact resistance is well within the allowed threshold, the standard being below 350 µΩ (ANSI) or 200 µΩ (PLN). The grounding resistance equals the specified maximum limit, the maximum standard being 0.5 Ω (PLN).
Code of Federal Regulations, 2014 CFR
2014-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Code of Federal Regulations, 2013 CFR
2013-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Tsang, Sai-Wing; Chen, Song; So, Franky
2013-05-07
Using charge modulated electroabsorption spectroscopy (CMEAS), the energy level alignment of a polymer:fullerene bulk heterojunction photovoltaic cell is directly measured for the first time. The charge-transfer excitons generated by sub-bandgap optical pumping are coupled with the modulating electric field and introduce subtle changes in optical absorption in the sub-bandgap region. The minimum energy required for sub-bandgap charge generation is defined as the effective bandgap.
Bulk locality and boundary creating operators
Nakayama, Yu; Ooguri, Hirosi
2015-10-19
Here, we formulate a minimum requirement for CFT operators to be localized in the dual AdS. In any spacetime dimension, we show that a general solution to the requirement is a linear superposition of operators creating spherical boundaries in the CFT, with the dilatation by the imaginary unit from their centers. This generalizes the recent proposal by Miyaji et al. for bulk local operators in three-dimensional AdS. We show that Ishibashi states for the global conformal symmetry in any dimension, with the imaginary dilatation, obey free field equations in AdS, and that incorporating bulk interactions requires their superposition. We also comment on the recent proposals by Kabat et al. and by H. Verlinde.
NASA Astrophysics Data System (ADS)
Huang, Huan; Zheng, Jun; Zheng, Botian; Qian, Nan; Li, Haitao; Li, Jipeng; Deng, Zigang
2017-10-01
In order to clarify the correlation between the magnetic flux and the levitation force of a high-temperature superconducting (HTS) bulk, we measured the magnetic flux density on the bottom and top surfaces of a bulk superconductor while moving it vertically above a permanent magnet guideway (PMG). The levitation force of the bulk superconductor was measured simultaneously. In this study, the HTS bulk was moved down and up three times between the field-cooling position and the working position above the PMG, followed by a relaxation measurement of 300 s at the minimum-height position. During these processes, the magnetic flux density and levitation force of the bulk superconductor were recorded by a multipoint magnetic field measurement platform and a self-developed maglev measurement system, respectively. The magnetic flux density on the bottom surface reflects the induced field in the superconductor, while that on the top reveals the penetrated magnetic flux. The results show that the magnetic flux density and levitation force of the bulk superconductor are directly correlated through the inner supercurrent. Overall, this work is instructive for understanding the connection between the magnetic flux density, the inner current density and the levitation behavior of HTS bulks employed in a maglev system. Meanwhile, this magnetic flux density measurement method enriches the existing experimental evaluation methods for maglev systems.
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Husby, Arild; Gustafsson, Lars; Qvarnström, Anna
2012-01-01
The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis.
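Under the normal model, the simple estimator mentioned above is π̂ = Φ(d), where d is the standardized mean difference; a confidence interval for π follows by transforming the large-sample interval for d through Φ. A minimal sketch under those normal-theory assumptions (not the exact small-sample distribution derived in the paper):

```python
import numpy as np
from scipy import stats

def overlap_pi(x_treat, x_ctrl, alpha=0.05):
    """pi-hat = Phi(d) and a CI from the large-sample variance of d."""
    n1, n2 = len(x_treat), len(x_ctrl)
    sp = np.sqrt(((n1 - 1) * x_treat.var(ddof=1) +
                  (n2 - 1) * x_ctrl.var(ddof=1)) / (n1 + n2 - 2))
    d = (x_treat.mean() - x_ctrl.mean()) / sp        # standardized difference
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(d), stats.norm.cdf([d - z * se, d + z * se])

rng = np.random.default_rng(6)
pi_hat, ci = overlap_pi(rng.normal(0.5, 1.0, 40), rng.normal(0.0, 1.0, 40))
print(pi_hat, ci)
```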
NASA Technical Reports Server (NTRS)
Kitabatake, M.; Fons, P.; Greene, J. E.
1991-01-01
The relaxation, diffusion, and annihilation of split and hexagonal interstitials resulting from 10 eV Si irradiation of (2x1)-terminated Si(100) are investigated. Molecular dynamics and quasidynamics simulations, utilizing the Tersoff many-body potential, are used in the investigation. The interstitials are created in layers two through six, and stable atomic configurations and total potential energies are derived as a function of site symmetry and layer depth. The interstitial Si atoms are allowed to diffuse, and the total potential energy changes are calculated. Lattice configurations along each path, as well as the starting configurations, are relaxed, and minimum energy diffusion paths are derived. The results show that the minimum energy paths are toward the surface and generally involve tetrahedral sites. The calculated interstitial migration activation energies are always less than 1.4 eV and are much lower in the near-surface region than in the bulk.
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during treadmill walking was analyzed for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (β) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p < 0.01) different between the young and healthy elderly groups. Results also suggest that the β between scales 1 and 2 is effective for recognizing falls-risk gait patterns. These results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity for preemptive measures to avoid injurious falls.
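The scale-wise variance and multiscale exponent can be computed directly from a discrete wavelet decomposition. A minimal sketch using PyWavelets, with a synthetic series standing in for the per-stride MTC sequence; the wavelet choice and series are assumptions for illustration:

```python
import numpy as np
import pywt

def multiscale_exponent(signal, wavelet="db4", levels=8):
    """Variance of DWT detail coefficients at scales `levels`..1 and the
    slope (multiscale exponent beta) of log2(variance) versus scale."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    details = coeffs[1:]                   # [cD_levels, ..., cD_1]
    scales = np.arange(levels, 0, -1)
    variances = np.array([np.var(d) for d in details])
    beta = np.polyfit(scales, np.log2(variances), 1)[0]
    return scales, variances, beta

rng = np.random.default_rng(7)
mtc = 15.0 + 0.01 * np.cumsum(rng.normal(size=4096)) + rng.normal(0, 1, 4096)
scales, variances, beta = multiscale_exponent(mtc)
print(beta)
```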
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population sizes of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
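The standard corrections behind these adjustments are compact enough to state directly. A sketch of three of them (Wright's classical formulas; the numbers are illustrative):

```python
def ne_sex_ratio(males, females):
    """Unequal sex ratio: Ne = 4*Nm*Nf / (Nm + Nf)."""
    return 4.0 * males * females / (males + females)

def ne_family_size(n, var_k):
    """Non-Poisson variance in progeny number: Ne ~ (4N - 2) / (Vk + 2)."""
    return (4.0 * n - 2.0) / (var_k + 2.0)

def ne_fluctuating(sizes):
    """Fluctuating population: harmonic mean of per-generation sizes."""
    return len(sizes) / sum(1.0 / n for n in sizes)

# A census well above 50 can still fall short of Ne = 50 after correction.
print(ne_sex_ratio(20, 100))            # ~66.7
print(ne_family_size(60, 6.0))          # ~29.8
print(ne_fluctuating([150, 40, 150]))   # ~78.3
```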
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline level of ITV_WI for the two traits and derived the minimum and optimal sampling sizes needed to account for ITV_WI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum sample size that adequately captured the studied functional traits was 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions on genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype × environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high selection accuracy (0.86 and 0.89) associated with the high heritability of the genotype mean (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
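The reported selection accuracies follow from the heritability of the genotype mean in standard REML/BLUP practice. The sketch below shows that textbook relationship only; the variance components are illustrative values loosely inspired by the reported percentages, and this is not the authors' actual pipeline.

```python
# Hedged sketch: heritability on a genotype-mean basis and selection
# accuracy from REML variance components (standard formulas; not the
# authors' exact analysis). v_g, v_ge, v_e are genotypic, GxE, and
# residual variances; n_env environments, n_rep replications per trial.

def heritability_of_mean(v_g, v_ge, v_e, n_env, n_rep):
    """h2_mean = Vg / (Vg + Vge/e + Ve/(e*r))."""
    return v_g / (v_g + v_ge / n_env + v_e / (n_env * n_rep))

def selection_accuracy(h2_mean):
    """Accuracy of genotypic-value prediction = sqrt(h2_mean)."""
    return h2_mean ** 0.5

# Illustrative components: tiny genotypic variance relative to GxE and
# residual, echoing why direct selection on yield is difficult here.
h2 = heritability_of_mean(v_g=0.4, v_ge=20.8, v_e=78.8, n_env=3, n_rep=4)
print(h2, selection_accuracy(h2))
```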
NASA Astrophysics Data System (ADS)
Vikram, Ajit; Chowdhury, Prabudhya Roy; Phillips, Ryan K.; Hoorfar, Mina
2016-07-01
This paper describes a measurement technique developed for the determination of the effective electrical bulk resistance of the gas diffusion layer (GDL) and the contact resistance distribution at the interface of the GDL and the bipolar plate (BPP). The novelty of this study is the measurement and separation of the bulk and contact resistance under inhomogeneous compression, which occurs in an actual fuel cell assembly due to the presence of the channels and ribs on the bipolar plates. The measurement of the electrical contact resistance, which contributes nearly two-thirds of the ohmic losses in the fuel cell assembly, shows a non-linear distribution along the GDL/BPP interface. The effective bulk resistance of the GDL under inhomogeneous compression showed a decrease of nearly 40% compared to that estimated for homogeneous compression at different compression pressures. Such a decrease in the effective bulk resistance under inhomogeneous compression could be due to the non-uniform distribution of pressure under the ribs and the channels. This measurement technique can be used to identify optimum GDL, BPP and channel-rib structures based on minimum bulk and contact resistances measured under inhomogeneous compression.
46 CFR 148.70 - Dangerous cargo manifest; general.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Dangerous cargo manifest; general. 148.70 Section 148.70 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.70 Dangerous cargo...
46 CFR 148.70 - Dangerous cargo manifest; general.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Dangerous cargo manifest; general. 148.70 Section 148.70 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.70 Dangerous cargo...
46 CFR 148.70 - Dangerous cargo manifest; general.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Dangerous cargo manifest; general. 148.70 Section 148.70 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.70 Dangerous cargo...
46 CFR 148.70 - Dangerous cargo manifest; general.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Dangerous cargo manifest; general. 148.70 Section 148.70 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.70 Dangerous cargo...
46 CFR 148.80 - Supervision of cargo transfer.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Supervision of cargo transfer. 148.80 Section 148.80 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.80 Supervision of cargo...
46 CFR 148.80 - Supervision of cargo transfer.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Supervision of cargo transfer. 148.80 Section 148.80 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.80 Supervision of cargo...
46 CFR 148.80 - Supervision of cargo transfer.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Supervision of cargo transfer. 148.80 Section 148.80 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.80 Supervision of cargo...
46 CFR 148.80 - Supervision of cargo transfer.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Supervision of cargo transfer. 148.80 Section 148.80 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.80 Supervision of cargo...
46 CFR 148.100 - Log book entries.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Log book entries. 148.100 Section 148.100 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.100 Log book entries. During...
46 CFR 148.100 - Log book entries.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Log book entries. 148.100 Section 148.100 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.100 Log book entries. During...
46 CFR 148.100 - Log book entries.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Log book entries. 148.100 Section 148.100 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.100 Log book entries. During...
46 CFR 148.100 - Log book entries.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Log book entries. 148.100 Section 148.100 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.100 Log book entries. During...
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
Precise and efficient noise variance estimation is very important for processing all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by fluctuations in noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, and favorably maintain the dynamic characteristics of the signals.
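As a rough illustration of the two ingredients named above, the sketch below pairs a classical MAD-based noise-variance estimate with one common soft/hard compromise threshold. Both stand-ins are assumptions: the paper itself classifies the finest-scale coefficients with a two-state Gaussian mixture and defines its own threshold function.

```python
import numpy as np

# Hedged sketch: a soft/hard compromise threshold plus the classical
# MAD-based noise estimate from finest-scale detail coefficients.

def estimate_noise_sigma(detail_coeffs: np.ndarray) -> float:
    """Donoho's robust estimate: sigma = median(|d|) / 0.6745."""
    return np.median(np.abs(detail_coeffs)) / 0.6745

def improved_threshold(w: np.ndarray, lam: float, alpha: float = 2.0) -> np.ndarray:
    """Shrinks like the soft threshold near lam but approaches the
    hard threshold (no bias) for large |w|."""
    out = np.zeros_like(w)
    keep = np.abs(w) >= lam
    shrink = lam * np.exp(-alpha * (np.abs(w[keep]) - lam))
    out[keep] = np.sign(w[keep]) * (np.abs(w[keep]) - shrink)
    return out

rng = np.random.default_rng(0)
d = rng.normal(0.0, 0.1, 1024)               # stand-in detail coefficients
sigma = estimate_noise_sigma(d)
lam = sigma * np.sqrt(2 * np.log(d.size))    # universal threshold
print(sigma, improved_threshold(d, lam)[:5])
```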
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
2014-03-27
[Thesis front-matter residue: list-of-figures entries (number of hops, number of sensors, standard deviation vs. Ns, bias) and an acronym list including MTM (multiple taper method), MUSIC (multiple signal classification), MVDR (minimum variance distortionless response), PSK (phase shift keying), and QAM (quadrature amplitude modulation).]
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as the local filter to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate the globally optimal state estimate by fusing the local estimates. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
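The "principle of linear minimum variance" invoked here reduces, for two independent local estimates, to the familiar information-weighted combination. The sketch below shows that two-estimate special case only; the paper's N-filter unscented fusion is not reproduced.

```python
import numpy as np

# Hedged sketch of linear minimum-variance fusion for two independent
# local estimates (x1, P1) and (x2, P2): information-weighted averaging.

def lmv_fuse(x1, P1, x2, P2):
    """Fused covariance P = (P1^-1 + P2^-1)^-1 and
    fused state x = P (P1^-1 x1 + P2^-1 x2)."""
    P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    x = P @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.diag([0.5, 1.0])
x2, P2 = np.array([1.2, 0.1]), np.diag([1.0, 0.2])
x, P = lmv_fuse(x1, P1, x2, P2)
print(x, np.diag(P))  # fused variance <= each local variance, per axis
```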
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
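For a small LTI model, the Riccati-based gain computation the authors are accelerating can be written directly against SciPy's dense solver, as in the sketch below. The matrices are toy values, not an AO turbulence model; the point is that this dense solve is exactly what scales badly with aperture size.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hedged sketch: steady-state Kalman gain for a toy LTI system by
# solving the discrete algebraic Riccati equation with a dense solver.
A = np.array([[0.99, 0.1], [0.0, 0.95]])   # state transition
C = np.array([[1.0, 0.0]])                 # measurement matrix
Q = 0.01 * np.eye(2)                       # process noise covariance
R = np.array([[0.1]])                      # measurement noise covariance

P = solve_discrete_are(A.T, C.T, Q, R)     # steady-state prediction covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # asymptotic Kalman gain
print(K)
```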
Abu El-Enin, Mohammed Abu Bakr; Al-Ghaffar Hammouda, Mohammed El-Sayed Abd; El-Sherbiny, Dina Tawfik; El-Wasseef, Dalia Rashad; El-Ashry, Saadia Mahmoud
2016-02-01
A valid, sensitive and rapid spectrofluorimetric method has been developed and validated for the determination of both tadalafil (TAD) and vardenafil (VAR), either in their pure form, in their tablet dosage forms, or spiked in human plasma. The method is based on measurement of the native fluorescence of both drugs in acetonitrile at λem 330 and 470 nm after excitation at 280 and 275 nm for tadalafil and vardenafil, respectively. Linear relationships were obtained over the concentration ranges 4-40 and 10-250 ng/mL, with minimum detection limits of 1 and 3 ng/mL for tadalafil and vardenafil, respectively. Various experimental parameters affecting the fluorescence intensity were carefully studied and optimized. The developed method was applied successfully to the determination of tadalafil and vardenafil in bulk drugs and tablet dosage forms. Moreover, the high sensitivity of the proposed method permitted their determination in spiked human plasma. The developed method was validated in terms of specificity, linearity, lower limit of quantification (LOQ), lower limit of detection (LOD), precision and accuracy. The mean recoveries of the analytes in pharmaceutical preparations were in agreement with those obtained from the comparison methods, as revealed by statistical analysis of the obtained results using Student's t-test and the variance ratio F-test. Copyright © 2015 John Wiley & Sons, Ltd.
46 CFR 148.03-7 - During transport.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false During transport. 148.03-7 Section 148.03-7 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF SOLID HAZARDOUS MATERIALS IN BULK Minimum Transportation Requirements § 148.03-7 During transport. During the transport of a...
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
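The static mean-variance baseline being contrasted here has a closed form for the global minimum-variance portfolio, w = Σ⁻¹1/(1ᵀΣ⁻¹1). The sketch below computes it from simulated weekly returns; the data are placeholders, not the FTSE Bursa Malaysia series used in the paper.

```python
import numpy as np

# Hedged sketch: global minimum-variance weights from a sample
# covariance of (hypothetical) weekly returns for 5 assets.
rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.02, size=(260, 5))  # 260 weeks x 5 assets

S = np.cov(returns, rowvar=False)   # sample covariance matrix
ones = np.ones(S.shape[0])
w = np.linalg.solve(S, ones)        # S^-1 * 1 without explicit inverse
w /= ones @ w                       # normalize so weights sum to 1
print(w, w @ S @ w)                 # weights and resulting portfolio variance
```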
NASA Astrophysics Data System (ADS)
Rudnick, R. L.; Liu, X.
2011-12-01
The continental crust has an "intermediate" bulk composition that is distinct from primary melts of peridotitic mantle (basalt or picrite). This mismatch between the "building blocks" and the "edifice" of the continental crust points to the operation of processes that preferentially remove mafic to ultramafic material from the continents. Such processes include lower crustal recycling (via density foundering or lower crustal subduction - e.g., relamination, Hacker et al., 2011, EPSL), generation of evolved melts via slab melting, and/or chemical weathering. Stable isotope systems document the influence of chemical weathering on the bulk crust composition: the oxygen isotope composition of the bulk crust is distinctly heavier than that of primary, mantle-derived melts (Simon and Lecuyer, 2005, G-cubed) and the Li isotopic composition of the bulk crust is distinctly lighter than that of mantle-derived melts (Teng et al., 2004, GCA; 2008, Chem. Geol.). Both signatures mark the imprint of chemical weathering on the bulk crust composition. Here, we use a simple mass balance model for lithium inputs and outputs from the continental crust to quantify the mass lost due to chemical weathering. We find that a minimum of 15%, a maximum of 60%, and a best estimate of ~40% of the original juvenile rock mass may have been lost via chemical weathering. The accumulated percentage of mass loss due to chemical weathering leads to an average global chemical weathering rate (CWR) of ~ 8×10^9 to 2×10^10 t/yr since 3.5 Ga, which is about an order of magnitude higher than the minimum estimates based on modern rivers (Gaillardet et al., 1999, Chem. Geol.). While we cannot constrain the exact portion of crustal mass loss via chemical weathering, given the uncertainties of the calculation, we can demonstrate that the weathering flux is non-zero. Therefore, chemical weathering must play a role in the evolution of the composition and mass of the continental crust.
Constraints on continental crustal mass loss via chemical weathering using lithium and its isotopes
NASA Astrophysics Data System (ADS)
Rudnick, R. L.; Liu, X. M.
2012-04-01
The continental crust has an "intermediate" bulk composition that is distinct from primary melts of peridotitic mantle (basalt or picrite). This mismatch between the "building blocks" and the "edifice" that is the continental crust points to the operation of processes that preferentially remove mafic to ultramafic material from the continents. Such processes include lower crustal recycling (via density foundering or lower crustal subduction - e.g., relamination, Hacker et al., 2011, EPSL), generation of evolved melts via slab melting, and/or chemical weathering. Stable isotope systems point to the influence of chemical weathering on the bulk crust composition: the oxygen isotope composition of the bulk crust is distinctly heavier than that of primary, mantle-derived melts (Simon and Lecuyer, 2005, G-cubed) and the Li isotopic composition of the bulk crust is distinctly lighter than that of mantle-derived melts (Teng et al., 2004, GCA; 2008, Chem. Geol.). Both signatures mark the imprint of chemical weathering on the bulk crust composition. Here, we use a simple mass balance model for lithium inputs and outputs from the continental crust to quantify the mass lost due to chemical weathering. We find that a minimum of 15%, a maximum of 60%, and a best estimate of ~40% of the original juvenile rock mass may have been lost via chemical weathering. The accumulated percentage of mass loss due to chemical weathering leads to an average global chemical weathering rate (CWR) of ~ 1×10^10 to 2×10^10 t/yr since 3.5 Ga, which is about an order of magnitude higher than the minimum estimates based on modern rivers (Gaillardet et al., 1999, Chem. Geol.). While we cannot constrain the exact portion of crustal mass loss via chemical weathering, given the uncertainties of the calculation, we can demonstrate that the weathering flux is non-zero. Therefore, chemical weathering must play a role in the evolution of the composition and mass of the continental crust.
Chang, Hao-Xun; Haudenshield, James S.; Bowen, Charles R.; Hartman, Glen L.
2017-01-01
Areas within an agricultural field in the same season often differ in crop productivity despite having the same cropping history, crop genotype, and management practices. One hypothesis is that abiotic or biotic factors in the soils differ between areas, resulting in these productivity differences. In this study, bulk soil samples collected from a high and a low productivity area within each of six agronomic fields in Illinois were quantified for abiotic and biotic characteristics. DNA extracted from these bulk soil samples was shotgun sequenced. While logistic regression analyses showed no significant association between crop productivity and the 26 soil characteristics, principal coordinate analysis and constrained correspondence analysis showed that crop productivity explained a major proportion of the taxa variance in the bulk soil microbiome. Metagenome-wide association studies (MWAS) identified more Bradyrhizobium and Gammaproteobacteria in higher productivity areas and more Actinobacteria, Ascomycota, Planctomycetales, and Streptophyta in lower productivity areas. Machine learning using a random forest method successfully predicted productivity based on microbiome composition, with a best accuracy of 0.79 at the order level. Our study showed that crop productivity differences were associated with bulk soil microbiome composition and highlighted several nitrogen utilization-related taxa. We demonstrate the merit of MWAS and machine learning for the first time in a plant-microbiome study. PMID:28421041
SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shojaei, M; Dumitru, N; Pella, S
2016-06-15
Purpose: High dose rate brachytherapy is a highly localized radiation therapy with a very steep dose gradient, so one of the most important parts of the treatment is immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review ML cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, examining the applicator's placement with respect to the organs at risk. Intra- and inter-fraction motion possibilities for each applicator, with their dosimetric implications, were covered and measured in terms of dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fractionation, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.
Bulk Superconductors in Mobile Application
NASA Astrophysics Data System (ADS)
Werfel, F. N.; Delor, U. Floegel-; Rothfeld, R.; Riedel, T.; Wippich, D.; Goebel, B.; Schirrmeister, P.
We investigate and review concepts of multi-seeded REBCO bulk superconductors in mobile applications. ATZ's compact HTS bulk magnets can routinely trap 1 T at 77 K. Besides magnetization, flux creep, and hysteresis, industrial-grade properties such as compactness, power density, and robustness are of major device interest when mobility and light-weight construction are in focus. For mobile applications in levitated trains or demonstrator magnets, we examine the performance of on-board cryogenics using either LN2 or cryo-coolers. The mechanical, electrical and thermodynamic requirements of compact vacuum cryostats for Maglev train operation were studied systematically. More than 30 units have been manufactured and tested. The attractive load-to-weight ratio of more than 10 favours group module device constructions up to 5 t load on a permanent magnet (PM) track. A transportable and compact YBCO bulk magnet cooled with an in-situ 4 W Stirling cryo-cooler for 50-80 K operation is investigated. Low cooling power and an effective HTS cold mass drive the system construction toward minimum thermal loss and a light-weight design.
Ferroelectric hydration shells around proteins: electrostatics of the protein-water interface.
LeBard, David N; Matyushov, Dmitry V
2010-07-22
Numerical simulations of hydrated proteins show that protein hydration shells are polarized into a ferroelectric layer with large values of the average dipole moment magnitude and the dipole moment variance. The emergence of this new polarized mesophase dramatically alters the statistics of electrostatic fluctuations at the protein-water interface. The linear response relation between the average electrostatic potential and its variance breaks down, with the breadth of the electrostatic fluctuations far exceeding the expectations of linear response theories. The dynamics of these non-Gaussian electrostatic fluctuations are dominated by a slow (≈1 ns) component that freezes in at the temperature of the dynamical transition of proteins. The ferroelectric shell propagates 3-5 water diameters into the bulk.
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
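The model form described, minimum ANC regressed on pre-episode ANC and the change in discharge, can be fit with ordinary least squares, as in the hedged sketch below. The data are fabricated for illustration; only the two-predictor structure comes from the abstract.

```python
import numpy as np

# Hedged sketch of the paper's empirical form: a two-predictor linear
# regression of episode-minimum ANC on pre-episode ANC and discharge
# increase. All numbers below are fabricated placeholders.
rng = np.random.default_rng(2)
anc_pre = rng.uniform(0, 200, 50)          # pre-episode ANC (ueq/L)
dq = rng.uniform(0.1, 5.0, 50)             # episode discharge increase
anc_min = 0.8 * anc_pre - 15.0 * dq + rng.normal(0, 5, 50)

X = np.column_stack([np.ones_like(anc_pre), anc_pre, dq])
beta, *_ = np.linalg.lstsq(X, anc_min, rcond=None)
resid = anc_min - X @ beta
r2 = 1 - resid.var() / anc_min.var()       # fraction of variance explained
print(beta, r2)
```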
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We quantitatively compare the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effects of the number of repetitions and of the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure are studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Biver, Marc; Filella, Montserrat
2016-05-03
Because the toxicity of Cd is well established and that of Te is suspected, the bulk, surface-normalized steady-state dissolution rates of two industrially important binary tellurides, polycrystalline cadmium telluride and bismuth telluride, were studied over the pH range 3-11, at various temperatures (25-70 °C) and dissolved oxygen concentrations (0-100% O2 in the gas phase). The behavior of the two tellurides is strikingly different. The dissolution rates of CdTe decreased monotonically with increasing pH, the trend becoming more pronounced with increasing temperature. Activation energies were of the order of magnitude associated with surface-controlled processes; they decreased with decreasing acidity. At pH 7, the CdTe dissolution rate increased linearly with dissolved oxygen. In anoxic solution, CdTe dissolved at a finite rate. In contrast, the dissolution rate of Bi2Te3 passed through a minimum at pH 5.3. The activation energy had a maximum at the rate minimum at pH 5.3 and fell below the threshold for diffusion control at pH 11. No oxygen dependence was detected. Bi2Te3 dissolves much more slowly than CdTe, by one to more than 3.5 orders of magnitude at the Bi2Te3 rate minimum. Both will readily dissolve under long-term landfill deposition conditions, though comparatively slowly.
NASA Astrophysics Data System (ADS)
Narayan, Paresh Kumar
2008-05-01
The goal of this paper is to examine the relative importance of permanent and transitory shocks in explaining variations in macroeconomic aggregates for the UK at business cycle horizons. Using the common trend-common cycle restrictions, we estimate a variance decomposition of shocks, and find that over short horizons the bulk of the variations in income and consumption were due to permanent shocks while transitory shocks explain the bulk of the variations in investment. Our findings for income and consumption are consistent with real business cycle models which emphasize the role of aggregate supply shocks, while our findings for investment are consistent with the Keynesian school of thought, which emphasizes the role of aggregate demand shocks in explaining business cycles.
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause heavy financial and humanitarian losses. Real-time, early-warning monitoring of landslides is important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of fiber Bragg gratings (FBGs), an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was treated as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of a landslide can be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The maximum error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results are increasingly reliable. This also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision early-warning monitoring of slopes.
Wallner, P; Ruile, W; Weigel, R
2000-01-01
Theoretical studies of the behavior of leaky-SAW (LSAW) properties in layered structures were performed. For these calculations, rotated YX-cut LiTaO3 and LiNbO3 LSAW crystal cuts were used, assuming different layer materials. For LSAWs, both the velocity and the inherent loss due to bulk wave emission into the substrate are strongly influenced by distinct layer parameters. As a result, layer properties such as elastic constants or thickness show a strong influence on the crystal cut angle of minimum LSAW loss. Moreover, for soft and stiff layer materials, a different shift of the LSAW loss minimum can occur. Therefore, using double-layer structures, the shift of the LSAW loss minimum can be influenced by appropriately chosen layers and ratios.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
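The estimator in question weights stratum sample means by known stratum weights; its conditional variance estimate makes the small within-stratum sample-size problem explicit, since each stratum contributes s_h²/n_h. The sketch below is a generic illustration with invented data, not the authors' forest-inventory analysis.

```python
import numpy as np

# Hedged sketch of the basic post-stratified estimator of the mean:
# y_ps = sum_h W_h * ybar_h, with conditional variance estimate
# sum_h W_h^2 * s_h^2 / n_h. Tiny n_h inflates s_h^2 / n_h, which is
# why minimum within-strata sample sizes matter.

def post_stratified(y, strata, weights):
    est, var = 0.0, 0.0
    for h, w in weights.items():
        yh = y[strata == h]
        est += w * yh.mean()
        var += w**2 * yh.var(ddof=1) / len(yh)
    return est, var

y = np.array([3.1, 2.9, 5.0, 5.4, 5.2, 1.0, 1.2])
strata = np.array(["a", "a", "b", "b", "b", "c", "c"])
print(post_stratified(y, strata, {"a": 0.5, "b": 0.3, "c": 0.2}))
```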
Composting of cow dung and crop residues using termite mounds as bulking agent.
Karak, Tanmoy; Sonar, Indira; Paul, Ranjit K; Das, Sampa; Boruah, R K; Dutta, Amrit K; Das, Dilip K
2014-10-01
The present study reports the suitability of termite mounds as a bulking agent for composting crop residues and cow dung using the pit method. Use of 50 kg of termite mound material with crop residues (stover of groundnut: 361.65 kg; soybean: 354.59 kg; potato: 357.67 kg; and mustard: 373.19 kg) and cow dung (84.90 kg) formed a good quality compost within 70 days of composting, having nitrogen, phosphorus and potassium contents of 20.19, 3.78 and 32.77 g kg⁻¹, respectively, with a bulk density of 0.85 g cm⁻³. Other physico-chemical and germination parameters of the compost were within the Indian standard, as confirmed by the application of multivariate analysis of variance and multivariate contrast analysis. Principal component analysis was applied in order to gain insight into the characteristic variables. The four composting treatments formed two different groups when hierarchical cluster analysis was applied. Copyright © 2014 Elsevier Ltd. All rights reserved.
[Comparison of wear resistance and flexural strength of three kinds of bulk-fill composite resins].
Zhang, Huan; Zhang, Meng-Long; Qiu, Li-Hong; Yu, Jing-Tao; Zhan, Fu-Liang
2016-06-01
To compare the abrasion resistance and flexural strength of three bulk-fill resin composites with a universal nano-hybrid composite resin. The specimens were prepared with three kinds of bulk-fill composites (SDR, SonicFill, Tetric N-Ceram Bulk Fill) and a universal nano-hybrid composite resin (Herculite Precis). Specimens 10 mm in diameter × 2 mm in height were prepared for abrasion resistance, while specimens 2 mm in width × 2 mm in depth × 25 mm in length were prepared for flexural strength. The specimens were mounted in a ball-on-disc wear testing machine and abraded in artificial saliva (50 N load, 10,000 cycles). The flexural test was performed with a universal testing machine at a cross-head speed of 1 mm/min. One-way analysis of variance was used to determine the statistical differences in volume loss and flexural strength among groups with the SPSS 13.0 software package (P<0.05). The volume loss was as follows: SDR (1.2433±0.11) mm3
46 CFR 148.115 - Report of incidents.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Report of incidents. 148.115 Section 148.115 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DANGEROUS CARGOES CARRIAGE OF BULK SOLID MATERIALS THAT REQUIRE SPECIAL HANDLING Minimum Transportation Requirements § 148.115 Report of incidents. (a) When a fire or other hazardous condition...
Evidence of an Intermediate Phase in bulk alloy oxide glass system
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Boolchand, P.
2011-03-01
Reversibility windows have been observed in modified oxides (alkali silicates and germanates) and identified with Intermediate Phases (IPs). Here we find preliminary evidence of an IP in a ternary oxide glass, (B2O3)5(TeO2)95-x(V2O5)x, which is composed of network formers. Bulk glasses are synthesized across the 18% ≤ x ≤ 35% composition range and examined in Raman scattering, modulated DSC, and molar volume experiments. Glass transition temperatures Tg(x) steadily decrease with V2O5 content x, and the enthalpy of relaxation at Tg shows a global minimum in the 24% ≤ x < 27% range, the reversibility window (IP). Molar volumes reveal a minimum in this window. Raman scattering reveals a Boson mode and at least six other vibrational bands in the 100 cm⁻¹ < ν < 1700 cm⁻¹ range. Compositional trends in vibrational mode strengths and frequencies are established. These results will be presented in relation to glass structure evolution with vanadia content and the underlying elastic phases. Supported by NSF grant DMR 08-53957.
Gonzalo, C; Carriedo, J A; Beneitez, E; Juárez, M T; De La Fuente, L F; San Primitivo, F
2006-02-01
A total of 9,353 records for bulk tank total bacterial count (TBC) were obtained over 1 yr from 315 dairy ewe flocks belonging to the Sheep Improvement Consortium (CPO) in Castilla-León (Spain). Analysis of variance showed significant effects of flock, breed, month within flock, dry therapy, milking type and installation, and logSCC on logTBC. Flock and month within flock were important variation factors, accounting for 22.0 and 22.1% of the variance, respectively. Considerable repeatability values were obtained for both random factors. Hand milking and bucket-milking machines elicited the highest logTBC (5.31), whereas parlor systems with a looped milkline elicited the lowest (5.01). Implementation of dry therapy (5.12) gave significantly lower logTBC than when it was not used (5.25). Variability in logTBC among breeds ranged from 5.24 (Awassi) to 5.07 (Churra). However, clinical outbreaks of contagious agalactia did not increase TBC significantly. A statistically significant relationship was found between logTBC and logSCC, the correlation coefficient between the variables being r = 0.23. Programs for improving milk hygiene should address both total bacterial count and somatic cell count at the same time.
Miletic, Vesna; Peric, Dejan; Milosevic, Milos; Manojlovic, Dragica; Mitrovic, Nenad
2016-11-01
To compare strain and displacement of sculptable bulk-fill, low-shrinkage and conventional composites, as well as dye penetration along the dentin-restoration interface. Modified Class II cavities (N=5/group) were filled with sculptable bulk-fill (Filtek Bulk Fill Posterior, 3M ESPE; Tetric EvoCeram Bulk Fill, Ivoclar Vivadent; fiber-reinforced EverX Posterior, GC; giomer Beautifil Bulk, Shofu), low-shrinkage (Kalore, GC), nanohybrid (Tetric EvoCeram, Ivoclar Vivadent) or microhybrid (Filtek Z250, 3M ESPE) composites. Strain and displacement were determined using the 3D digital image correlation method based on two cameras with 1 μm displacement sensitivity and 1600×1200 pixel resolution (Aramis, GOM). Microleakage along the dentin axial and gingival cavity walls was measured under a stereomicroscope using a different set of teeth (N=8/group). Data were analyzed using analyses of variance with Tukey's post-test, Pearson correlation and paired t-test (α=0.05). Strain of TEC Bulk, Filtek Bulk, Beautifil Bulk and Kalore was in the range of 1-1.5%. EverX and the control composites showed 1.5-2% strain. Axial displacements were between 5 μm and 30 μm. The least strain was identified at 2 mm below the occlusal surface in 4-mm but not in 2-mm layered composites. Greater microleakage occurred along the gingival than the axial wall (p<0.05). No correlation was found between strain/displacements and microleakage axially (r = 0.082, p = 0.821; r = -0.2, p = 0.605, respectively) or gingivally (r = -0.126, p = 0.729; r = -0.278, p = 0.469, respectively). Strain, i.e., volumetric shrinkage, of sculptable bulk-fill and low-shrinkage composites was comparable to the control composites, but strain distribution across restoration depth differed. Marginal integrity was more compromised along the gingival than the axial dentin wall. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Bulk-wave ultrasonic propagation imagers
NASA Astrophysics Data System (ADS)
Abbas, Syed Haider; Lee, Jung-Ryul
2018-03-01
Laser-based ultrasound systems are described that utilize ultrasonic bulk-wave sensing to detect damages and flaws in aerospace structures. These systems apply pulse-echo or through-transmission methods to detect longitudinal through-the-thickness bulk waves. The thermoelastic waves are generated using a Q-switched laser, and non-contact sensing is performed using a laser Doppler vibrometer (LDV). Laser-based raster scanning is performed either by a two-axis translation stage for linear scanning or by a galvanometer-based laser mirror scanner for angular scanning. In all ultrasonic propagation imagers, the ultrasonic data are captured and processed in real time, and the ultrasonic propagation can be visualized during scanning. The scanning speed can reach 1.8 kHz for two-axis linear translation stage based B-UPIs and 10 kHz for galvanometer-based laser mirror scanners. In contrast with other available ultrasound systems, these systems have the advantages of high-speed, non-contact, real-time, and non-destructive inspection. In this paper, all bulk-wave ultrasonic propagation imagers (B-UPIs) are described and their advantages are discussed. Experiments are performed with these systems on various structures to demonstrate the integrity of their results. The C-scan results produced from non-dispersive, through-the-thickness, bulk-wave detection show good agreement in detecting structural variances and damage locations in all inspected structures. These results show that bulk-wave UPIs can be used for in-situ NDE of engineering structures.
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
NASA Astrophysics Data System (ADS)
Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan
2017-12-01
Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the finite volume method (FVM). This paper compares the numerical results of calm-water tests for the JBC model with available experimental results. The calm-water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimum computational resources.
Solid-State Explosive Reaction for Nanoporous Bulk Thermoelectric Materials.
Zhao, Kunpeng; Duan, Haozhi; Raghavendra, Nunna; Qiu, Pengfei; Zeng, Yi; Zhang, Wenqing; Yang, Jihui; Shi, Xun; Chen, Lidong
2017-11-01
High-performance thermoelectric materials require ultralow lattice thermal conductivity, typically achieved by either shortening the phonon mean free path or reducing the specific heat. Beyond these two approaches, a unique, simple, yet ultrafast solid-state explosive reaction is proposed to fabricate nanoporous bulk thermoelectric materials with well-controlled pore sizes and distributions to suppress thermal conductivity. By investigating a wide variety of functional materials, general criteria for solid-state explosive reactions are built upon both thermodynamics and kinetics, and then successfully used to tailor the materials' microstructures and porosity. A drastic decrease in lattice thermal conductivity, to below the minimum value of the fully densified materials, and an enhancement in the thermoelectric figure of merit are achieved in porous bulk materials. This work demonstrates that controlling a material's porosity is a very effective strategy that is easy to combine with other approaches for optimizing thermoelectric performance. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Wang, X.; Robertson, S. H.; Horanyi, M.; NASA Lunar Science Institute: Colorado CenterLunar Dust; Atmospheric Studies
2011-12-01
The Moon does not have a global magnetic field like the Earth's; rather, it has strong crustal magnetic anomalies. Data from Lunar Prospector and SELENE (Kaguya) revealed strong interactions between the solar wind and these localized magnetic fields. In the laboratory, a configuration of a horseshoe permanent magnet below an insulating surface is used as an analogue of lunar crustal magnetic anomalies. Plasmas are created above the surface by a hot-filament discharge. Potential distributions are measured with an emissive probe and show complex spatial structures. In our experiments, electrons are magnetized, with gyro-radii r smaller than the distance from the surface d (r < d), and ions are un-magnetized, with r > d. Unlike the negative charging of surfaces with no magnetic fields, the surface potential at the center of the magnetic dipole is found to be close to the bulk plasma potential. The surface charging is dominated by the cold unmagnetized ions, while the electrons are shielded away. A potential minimum is formed between the center of the surface and the bulk plasma, most likely caused by electrons trapped between the two magnetic mirrors at the cusps. The value of the potential minimum with respect to the bulk plasma potential decreases with increasing plasma density and neutral pressure, indicating that the mirror-trapped electrons are scattered by electron-electron and electron-neutral collisions. The potential at the two cusps is found to be more negative because electrons follow the magnetic field lines onto the surface.
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, they have drawbacks such as a slow training rate, a propensity to become trapped in local minima, and a poor ability to perform a global search. In order to improve the overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with a double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize the antecedent and consequent parameters of the constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensembles of T-S FNNs with RCDPSO_DM optimization to further improve the stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The results demonstrate that the intelligent ensemble of T-S FNNs based on RCDPSO_DM achieves superior performance, in terms of stability, efficiency, precision and generalizability, over a PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
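The RCDPSO_DM variant builds on the canonical particle swarm update, which is all the sketch below shows; the cooperative decomposition, double mutation, and Kalman-filter coupling described above are not reproduced.

```python
import numpy as np

# Hedged sketch of the canonical PSO velocity/position update that
# underlies variants like RCDPSO_DM. Parameter values are the common
# defaults from the PSO literature, not the paper's settings.

def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49, rng=None):
    """One swarm update: inertia + cognitive pull + social pull."""
    rng = rng if rng is not None else np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# One step for a 10-particle swarm in 3 dimensions:
rng = np.random.default_rng(3)
x = rng.normal(size=(10, 3))
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x, gbest=x[0], rng=rng)
print(x[0])
```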
2012-09-01
by the ARL Translational Neuroscience Branch. It covers the Emotiv EPOC, Advanced Brain Monitoring (ABM) B-Alert X10, and QUASAR DSI helmet-based systems (ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012).
ERIC Educational Resources Information Center
Johnson, Jim
2017-01-01
A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, the most complex part of the MVB, by solving the optimization problem iteratively. The signals received from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector of one point as the initial weight vector for the neighboring point improves the convergence speed and decreases the computational complexity. The proposed method was applied to several data sets, and it has been shown that it can reproduce the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
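The idea of replacing matrix inversion with warm-started iterations can be sketched as follows; this is a minimal gradient-descent illustration of the constrained problem min w^H R w subject to w^H a = 1, not the paper's exact algorithm, and the step size and iteration count are assumptions:

```python
import numpy as np

def mv_weights_iterative(R, a, w0=None, mu=0.01, iters=50):
    """Gradient-descent sketch of min w^H R w subject to w^H a = 1.

    Avoids the explicit inversion of R used by the classical MVB.
    mu must be small relative to the largest eigenvalue of R.
    """
    a = a.reshape(-1, 1)
    w = a / (a.conj().T @ a) if w0 is None else w0.reshape(-1, 1)
    for _ in range(iters):
        w = w - mu * (2 * R @ w)      # step against the output-power gradient
        w = w / (a.conj().T @ w)      # re-impose the distortionless constraint
    return w.ravel()
```

Warm-starting with the previous imaging point's weights (the `w0` argument) is what exploits the similarity between neighboring points and lets far fewer iterations suffice.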
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between the values of a set of environmental variables at any point and the mean values of those same variables calculated over all points at which the species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
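The paper supplies SAS code; a minimal numpy sketch of the same D² partitioning (our translation, assuming presence points in rows and environmental variables in columns) is:

```python
import numpy as np

def partitioned_d2(presence, points):
    """Partition Mahalanobis D^2 into independent components.

    presence: (n, p) environmental values at species detections
    points:   (m, p) values at locations to be scored
    Returns an (m, p) array whose columns are the per-component D^2
    terms (ascending variance); the first column corresponds to the
    minimum-variance combination, i.e. the candidate limiting factors.
    Assumes a nonsingular covariance matrix.
    """
    mu = presence.mean(axis=0)
    cov = np.cov(presence, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    z = (points - mu) @ evecs              # component scores
    return z**2 / evals                    # rows sum to the full D^2
```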
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of a Gaussian spectral approximation, the MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere. Artifacts, such as the artificial narrowing of turbulent layers, are one such impact.
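A minimal sketch of the MVM (Capon) estimator being compared here, with an assumed filter order m and diagonal loading added for numerical stability:

```python
import numpy as np
from scipy.linalg import toeplitz

def capon_spectrum(x, m=20, nfreq=256):
    """Minimum variance (Capon) spectral estimate of a real series x
    with filter order m; diagonal loading stabilizes the inversion."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(m)])  # autocorrelation lags
    R = toeplitz(r) + 1e-6 * np.eye(m)
    freqs = np.linspace(0.0, 0.5, nfreq)      # cycles per sample
    P = np.empty(nfreq)
    for i, f in enumerate(freqs):
        e = np.exp(2j * np.pi * f * np.arange(m))
        P[i] = m / np.real(e.conj() @ np.linalg.solve(R, e))
    return freqs, P
```

Comparing this against a plain periodogram, e.g. `np.abs(np.fft.rfft(x))**2 / len(x)`, on simulated multi-line or Gaussian-spectrum signals reproduces the width underestimation and line-location biases discussed above.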
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience-oriented convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and using a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of implementing the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method both in reaching optimal solutions and in robustness.
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws: from a minimum variance (MV) control law, to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses (also called the gap state), the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only provide three theoretically proven control laws for the developed EDM adaptive control system, but also show in practice that the TP control law is the best in dealing with machining instability and machining efficiency, even though the MVPPC control law provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Modelling health and output at business cycle horizons for the USA.
Narayan, Paresh Kumar
2010-07-01
In this paper we employ a theoretical framework - a simple macro model augmented with health - that draws guidance from the Keynesian view of business cycles to examine the relative importance of permanent and transitory shocks in explaining variations in health expenditure and output at business cycle horizons for the USA. The variance decomposition analysis of shocks reveals that at business cycle horizons permanent shocks explain the bulk of the variations in output, while transitory shocks explain the bulk of the variations in health expenditures. We undertake a shock decomposition analysis for private health expenditures versus public health expenditures and interestingly find that while transitory shocks are more important for private sector expenditures, permanent shocks dominate public health expenditures. Copyright (c) 2009 John Wiley & Sons, Ltd.
Crystal growth and optical properties of 4-aminobenzophenone (ABP)
NASA Astrophysics Data System (ADS)
Li, Zhengdong; Wu, Baichang; Su, Genbo; Huang, Gongfan
1997-02-01
Bulk crystals of 4-aminobenzophenone (ABP) were grown from organic solution. The crystal structure was determined by X-ray analysis. The refractive indices were determined by the method of minimum deviation with a prism. Some effective nonlinear-optical coefficients deff were measured. Blue second-harmonic emissions at wavelengths of 433 and 460 nm were observed under laser-diode pumping.
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons, based on unitary evolution that can properly be described by quantum mechanics, is presented. The scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. Self-phase modulation of the single-mode quantized field in the Kerr medium is described in terms of localized operators. The spatial evolution of the state is demonstrated by the QPD in the Schroedinger picture. It is shown that the photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared by this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
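In standard notation (our gloss, not the paper's own equations), the Kerr evolution and the number-phase uncertainty product referred to above take the form:

```latex
\hat U_{\mathrm{Kerr}} = \exp\!\left[\,i\chi t\,(\hat a^{\dagger}\hat a)^{2}\right],
\qquad
\Delta n\,\Delta\phi \;\ge\; \tfrac{1}{2}
```

with the scheme keeping the product at its minimum while trading reduced Δn for enhanced Δφ.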
Maghaireh, G A; Price, R B; Abdo, N; Taha, N A; Alzraikat, H
2018-06-28
This study compared light transmission through different thicknesses of bulk-fill resin-based composites (RBCs) using a polywave and a single-peak light-emitting diode light-curing unit (LCU). The effect on surface hardness was also evaluated. Five bulk-fill RBCs were tested. Specimens (n=5) of 1-, 2-, 4-, or 6-mm thickness were photopolymerized for 10 seconds from the top using a polywave (Bluephase Style) or single-peak (Elipar S10) LCU, while a spectrophotometer monitored in real time the transmitted irradiance and radiant exposure reaching the bottom of the specimen. After 24 hours of storage in distilled water at 37°C, the Vickers microhardness (VH) was measured at the top and bottom. Results were analyzed using multiple-way analysis of variance, Tukey post hoc tests, and multivariate analysis (α=0.05). The choice of LCU had no significant effect on the total amount of light transmitted through the five bulk-fill RBCs at each thickness. There was a significant decrease in the amount of light transmitted as the thickness increased for all RBCs tested with both LCUs (p<0.001). The effect of the LCU on VH was minimal (ηp² = 0.010). The 1-, 2-, and 4-mm-thick specimens of SDR, X-tra Fill, and Filtek Bulk Restorative achieved a VH bottom/top ratio of approximately 80% with either LCU. The total amount of light transmitted through the five bulk-fill RBCs was similar at the different thicknesses using either LCU. The polywave LCU used in this study did not enhance the polymerization of the tested bulk-fill RBCs when compared with the single-peak LCU.
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi
2002-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
NASA Technical Reports Server (NTRS)
Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines the bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
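A minimal sketch of such an iterative conductivity-tuning loop, assuming the common approximation that conduction velocity scales with the square root of the bulk conductivity; `simulate_cv` stands in for one planar-wavefront simulation and is a hypothetical placeholder, not the paper's code:

```python
def tune_bulk_conductivity(simulate_cv, sigma, v_target, tol=0.01, max_iter=10):
    """Rescale a bulk conductivity until the simulated conduction
    velocity matches the prescribed one, using the approximate
    scaling CV ~ sqrt(sigma) (hence the quadratic update factor)."""
    for _ in range(max_iter):
        v = simulate_cv(sigma)             # one planar-wavefront simulation
        if abs(v - v_target) / v_target < tol:
            break
        sigma *= (v_target / v) ** 2
    return sigma

# Toy stand-in honoring CV ~ sqrt(sigma); replace with a real bidomain run.
print(tune_bulk_conductivity(lambda s: 0.6 * s**0.5, sigma=0.1, v_target=0.5))
```

Because of the square-root scaling, the toy example converges in a couple of iterations; with a real simulator the loop simply repeats until the prescribed velocity is matched within tolerance.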
Selection of a surface tension propellant management system for the Viking 75 Orbiter.
NASA Technical Reports Server (NTRS)
Dowdy, M. W.; Debrock, S. C.
1972-01-01
Discussion of the propellant management system requirements derived for the Viking 75 mission, and review of a series of surface tension propellant management system design concepts. The chosen concept is identified and its mission operation described. The ullage bubble and bulk liquid positioning characteristics are presented, along with propellant dynamic considerations entailed by thrust initiation/termination. Pressurization design considerations, required to assure minimum disturbance to the bulk propellant, are introduced as well as those of the tank ullage vent. Design provisions to assure liquid communication between tank ends are discussed. Results of a preliminary design study are presented, including mechanical testing requirements to assure structural integrity, propellant compatibility, and proper installation.
Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field
NASA Technical Reports Server (NTRS)
Ghosh, Sanjoy; Roberts, D. Aaron
2010-01-01
We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia, in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November-February) of 1975 until 2008. This study used the combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of the new optimum rain gauge network.
Experimental demonstration of quantum teleportation of a squeezed state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro
2005-10-15
Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is certainly transferred through the process. We observe a smaller variance of the teleported squeezed state than that for the vacuum-state input.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
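Under the uniform white-sequence model for quantizer error mentioned above, the standard result for a quantizer step size Δ gives the error variance and hence an effective SNR (our gloss, with σ_s² the signal power):

```latex
\sigma_q^{2} = \frac{\Delta^{2}}{12},
\qquad
\mathrm{SNR}_{\mathrm{eff}} = \frac{\sigma_s^{2}}{\sigma_q^{2}}
= \frac{12\,\sigma_s^{2}}{\Delta^{2}}
```

This effective SNR is what lets infinitely fine quantized loop results be mapped onto the finite-wordlength DPLL performance.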
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
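A minimal numpy/scipy sketch of the constant-variance (OLS) branch of the LCMRL procedure described above; the grid search for the intersection points is a simplification of the published graphical construction:

```python
import numpy as np
from scipy import stats

def lcmrl_ols(true_conc, measured, conf=0.99, lo=0.5, hi=1.5):
    """Lowest grid concentration whose 99% prediction interval lies
    within the 50-150% recovery lines (constant-variance case)."""
    x = np.asarray(true_conc, float)
    y = np.asarray(measured, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                   # slope, intercept
    s = np.sqrt(np.sum((y - (b0 + b1 * x))**2) / (n - 2))
    t = stats.t.ppf(0.5 + conf / 2, n - 2)         # two-sided quantile
    xbar, sxx = x.mean(), np.sum((x - x.mean())**2)
    grid = np.linspace(x.min(), x.max(), 2000)
    half = t * s * np.sqrt(1 + 1/n + (grid - xbar)**2 / sxx)
    mid = b0 + b1 * grid
    ok = (mid - half >= lo * grid) & (mid + half <= hi * grid)
    return grid[ok].min() if ok.any() else None
```

The variance-weighted case described in the abstract would replace the OLS fit with weighted least squares; the prediction-interval construction is otherwise the same.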
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software programs. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, i.e., when squeezing of one of the variances occurs. Such an explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrodinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than that of the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated data and experimental data and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformer. We use the full width at half maximum (FWHM) and the peak side-lobe level (PSL) to quantify the performance of imaging resolution and the contrast ratio (CR) to quantify the performance of imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformer, while the PSL is 127.2 dB, 115 dB and 60 dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformer respectively. In addition, the effect of the sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformer, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Friedman, B.; Link, M.; Farmer, D.
2016-12-01
We use an oxidative flow reactor (OFR) to determine the secondary organic aerosol (SOA) yields of five monoterpenes (alpha-pinene, beta-pinene, limonene, sabinene, and terpinolene) at a range of OH exposures. These OH exposures correspond to aging timescales of a few hours to seven days. We further determine how SOA yields of beta-pinene and alpha-pinene vary as a function of seed particle type (organic vs. inorganic) and seed particle mass concentration. We hypothesize that the monoterpene structure largely accounts for the observed variance in SOA yields for the different monoterpenes. We also use high-resolution time-of-flight chemical ionization mass spectrometry to calculate the bulk gas-phase properties (O:C and H:C) of the monoterpene oxidation systems as a function of oxidant concentrations. Bulk gas-phase properties can be compared to the SOA yields to assess the capability of the precursor gas-phase species to inform the SOA yields of each monoterpene oxidation system. We find that the extent of oxygenated precursor gas-phase species corresponds to SOA yield.
Full in-vitro analyses of new-generation bulk fill dental composites cured by halogen light.
Tekin, Tuçe Hazal; Kantürk Figen, Aysel; Yılmaz Atalı, Pınar; Coşkuner Filiz, Bilge; Pişkin, Mehmet Burçin
2017-08-01
The objective of this study was to investigate the full in-vitro analyses of new-generation bulk-fill dental composites cured by halogen light (HLG). Four composites of two types were studied: Surefill SDR (SDR) and Xtra Base (XB) as bulk-fill flowable materials; QuixFill (QF) and XtraFill (XF) as packable bulk-fill materials. Samples were prepared for each analysis and test by applying the same procedure, but with different diameters and thicknesses appropriate to the analysis and test requirements. Thermal properties were determined by thermogravimetric analysis (TG/DTG) and differential scanning calorimetry (DSC); the Vickers microhardness (VHN) was measured after 1, 7, 15 and 30 days of storage in water. The degree of conversion values for the materials (DC, %) were measured immediately using near-infrared spectroscopy (FT-IR). The surface morphology of the composites was investigated by scanning electron microscopy (SEM) and atomic-force microscopy (AFM). The sorption and solubility measurements were also performed after 1, 7, 15 and 30 days of storage in water. In addition, the data were statistically analyzed using one-way analysis of variance, and both the Newman-Keuls and Tukey multiple comparison tests. The statistical significance level was established at p<0.05. According to the ISO 4049 standards, all the tested materials showed acceptable water sorption and solubility, and a halogen light source was an option to polymerize bulk-fill, resin-based dental composites. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluation of Radiopacity of Bulk-fill Flowable Composites Using Digital Radiography.
Tarcin, B; Gumru, B; Peker, S; Ovecoglu, H S
2016-01-01
New flowable composites that may be bulk-filled in layers up to 4 mm are indicated as a base beneath posterior composite restorations. Sufficient radiopacity is one of the several important requirements such materials should meet. The aim of this study was to evaluate the radiopacity of bulk-fill flowable composites and to provide a comparison with conventional flowable composites using digital imaging. Ten standard specimens (5 mm in diameter, 1 mm in thickness) were prepared from each of four different bulk-fill flowable composites and nine different conventional flowable composites. Radiographs of the specimens were taken together with 1-mm-thick tooth slices and an aluminum step wedge using a digital imaging system. For the radiographic exposures, a storage phosphor plate and a dental x-ray unit at 70 kVp and 8 mA were used. The object-to-focus distance was 30 cm, and the exposure time was 0.2 seconds. The gray values of the materials were measured using the histogram function of the software available with the system, and radiopacity was calculated as the equivalent thickness of aluminum. The data were analyzed statistically (p<0.05). All of the tested bulk-fill flowable composites showed significantly higher radiopacity values in comparison with those of enamel, dentin, and most of the conventional flowable composites (p<0.05). Venus Bulk Fill (Heraeus Kulzer) provided the highest radiopacity value, whereas Arabesk Flow (Voco) showed the lowest. The order of the radiopacity values for the bulk-fill flowable composites was as follows: Venus Bulk Fill (Heraeus Kulzer) ≥ X-tra Base (Voco) > SDR (Dentsply DeTrey) ≥ Filtek Bulk Fill (3M ESPE). To conclude, the bulk-fill flowable restorative materials, which were tested in this study using digital radiography, met the minimum standard of radiopacity specified by the International Standards Organization.
Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years
NASA Astrophysics Data System (ADS)
Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.
2014-12-01
Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and the cutoff scale, appear to be log-normally distributed.
A new Method for Determining the Interplanetary Current-Sheet Local Orientation
NASA Astrophysics Data System (ADS)
Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.
2003-03-01
In this work we have developed a new method for determining the local parameters of the interplanetary current sheet. The method, called `HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. This method has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, although in many cases the two behave similarly, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than the MVM.
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
Comparison of reproducibility of natural head position using two methods.
Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik
2012-01-01
Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy, and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position obtained by two methods: the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that the two methods were comparable, but reproducibility was greater with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and variance was lower, as shown by precision and the Pearson correlation. In conclusion, the two methods were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance. The number of parameters was greatly reduced from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not indicate excellent performance on the hydrological signatures. For most samples from the Sobol sensitivity analysis, water yield was simulated very well. However, the lowest and maximum annual daily runoffs were underestimated, and most of the seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
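A variance-based analysis of this kind can be sketched with the SALib package; the parameter subset, bounds, and the `run_dhsvm` stand-in below are hypothetical placeholders, not values from the study:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

def run_dhsvm(params):
    """Stand-in for one DHSVM run returning a scalar signature
    (e.g. water yield); replace with the real model call."""
    k_lat, porosity, field_cap = params
    return 100 * k_lat + 50 * porosity - 20 * field_cap

problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[0.001, 0.1], [0.3, 0.6], [0.1, 0.4]],
}
X = saltelli.sample(problem, 1024)          # N * (2D + 2) parameter sets
Y = np.array([run_dhsvm(x) for x in X])     # embarrassingly parallel loop
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                   # first-order and total indices
```

The model-evaluation loop is where the parallel computing mentioned above pays off, since each sample is independent.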
Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.
2014-01-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180
Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E
2014-04-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.
A New Look at Some Solar Wind Turbulence Puzzles
NASA Technical Reports Server (NTRS)
Roberts, Aaron
2006-01-01
Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra is largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not yet fit into the model. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations indicating that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time permitting, we will discuss some observations suggesting that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.
Size dependent compressibility of nano-ceria: Minimum near 33 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodenbough, Philip P.; Chemistry Department, Columbia University, New York, New York 10027; Song, Junhua
2015-04-20
We report the crystallite-size dependence of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends, which indicate an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of the bulk modulus at an intermediate crystallite size.
Flux jumps in a high-Tc Bi1.7Pb0.3Sr2Ca2Cu3Oy bulk superconductor
NASA Astrophysics Data System (ADS)
Cao, Xiaowen; Huang, Sunli
1989-11-01
Giant flux jumps were observed in a high-Tc Bi1.7Pb0.3Sr2Ca2Cu3Oy bulk superconductor. The relaxation time, tau, decreased with both increasing magnetic field and rising temperature. The maximum tau was about 40 min. The average -dM/dt increased with both increasing magnetic field and rising temperature. The minimum average -dM/dt was about 4.1 × 10⁻² G/min. The flux jumps weakened with time; this was dependent on the decrease of the gradient of the magnetic flux density dB/dx in the sample.
Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland
Pluhowski, E.J.
1981-01-01
Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C (Celsius) for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April, whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide-ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
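A minimal sketch of fitting such a harmonic equation to daily temperatures by linear least squares, assuming a single annual harmonic of the form T(d) = M + A sin(2πd/365 + φ):

```python
import numpy as np

def fit_annual_harmonic(day_of_year, temp):
    """Least-squares fit of T(d) = M + A*sin(2*pi*d/365 + phi) via the
    equivalent linear form M + C*cos(w*d) + S*sin(w*d)."""
    d = np.asarray(day_of_year, float)
    w = 2 * np.pi * d / 365.0
    X = np.column_stack([np.ones_like(d), np.cos(w), np.sin(w)])
    M, C, S = np.linalg.lstsq(X, np.asarray(temp, float), rcond=None)[0]
    return M, np.hypot(C, S), np.arctan2(C, S)   # mean, amplitude, phase
```

The fitted mean M plays the role of the mean annual water temperature reported above, while the amplitude captures the seasonal swing at each site.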
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
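For reference, the bound used here is the standard Cramer-Rao inequality (our notation): the variance of any unbiased estimator of a parameter is bounded below by the corresponding diagonal element of the inverse Fisher information matrix,

```latex
\operatorname{Var}\!\left(\hat\theta_i\right) \;\ge\; \left[\mathbf I(\boldsymbol\theta)^{-1}\right]_{ii},
\qquad
\left[\mathbf I(\boldsymbol\theta)\right]_{ij}
= \mathbb E\!\left[\frac{\partial \ln p(\mathbf r;\boldsymbol\theta)}{\partial \theta_i}\,
\frac{\partial \ln p(\mathbf r;\boldsymbol\theta)}{\partial \theta_j}\right]
```

where p(r; θ) is the likelihood of the measured reflectance implied by the adopted bio-optical and noise models; fixing a parameter (e.g., LiDAR-derived bathymetry) amounts to deleting its row and column before inversion, which is why the remaining bounds shrink.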
NASA Astrophysics Data System (ADS)
Roy Chowdhury, Prabudhya; Vikram, Ajit; Phillips, Ryan K.; Hoorfar, Mina
2016-07-01
The gas diffusion layer (GDL) is a thin porous layer sandwiched between a bipolar plate (BPP) and a catalyst-coated membrane in a fuel cell. Besides providing passage for water and gas transport from and to the catalyst layer, it is responsible for electron and heat transfer from and to the BPP. In this paper, a method has been developed to measure the GDL bulk thermal conductivity and the contact resistance at the GDL/BPP interface under the inhomogeneous compression occurring in an actual fuel cell assembly. Toray carbon paper GDL TGP-H-060 was tested under a compression pressure range of 0.34 to 1.71 MPa. The results showed that the thermal contact resistance decreases non-linearly (from 3.8 × 10⁻⁴ to 1.17 × 10⁻⁴ K·m²·W⁻¹) with increasing pressure, due to the increase in microscopic contact area between the GDL and BPP, while the effective bulk thermal conductivity increases (from 0.56 to 1.42 W·m⁻¹·K⁻¹) with increasing compression pressure. The thermal contact resistance was found to be greater (by a factor of 1.6-2.8) than the effective bulk thermal resistance over the whole compression pressure range applied here. This measurement technique can be used to identify an optimum GDL based on the minimum bulk and contact resistances measured under inhomogeneous compression.
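One common way to reduce such measurements (our sketch under a series-resistance assumption; the paper's exact reduction may differ) treats the total thermal resistance across a sample of thickness t and area A as a bulk term plus two interfacial terms:

```latex
R_{\mathrm{tot}} = \frac{t}{k_{\mathrm{bulk}}\,A} + 2R_{c}
```

so that measuring R_tot for two or more thicknesses at a fixed compression pressure yields k_bulk from the slope and R_c from the intercept.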
NASA Astrophysics Data System (ADS)
Dasgupta, Rajdeep; Hirschmann, Marc M.; Dellas, Nikki
2005-05-01
To explore the effect of bulk composition on the solidus of carbonated eclogite, we determined near-solidus phase relations at 3 GPa for four different nominally anhydrous, carbonated eclogites. Starting materials (SLEC1, SLEC2, SLEC3, and SLEC4) were prepared by adding variable proportions and compositions of carbonate to a natural eclogite xenolith (66039B) from Salt Lake crater, Hawaii. Near-solidus partial melts for all bulk compositions are Fe-Na calcio-dolomitic and coexist with garnet + clinopyroxene + ilmenite ± calcio-dolomitic solid solution. The solidus for SLEC1 (Ca# = 100 × molar Ca/(Ca + Mg + FeT) = 32, 1.63 wt% Na2O, and 5 wt% CO2) is bracketed between 1,050°C and 1,075°C (Dasgupta et al. in Earth Planet Sci Lett 227:73-85, 2004), whereas initial melting for SLEC3 (Ca# 41, 1.4 wt% Na2O, and 4.4 wt% CO2) is between 1,175°C and 1,200°C. The solidus for SLEC2 (Ca# 33, 1.75 wt% Na2O, and 15 wt% CO2) is estimated to be near 1,100°C, and the solidus for SLEC4 (Ca# 37, 1.47 wt% Na2O, and 2.2 wt% CO2) is between 1,100°C and 1,125°C. Solidus temperatures increase with increasing Ca# of the bulk, owing to the strong influence of the calcite-magnesite binary solidus minimum on the solidus of carbonate-bearing eclogite. Bulk compositions that produce near-solidus crystalline carbonate closer in composition to the minimum along the CaCO3-MgCO3 join have lower solidus temperatures. Variations in total CO2 have a significant effect on the solidus if CO2 is added as CaCO3, but not if CO2 is added as a complex mixture that maintains the cationic ratios of the bulk rock. Thus, as partial melting experiments necessarily have more CO2 than is likely to be found in natural carbonated eclogites, care must be taken to ensure that the compositional shifts associated with excess CO2 do not unduly influence melting behavior. Near-solidus dolomite and calcite solid solutions have higher Ca/(Ca + Mg) than bulk eclogite compositions, owing to Ca-Mg exchange equilibrium between carbonates and silicates. Carbonates in natural mantle eclogite, which has low bulk CO2 concentration, will have Ca/Mg buffered by reactions with silicates. Consequently, experiments with high bulk CO2 may not mimic natural carbonated eclogite phase equilibria unless care is taken to ensure that CO2 enrichment does not result in inappropriate equilibrium carbonate compositions. Compositions of eclogite-derived carbonate melt span the range of natural carbonatites from oceanic and continental settings. Ca#s of carbonatitic partial melts of eclogite vary significantly and overlap those of partial melts of carbonated lherzolite; however, for a given Ca content, the Mg# of carbonatites derived from eclogitic sources is likely to be lower than that of carbonatites generated from peridotite.
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
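A common first-pass screen for influential inputs in a Monte Carlo run (one generic option, not necessarily the authors' method) ranks each input by the magnitude of its rank correlation with the model outcome; the distributions and toy model below are hypothetical:

```python
# Rank-correlation screen for input importance in a Monte Carlo simulation.
# All input distributions and the multiplicative toy model are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 5000
inputs = {
    "emission_rate": rng.lognormal(0.0, 1.0, n),
    "half_life":     rng.lognormal(2.0, 0.3, n),
    "intake_rate":   rng.normal(1.0, 0.05, n),
}
# hypothetical multimedia-style outcome: multiplicative in its inputs
y = inputs["emission_rate"] * inputs["half_life"] ** 0.5 * inputs["intake_rate"]

scores = {k: abs(spearmanr(v, y)[0]) for k, v in inputs.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} |rho| = {s:.2f}")   # higher |rho| -> more influential
```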
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
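The backward stepwise stage lends itself to a compact greedy sketch. The toy implementation below is an interpretation of the description above, using stand-in threshold-stump classifiers; it is not the authors' code:

```python
# Greedy backward stepwise generation of an ensemble collection: repeatedly drop
# the member whose removal hurts in-sample fitness least, recording each stage.
import numpy as np

class Stump:
    """Toy base classifier: threshold on one feature (stands in for any model)."""
    def __init__(self, feature, threshold):
        self.feature, self.threshold = feature, threshold
    def predict(self, X):
        return (X[:, self.feature] > self.threshold).astype(int)

def fitness(members, X, y):
    votes = np.mean([m.predict(X) for m in members], axis=0)
    return np.mean((votes > 0.5) == y)        # in-sample accuracy of majority vote

def backward_collection(ensemble, X, y):
    collection, current = [list(ensemble)], list(ensemble)
    while len(current) > 1:
        # drop the member whose absence degrades fitness the least
        best = max(range(len(current)),
                   key=lambda i: fitness(current[:i] + current[i + 1:], X, y))
        current = current[:best] + current[best + 1:]
        collection.append(list(current))
    return collection                          # descending structural complexity

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
ensemble = [Stump(f, t) for f in range(3) for t in (-0.5, 0.0, 0.5)]
for ens in backward_collection(ensemble, X, y)[:3]:
    print(len(ens), fitness(ens, X, y))
```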
Response to selection while maximizing genetic variance in small populations.
Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E
2016-09-20
Rare breeds represent a valuable resource for future market demands. These populations are usually well adapted, but their low census compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the genetic variance within a single population could be a valuable option. The aim of this work was to study the effect of the maximization of genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), a MVT scenario with a restriction on increases in average inbreeding (D), a MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded less response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is a valuable alternative, in particular for long-term response to selection.
NASA Astrophysics Data System (ADS)
Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.
2016-05-01
The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18O variability in Borneo coincides with an overall minimum in mean δ18O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
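A rough feel for how data volume scales with these factors comes from the textbook two-risk sample-size bound, n ≥ ((z₁₋α/₂ + z₁₋β)·σ/δ)², where δ is the error tolerance, σ the random error, and α, β the inference risks; this is a generic statistical sketch, not DeLoach's specific scaling analysis:

```python
# Textbook minimum-replicate bound from error tolerance, noise, and risk levels.
# The drag-count numbers in the example call are hypothetical.
from scipy.stats import norm

def min_replicates(sigma, delta, alpha=0.05, beta=0.05):
    """Smallest n resolving a difference delta with noise sigma at risks alpha/beta."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return int((z * sigma / delta) ** 2) + 1

print(min_replicates(sigma=0.002, delta=0.001))  # tighter tolerance -> more data
```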
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in fifteen different regional samples of Nitraria roborowskii were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were determined in N. roborowskii, of which V could not be detected. In addition, Na, K and Ca showed high concentrations. Ti showed the largest content variance, while K showed the smallest. Four principal components were obtained from the original data. The cumulative variance contribution rate is 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. All the results will provide a good basis for comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
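The variance-contribution figures quoted above are the standard PCA eigenvalue ratios. A minimal sketch, using random stand-in data rather than the measured element concentrations:

```python
# PCA variance-contribution rates: standardize the data matrix, then read the
# contributions off the eigenvalues of the correlation matrix. Data are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
X = rng.lognormal(size=(15, 18))              # 15 regions x 18 detected elements
Z = (X - X.mean(0)) / X.std(0)                # autoscale each element
eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]  # eigenvalues, descending
contrib = eigvals / eigvals.sum()
print("PC1 contribution: %.1f%%" % (100 * contrib[0]))
print("cumulative (4 PCs): %.1f%%" % (100 * contrib[:4].sum()))
```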
McHugh, Lauren E J; Politi, Ioanna; Al-Fodeh, Rami S; Fleming, Garry J P
2017-09-01
To assess the cuspal deflection of standardised large mesio-occluso-distal (MOD) cavities in third molar teeth restored using conventional resin-based composite (RBC) or their bulk fill restorative counterparts, compared with the unbound condition, using a twin channel deflection measuring gauge. Following thermocycling, the cervical microleakage of the restored teeth was assessed to determine marginal integrity. Standardised MOD cavities were prepared in forty-eight sound third molar teeth and randomly allocated to six groups. Restorations were placed in conjunction with (and without) a universal bonding system, and resin restorative materials were irradiated with a light-emitting-diode light-curing-unit. The dependent variable was the restoration protocol: eight oblique increments for conventional RBCs or two horizontal increments for the bulk fill resin restoratives. The cumulative buccal and palatal cuspal deflections from a twin channel deflection measuring gauge were summed, and the restored teeth were thermally fatigued, immersed in 0.2% basic fuchsin dye for 24 h, sectioned and examined for cervical microleakage score. One-way analysis of variance (ANOVA) identified that third molar teeth restored using conventional RBC materials had significantly higher mean total cuspal deflection values compared with bulk fill resin restorations (all p<0.0001). For the conventional RBCs, Admira Fusion (bonded) third molar teeth had the significantly lowest microleakage scores (all p<0.001), while the Admira Fusion x-tra (bonded) bulk fill resin restored teeth had the significantly lowest microleakage scores compared with Tetric EvoCeram Bulk Fill (bonded and non-bonded) teeth (all p<0.001). Not all conventional RBCs or bulk fill resin restoratives behave in a similar manner when used to restore standardised MOD cavities in third molar teeth. It would appear that light irradiation of individual conventional RBCs or bulk fill resin restoratives may be problematic, such that material selection is vital in the absence of clinical data. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Wen, Li; Li, Dejun; Chen, Hao; Wang, Kelin
2017-10-01
Agricultural abandonment has been proposed as an effective way to enhance soil organic carbon (SOC) sequestration. Nevertheless, SOC sequestration in the long term is largely determined by whether the stable SOC fractions will increase. Here the dynamics of SOC fractions during post-agricultural succession were investigated in a karst region of southwest China using a space-for-time substitution approach. Cropland, grassland, shrubland and secondary forest were selected from areas underlain by dolomite and limestone, respectively. Density fractionation was used to separate bulk SOC into a free light fraction (FLFC) and a heavy fraction (HFC). FLFC contents were similar over dolomite and limestone, but bulk SOC and HFC contents were greater over limestone than over dolomite. FLFC content in the forest was greater than in the other vegetation types, but bulk SOC and HFC contents increased from the cropland through to the forest for areas underlain by dolomite. The contents of bulk SOC and its fractions were similar among the four vegetation types over limestone. The proportion of FLFC in bulk SOC was higher over dolomite than over limestone, whereas the inverse was true for the proportion of HFC, indicating that SOC over limestone was more stable. However, the proportions of both FLFC and HFC were similar among the four vegetation types, implying that SOC stability was not changed by cropland conversion. Exchangeable calcium explained most of the variance in HFC content. Our study suggests that lithology not only affects SOC content and its stability, but also modulates the dynamics of SOC fractions during post-agricultural succession. Copyright © 2017 Elsevier Ltd. All rights reserved.
Magnetic Properties of Nanoparticle Matrix Composites
2015-06-02
Recording materials with a large value of Ku include SmCo5, with Ku = 11-20 × 10⁷ erg/cm³ for a minimum stable particle size of 2.45 nm, and FePt with Ku... nanoparticles and the matrix compared with the bulk behavior of the soft and hard phases and ferromagnetic coupling. Subject terms: magnetic materials, ab initio methods, nanoparticles, nanocomposites, ferromagnetics.
Cabling design for phased arrays
NASA Technical Reports Server (NTRS)
Kruger, I. D.; Turkiewicz, L.
1972-01-01
The ribbon-cabling system used for the AEGIS phased array which provides minimum cable bulk, complete EMI shielding, rugged mechanical design, repeatable electrical characteristics, and ease of assembly and maintenance is described. The ribbon cables are 0.040-inch thick, and in widths up to 2 1/2 inches. Their terminations are molded connectors that can be grouped in a three-tier arrangement, with cable branching accomplished by a matrix-welding technique.
Thermodynamic properties by Equation of state of liquid sodium under pressure
NASA Astrophysics Data System (ADS)
Li, Huaming; Sun, Yongli; Zhang, Xiaoxiao; Li, Mo
Isothermal bulk modulus, molar volume, and speed of sound of molten sodium are calculated through an equation of state of a power-law form to good precision compared with the experimental data. The calculated internal energy data show a minimum along the isothermal lines, as in the previous result but with slightly larger values. The calculated values of isobaric heat capacity show an unexpected minimum under isothermal compression. The temperature and pressure derivatives of various thermodynamic quantities in liquid sodium are derived, and the contribution of entropy to the temperature and pressure derivatives of the isothermal bulk modulus is discussed. Expressions for the acoustical parameter and nonlinearity parameter are obtained from thermodynamic relations based on the equation of state. Both parameters for liquid sodium are calculated under high pressure along the isothermal lines using the available thermodynamic data and numerical derivatives. By comparison with the results from experimental measurements and quasi-thermodynamic theory, the calculated values are found to be very close at the melting point at ambient conditions. Furthermore, several other thermodynamic quantities are also presented. Funding: Scientific Research Starting Foundation of Taiyuan University of Technology, Shanxi Provincial Government ("100-talents program"), China Scholarship Council, and National Natural Science Foundation of China (NSFC) under Grant No. 11204200.
Influence of mixing conditions on the rheological properties and structure of capillary suspensions
Bossler, Frank; Weyrauch, Lydia; Schmidt, Robert; Koos, Erin
2017-01-01
The rheological properties of a suspension can be dramatically altered by adding a small amount of a secondary fluid that is immiscible with the bulk liquid. These capillary suspensions exist either in the pendular state, where the secondary fluid preferentially wets the particles, or the capillary state, where the bulk fluid is preferentially wetting. The yield stress, as well as the storage and loss moduli, depends on the size and distribution of secondary-phase droplets created during sample preparation. Enhanced droplet breakup leads to stronger sample structures. In capillary state systems, this can be achieved by increasing the mixing speed and time of turbulent mixing using a dissolver stirrer. In the pendular state, increased mixing speed also leads to better droplet breakup, but spherical agglomeration is favored at longer times, decreasing the yield stress. Additional mixing with a ball mill is shown to be beneficial to sample strength. The influence of the viscosity difference between the bulk and secondary fluid on droplet breakup is excluded by performing experiments with viscosity-matched fluids. These experiments show that the capillary state competes with the formation of Pickering emulsion droplets and is often more difficult to achieve than the pendular state. PMID:28194044
Plasma dynamics on current-carrying magnetic flux tubes
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1992-01-01
A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that the density minimum tends to deepen, primarily at the cathode end, and the entire potential drop becomes concentrated across the region of the density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
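For orientation, temporal estimators of this family compute a standardized variance F of the allele-frequency change and set Ne ≈ t/(2F) after subtracting the sampling contribution. The sketch below uses a Nei-Tajima-style Fc with hypothetical numbers; Pollak's statistic differs in detail:

```python
# Temporal-method estimate of effective population size from allele-frequency
# change (Nei & Tajima's Fc variant, not Pollak's exact statistic). Frequencies
# and sample sizes below are hypothetical.
import numpy as np

def ne_temporal(x, y, t, S0, St):
    """x, y: allele frequencies t generations apart; S0, St: sample sizes."""
    x, y = np.asarray(x), np.asarray(y)
    fc = np.mean((x - y) ** 2 / ((x + y) / 2 - x * y))  # per-allele Fc, averaged
    fc_corr = fc - 1 / (2 * S0) - 1 / (2 * St)          # remove sampling variance
    return t / (2 * fc_corr)

x = np.array([0.30, 0.55, 0.72])   # hypothetical frequencies, generation 0
y = np.array([0.38, 0.49, 0.66])   # hypothetical frequencies, generation 10
print(ne_temporal(x, y, t=10, S0=100, St=100))
```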
Effect of alcohol on the structure of cytochrome C: FCS and molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Amin, Md. Asif; Halder, Ritaban; Ghosh, Catherine; Jana, Biman; Bhattacharyya, Kankan
2016-12-01
The effect of ethanol on the size and structure of the protein cytochrome C (Cyt C) is investigated using fluorescence correlation spectroscopy (FCS) and molecular dynamics (MD) simulations. For the FCS studies, Cyt C is covalently labeled with a fluorescent probe, Alexa 488. FCS studies indicate that on addition of ethanol, the size of the protein varies non-monotonically. The size of Cyt C increases (i.e., the protein unfolds) on addition of alcohol (ethanol) up to a mole fraction of 0.2 (44.75% v/v) and decreases at higher alcohol concentration. In order to provide a molecular origin of this structural transition, we explore the conformational free energy landscape of Cyt C as a function of radius of gyration (Rg) at different compositions of the water-ethanol binary mixture using MD simulations. Cyt C exhibits a minimum at Rg ≈ 13 Å in bulk water (0% alcohol). Upon increasing ethanol concentration, a second minimum appears in the free energy surface at gradually larger Rg up to χEtOH ≈ 0.2 (44.75% v/v). This suggests gradual unfolding of the protein. At a higher concentration of alcohol (χEtOH > 0.2), the minimum at large Rg vanishes, indicating compaction. Analysis of the contact map and the solvent organization around the protein indicates a preferential solvation of the hydrophobic residues by ethanol up to χEtOH = 0.2 (44.75% v/v), and this causes the gradual unfolding of the protein. At high concentration (χEtOH = 0.3 (58% v/v)), due to structural organization in the bulk water-ethanol binary mixture, the extent of preferential solvation by ethanol decreases. This causes a structural transition of Cyt C towards a more compact state.
NASA Astrophysics Data System (ADS)
Li, Zhi; Jin, Jiming
2017-11-01
Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method performed realistically for intersite and intervariable correlation reproduction and hydrological modeling. The SWAT model was found to be well calibrated with monthly streamflow with a model efficiency coefficient of 0.78. It was projected that the annual mean precipitation would not change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that the intra-annual and interannual variations in the future climate would be greater than in the current climate. The total annual streamflow was found to change insignificantly but its variance ratios of 2011-2040 to 1961-2005 increased by 1.25 ± 0.55. Streamflow variability was predicted to become greater over most months on the seasonal scale because of the increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.
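Step (iii) can be pictured as a rank shuffle in the spirit of the Schaake shuffle: synthetic values at each site are reordered so their ranks match an observed multisite field, restoring intersite correlation without altering marginal distributions. The paper's exact procedure may differ; a minimal sketch:

```python
# Distribution-free rank shuffle: impose the rank structure of an observed
# multisite field onto synthetic series, preserving each site's marginals.
import numpy as np

def rank_shuffle(synthetic, observed):
    """synthetic, observed: arrays of shape (n_days, n_sites)."""
    out = np.empty_like(synthetic)
    for j in range(synthetic.shape[1]):
        ranks = np.argsort(np.argsort(observed[:, j]))  # ranks of observed series
        out[:, j] = np.sort(synthetic[:, j])[ranks]     # impose those ranks
    return out

rng = np.random.default_rng(2)
obs = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=365)
syn = rng.normal(size=(365, 2))                         # uncorrelated synthetic sites
shuffled = rank_shuffle(syn, obs)
print(np.corrcoef(shuffled.T)[0, 1])                    # close to the observed 0.8
```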
Crow, James F
2008-12-01
Although molecular methods, such as QTL mapping, have revealed a number of loci with large effects, it is still likely that the bulk of quantitative variability is due to multiple factors, each with small effect. Typically, these have a large additive component. Conventional wisdom argues that selection, natural or artificial, uses up additive variance and thus depletes its supply. Over time, the variance should be reduced, and at equilibrium be near zero. This is especially expected for fitness and traits highly correlated with it. Yet, populations typically have a great deal of additive variance, and do not seem to run out of genetic variability even after many generations of directional selection. Long-term selection experiments show that populations continue to retain seemingly undiminished additive variance despite large changes in the mean value. I propose that there are several reasons for this. (i) The environment is continually changing, so that what was formerly most fit no longer is. (ii) There is an input of genetic variance from mutation, and sometimes from migration. (iii) As intermediate-frequency alleles increase in frequency towards one, producing less variance (as p → 1, p(1 − p) → 0), others that were originally near zero become more common and increase the variance. Thus, a roughly constant variance is maintained. (iv) There is always selection for fitness and for characters closely related to it. To the extent that the trait is heritable, later generations inherit a disproportionate number of genes acting additively on the trait, thus increasing genetic variance. For these reasons a selected population retains its ability to evolve. Of course, genes with large effect are also important. Conspicuous examples are the small number of loci that changed teosinte to maize, and major phylogenetic changes in the animal kingdom. The relative importance of these, along with duplications, chromosome rearrangements, horizontal transmission and polyploidy, is yet to be determined. It is likely that only a case-by-case analysis will provide the answers. Despite the difficulties that complex interactions cause for evolution in Mendelian populations, such populations nevertheless evolve very well. Long-lasting species must have evolved mechanisms for coping with such problems. Since such difficulties do not arise in asexual populations, a comparison of epistatic patterns in closely related sexual and asexual species might provide some important insights.
Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.
Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S
2004-01-01
StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
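An operating characteristic curve under these assumptions follows directly: given a variance function and the Normal model, the probability of accepting a lot is the Normal probability that the measured value falls below the limit. The numbers below are hypothetical, not the fitted Cry9C variance equations:

```python
# OC curve for a Normal-model sampling plan: P(accept) versus true concentration.
# sampling_var, analytic_var, and the limit are hypothetical stand-ins.
import numpy as np
from scipy.stats import norm

def oc_curve(true_conc, limit, sampling_var, analytic_var, n_samples):
    total_sd = np.sqrt(sampling_var / n_samples + analytic_var)
    return norm.cdf(limit, loc=true_conc, scale=total_sd)  # P(test result <= limit)

conc = np.linspace(0.0, 2.0, 5)            # hypothetical true Cry9C levels
for n in (1, 2, 4):                        # larger n steepens the curve,
    print(n, np.round(oc_curve(conc, limit=1.0, sampling_var=0.4,
                               analytic_var=0.05, n_samples=n), 2))
```

Increasing the number of samples steepens the curve, simultaneously reducing the seller's risk (false positives) and the buyer's risk (false negatives).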
Disruption rates for one vulnerable soil in Organ Pipe Cactus National Monument, Arizona, USA
Webb, Robert H.; Esque, Todd C.; Nussear, Kenneth E.; Sturm, Mark
2013-01-01
Rates of soil disruption from hikers and vehicle traffic are poorly known, particularly for arid landscapes. We conducted an experiment in Organ Pipe Cactus National Monument (ORPI) in western Arizona, USA, on an air-dry very fine sandy loam that is considered to be vulnerable to disruption. We created variable-pass tracks using hikers, an all-terrain vehicle (ATV), and a four-wheel drive vehicle (4WD) and measured changes in cross-track topography, penetration depth, and bulk density. Hikers (one pass = 5 hikers) increased bulk density and altered penetration depth but caused minimal surface disruption up to 100 passes; a minimum of 10 passes were required to overcome surface strength of this dry soil. Both ATV and 4WD traffic significantly disrupted the soil with one pass, creating deep ruts with increasing passes that rendered the 4WD trail impassable after 20 passes. Despite considerable soil loosening (dilation), bulk density increased in the vehicle trails, and lateral displacement created berms of loosened soil. This soil type, when dry, can sustain up to 10 passes of hikers but only one vehicle pass before significant soil disruption occurs; greater disruption is expected when soils are wet. Bulk density increased logarithmically with applied pressure from hikers, ATV, and 4WD.
Efficiency of polymerization of bulk-fill composite resins: a systematic review.
Reis, André Figueiredo; Vestphal, Mariana; Amaral, Roberto Cesar do; Rodrigues, José Augusto; Roulet, Jean-François; Roscoe, Marina Guimarães
2017-08-28
This systematic review assessed the literature to evaluate the efficiency of polymerization of bulk-fill composite resins at 4 mm restoration depth. PubMed, Cochrane, Scopus and Web of Science databases were searched with no restrictions on year, publication status, or article language. Selection criteria included studies that evaluated bulk-fill composite resin inserted in a minimum thickness of 4 mm, followed by curing according to the manufacturers' instructions; presented sound statistical data; and included a comparison with a control group and/or a reference measurement of quality of polymerization. The evidence level was evaluated by a qualitative scoring system and classified as high, moderate or low. A total of 534 articles were retrieved in the initial search. After the review process, only 10 full-text articles met the inclusion criteria. Most articles included (80%) were classified as high evidence level. Among several techniques, microhardness was the method most frequently performed by the studies included in this systematic review. Irrespective of the in vitro method performed, bulk-fill RBCs were partially likely to fulfill the important requirement of curing properly at 4 mm of cavity depth, as measured by depth of cure and/or degree of conversion. In general, low-viscosity bulk-fill composites performed better in terms of polymerization efficiency than high-viscosity ones.
Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells
2017-11-01
…approach including row scaling and Ward's minimum variance method was chosen. This analysis revealed two groups of four samples each.
2017-12-01
…carefully to ensure only the minimum information needed for effective management control is requested. Requires cost-benefit analysis and PM… The baseline offers metrics that highlight performance trends and program variances. This information provides Program Managers and higher levels of… The existing training philosophy is effective only if the managers using the information have well-trained and experienced personnel that can…
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
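For context, the pair-count estimator often attributed to this paper is ξ = DD·RR/DR² − 1, built from data-data, random-random, and data-random pair counts. A toy one-dimensional sketch follows; the paper's treatment of pair weighting and the selection function is not reproduced here:

```python
# Hamilton-style pair-count estimator xi = DD*RR/DR^2 - 1 on a toy 1D catalog.
# The "galaxy" and random positions are synthetic; real surveys need a random
# catalog matching the survey geometry and selection function.
import numpy as np

def pair_counts(a, b, rmin, rmax):
    d = np.abs(a[:, None] - b[None, :])      # all pairwise separations
    return np.sum((d >= rmin) & (d < rmax))

rng = np.random.default_rng(3)
data = rng.uniform(0, 100, 500)              # toy "galaxy" positions
rand = rng.uniform(0, 100, 2000)             # random catalog, same geometry

rmin, rmax = 5.0, 6.0
dd = pair_counts(data, data, rmin, rmax) / (len(data) * (len(data) - 1))
rr = pair_counts(rand, rand, rmin, rmax) / (len(rand) * (len(rand) - 1))
dr = pair_counts(data, rand, rmin, rmax) / (len(data) * len(rand))
print(dd * rr / dr**2 - 1)                   # ~0 for this unclustered toy catalog
```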
The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey
2004-05-10
…aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring… papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation)… minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well…
NASA Astrophysics Data System (ADS)
Marques, Márcia T. A.; Moreira, Gregori de A.; Pinero, Maciel; Oliveira, Amauri P.; Landulfo, Eduardo
2018-04-01
This study aims to compare the planetary boundary layer height (PBLH) values estimated from radiosonde data through the bulk Richardson number (BRN) method and from Doppler lidar measurements through the carrier-to-noise ratio (CNR) method, which locates the maximum of the variance of the CNR profile. The measurement campaign was carried out during the summer of 2015/2016 in the city of São Paulo. Despite the conceptual difference between these methods, the results show great agreement between them.
NASA Astrophysics Data System (ADS)
Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang
2018-03-01
This paper explores two types of mathematical functions to fit single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, waveforms buried in noise or originating from ice/land are removed, using a peak-to-mean ratio and the cosine similarity of the waveform, before wind speeds are retrieved. Single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations based on principal component analysis (PCA), a minimum variance (MV) estimator and a Back Propagation (BP) network are implemented. The results indicate that, compared to the best results of the single-parameter observation, the approaches based on principal component analysis and minimum variance could not significantly improve retrieval accuracy; however, the BP networks obtain an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for single- and full-frequency waveforms, respectively.
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
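The idea can be sketched compactly: project the element-space snapshots onto a few orthonormalized Legendre polynomials, then apply standard MV weights in the reduced space. The sketch below uses random stand-in data and diagonal loading, both assumptions; it is not the authors' implementation:

```python
# Beamspace MV beamforming with a Legendre basis (illustrative sketch):
# transform element data with an orthonormal basis V, then compute
# w = R^{-1} a / (a^H R^{-1} a) in the low-dimensional beamspace.
import numpy as np
from numpy.polynomial import legendre

def mv_weights(R, a):
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

n_elem, n_basis, n_snap = 64, 5, 200
x = np.linspace(-1, 1, n_elem)
V = np.column_stack([legendre.Legendre.basis(k)(x) for k in range(n_basis)])
V, _ = np.linalg.qr(V)                        # orthonormalize the basis columns

rng = np.random.default_rng(4)
snaps = rng.normal(size=(n_elem, n_snap))     # stand-in delayed element data
y = V.T @ snaps                               # transform to reduced space
R = y @ y.T / n_snap + 1e-3 * np.eye(n_basis) # sample covariance + diagonal loading
a = V.T @ np.ones(n_elem)                     # steering vector after focusing delays
w = mv_weights(R, a)
output = w.conj() @ y                         # beamformed output per snapshot
```

Because the covariance matrix is only n_basis × n_basis instead of n_elem × n_elem, the matrix inversion cost drops sharply, which is the computational point of the transformation.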
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are often unable to form the radiation beam precisely towards the target user and are not effective enough at reducing interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization than the PSO technique. The algorithms were implemented in MATLAB.
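For reference, the textbook closed-form LCMV solution that the AI techniques above start from is w = R⁻¹C (Cᴴ R⁻¹ C)⁻¹ f. A minimal sketch on a hypothetical uniform linear array; the array geometry, angles, and diagonal loading are assumptions, not the paper's setup:

```python
# Closed-form LCMV weights: unit gain at the target direction, a null at the
# interferer. Array parameters and covariance data below are hypothetical.
import numpy as np

def steering(n, theta_deg, spacing=0.5):
    """Steering vector for an n-element ULA, spacing in wavelengths."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

n = 10
C = np.column_stack([steering(n, 0.0), steering(n, 40.0)])  # target and interferer
f = np.array([1.0, 0.0])                                    # unit gain, null

rng = np.random.default_rng(5)
X = (rng.normal(size=(n, 500)) + 1j * rng.normal(size=(n, 500))) / np.sqrt(2)
R = X @ X.conj().T / 500 + 0.1 * np.eye(n)   # sample covariance + diagonal loading

RiC = np.linalg.solve(R, C)                  # R^{-1} C
w = RiC @ np.linalg.solve(C.conj().T @ RiC, f)
print(np.abs(w.conj() @ C))                  # approx [1, 0]: constraints satisfied
```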
Demographics of an ornate box turtle population experiencing minimal human-induced disturbances
Converse, S.J.; Iverson, J.B.; Savidge, J.A.
2005-01-01
Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.
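The quoted 58% figure can be reproduced closely with a simple simulation of multiplicative growth. The sketch below assumes lognormal annual growth rates matched to the reported mean and process variance; the lognormal choice is a modeling assumption, as the authors' exact simulation is not described here:

```python
# Stochastic projection of population change: draw annual growth rates with the
# reported mean and temporal process variance, then count declining trajectories.
import numpy as np

rng = np.random.default_rng(6)
mean_lambda, var_lambda, years, n_runs = 1.006, 0.029, 20, 100_000

# lognormal draws matching the mean and variance of lambda (assumed distribution)
sigma2 = np.log(1 + var_lambda / mean_lambda**2)
mu = np.log(mean_lambda) - sigma2 / 2
lam = rng.lognormal(mu, np.sqrt(sigma2), size=(n_runs, years))

final = lam.prod(axis=1)                  # population ratio after 20 years
print((final < 1).mean())                 # ~0.58, matching the reported figure
```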
Topuz, Osman Kadir; Özvural, Emin Burçin; Zhao, Qin; Huang, Qingrong; Chikindas, Michael; Gölükçü, Muharrem
2016-07-15
The purpose of this research was to investigate the antimicrobial effects of nanoemulsions of anise oil (AO) on the survival of the common foodborne pathogens Listeria monocytogenes and Escherichia coli O157:H7. A series of emulsions containing different levels of anise oil were prepared as potential antimicrobial delivery systems. The antimicrobial activities of bulk anise oil and its emulsions (coarse and nano) were tested by minimum inhibitory concentration and time-kill assays. Our results showed that bulk anise oil reduced the populations of E. coli O157:H7 and L. monocytogenes by 1.48 and 0.47 log cfu/ml, respectively, after 6 h of contact time. However, under the same conditions, the anise oil nanoemulsion (AO75) reduced E. coli O157:H7 and L. monocytogenes counts by 2.51 and 1.64 log cfu/ml, respectively. Physicochemical and microbial analyses indicated that both nano and coarse emulsions of anise oil showed better long-term physicochemical stability and antimicrobial activity than bulk anise oil. Copyright © 2016 Elsevier Ltd. All rights reserved.
Oil point and mechanical behaviour of oil palm kernels in linear compression
NASA Astrophysics Data System (ADS)
Kabutey, Abraham; Herak, David; Choteborsky, Rostislav; Mizera, Čestmír; Sigalingging, Riswanti; Akangbe, Olaosebikan Layi
2017-07-01
The study describes the oil point and mechanical properties of roasted and unroasted bulk oil palm kernels under compression loading, for which the available literature information is very limited. A universal compression testing machine and a pressing vessel of 60 mm diameter with a plunger were used, applying a maximum force of 100 kN at speeds ranging from 5 to 25 mm min⁻¹. The initial pressing height of the bulk kernels was 40 mm. The oil point was determined by a litmus test for each deformation level of 5, 10, 15, 20, and 25 mm at the minimum speed of 5 mm min⁻¹. The measured parameters were the deformation, deformation energy, oil yield, oil point strain and oil point pressure. Clearly, the roasted bulk kernels required less deformation energy than the unroasted kernels for recovering the kernel oil. However, neither type of kernel was permanently deformed. The average oil point strain was determined to be 0.57. The study is an essential contribution to pursuing innovative methods for processing palm kernel oil in rural areas of developing countries.
NASA Astrophysics Data System (ADS)
Inhofer, A.; Duffy, J.; Boukhicha, M.; Bocquillon, E.; Palomo, J.; Watanabe, K.; Taniguchi, T.; Estève, I.; Berroir, J. M.; Fève, G.; Plaçais, B.; Assaf, B. A.
2018-02-01
A metal-dielectric topological-insulator capacitor device based on hexagonal-boron-nitride- (h-BN) encapsulated CVD-grown Bi2Se3 is realized and investigated in the radio-frequency regime. The rf quantum capacitance and device resistance are extracted for frequencies as high as 10 GHz and studied as a function of the applied gate voltage. The superior-quality h-BN gate dielectric combined with the optimized transport characteristics of CVD-grown Bi2Se3 (n ≈ 10¹⁸ cm⁻³ in 8 nm) on h-BN allows us to attain a bulk-depleted regime by dielectric gating. A quantum-capacitance minimum and a linear variation of the capacitance with the chemical potential are observed, revealing a Dirac regime. The topological surface state in proximity to the gate is seen to reach charge neutrality, but the bottom surface state remains charged and capacitively coupled to the top via the insulating bulk. Our work paves the way toward the implementation of topological materials in rf devices.
NASA Technical Reports Server (NTRS)
Allton, J. H.; Bevill, T. J.
2003-01-01
The strategy of raking rock fragments from the lunar regolith as a means of acquiring representative samples has wide support due to science return, spacecraft simplicity (reliability) and economy [3, 4, 5]. While there exists widespread agreement that raking or sieving the bulk regolith is good strategy, there is lively discussion about the minimum sample size. Advocates of consortium studies desire fragments large enough to support petrologic and isotopic studies. Fragments from 5 to 10 mm are thought adequate [4, 5]. Yet, Jolliff et al. [6] demonstrated use of 2-4 mm fragments as representative of larger rocks. Here we make use of curatorial records and sample catalogs to give a different perspective on minimum sample size for a robotic sample collector.
Process audits versus product quality monitoring of bulk milk.
Velthuis, A G J; van Asseldonk, M A P M
2011-01-01
Assessment of milk quality is based on bulk milk testing and farm certification based on process quality audits. It is unknown to what extent dairy farm audits improve milk quality. A statistical analysis was conducted to quantify possible associations between bulk milk testing and dairy farm audits. The analysis comprised 64,373 audit outcomes on 26,953 dairy farms, which were merged with all conducted laboratory tests of bulk milk samples 12 mo before the audit. Each farm audit record included 271 binary checklist items and 52 attention point variables (given to farmers if serious deviations were observed), both indicating possible deviations from the desired farm situation. Test results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acid (FFA), and milk sediment (SED). Results show that numerous audit variables were related to bulk milk test results, although the goodness of fit of the models was generally low. Cow hygiene, clean cubicles, hygiene of milking parlor, and utility room were positively correlated with superior product quality, mainly with respect to SCC, TBC, BAB, FPD, FFA, and SED. Animal health or veterinary drugs management (i.e., drug treatment recording, marking of treated animals, and storage of veterinary drugs) related to SCC, FPD, FFA, and SED. The availability of drinking water was related to TBC, BAB, FFA, and SED, whereas maintenance of the milking equipment was related mainly to SCC, FPD, and FFA. In summary, bulk milk quality and farm audit outcomes are, to some degree, associated: if dairy farms are assessed negatively on specific audit aspects, the bulk milk quality is more likely to be inferior. However, the proportion of the total variance in milk test results explained by audits ranged between 4 and 13% (depending on the specific bulk milk test), showing that auditing dairy farms provides additional information but has a limited association with the outcome of a product quality control program. This study suggests that farm audits could be streamlined to include only relevant checklist items and that bulk milk quality monitoring could be used as a basis for selecting farms for more or less frequent audits. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.
NASA Astrophysics Data System (ADS)
Cvikl, B.
2010-01-01
The closed solution for the internal electric field and the total charge density, derived in the drift-diffusion approximation for a model of a single-layer organic semiconductor structure characterized by a bulk shallow single trap-charge energy level, is presented. The solutions for two examples of electric field boundary conditions are tested on room-temperature current density-voltage data of the electron-conducting aluminum/tris(8-hydroxyquinoline) aluminum/calcium structure [W. Brütting et al., Synth. Met. 122, 99 (2001)], for which j_exp ∝ V_a^3.4 within the bias interval 0.4 V ≤ V_a ≤ 7 V. In each case investigated, the apparent electron mobility determined at a given bias is distributed within a finite interval of values. The bias dependence of the logarithm of their lower limit, i.e., their minimum values, is found in each case to be, to a good approximation, proportional to the square root of the applied electric field. On account of the bias dependence incorporated in the minimum value of the apparent electron mobility, the spatial distribution of the organic bulk electric field as well as the total charge density turn out to be bias independent. The first case investigated is based on the boundary condition of zero electric field at the electron-injection interface. It is shown that for minimum-valued apparent mobilities, a strong but finite accumulation of electrons close to the anode is obtained, which characterizes the inverted space-charge-limited current (SCLC) effect. The second example refers to the internal electric field allowing for self-adjustment of its boundary values. The total electron charge density is then found typically to be of U shape, which may, depending on the parameters, peak at both or at either Alq3 boundary. It is this example in which the proper SCLC effect is consequently predicted. In each of the above two cases, the calculations predict minimum values of the apparent electron mobility that substantially exceed the corresponding published measurements. For this reason the effect of the drift term alone is additionally investigated. On the basis of the published empirical electron mobilities, with the diffusion term revoked, it is shown that the steady-state electron current density within the Al/Alq3 (97 nm)/Ca single-layer organic structure may well be pictured within a drift-only interpretation of the charge carriers within the Alq3 organic layer characterized by a single (shallow) trap energy level. In order to arrive at this result, it is necessary that the nonzero electric field, calculated to exist at the electron-injecting Alq3/Ca boundary, be appropriately accounted for in the computation.
Nagel, Katrin; Bishop, Nicholas E; Schlegel, Ulf J; Püschel, Klaus; Morlock, Michael M
2017-02-01
The strength of the cement-bone interface in tibial component fixation depends on the morphology of the cement mantle. The purpose of this study was to identify thresholds of cement morphology parameters to maximize fixation strength using a minimum amount of cement. Twenty-three cadaveric tibiae were analyzed that had been implanted with tibial trays in previous studies and for which the pull-out strength of the tray had been measured. Specimens were separated into a group failing at the cement-bone interface (INTERFACE) and one failing in the bulk bone (BULK). Maximum pull-out strength corresponds to the ultimate strength of the bulk bone if the cement-bone interface is sufficiently strong. 3D models of the cement mantle in situ were reconstructed from computed tomography scans. The influences of bone mineral density and 6 cement morphology parameters (reflecting cement penetration, bone-cement interface, cement volume) on pull-out strength of the BULK group were determined using multiple regression analysis. The threshold of each parameter for classification of the specimens into either group was determined using receiver operating characteristic analysis. Cement penetration exceeding a mean of 1.1 mm or with a maximum of 5.6 mm exclusively categorized all BULK bone failure specimens. Failure strength of BULK failure specimens increased with bone mineral density (R 2 = 0.67, P < .001) but was independent of the cement morphology parameters. To maximize fixation strength, a mean cement penetration depth of at least 1.1 mm should be achieved during tibial tray cementing. Copyright © 2016 Elsevier Inc. All rights reserved.
Feldspathic Lunar Meteorite Graves Nunataks 06157, a Magnesian Piece of the Lunar Highlands Crust
NASA Technical Reports Server (NTRS)
Zeigler, Ryan A.; Korotev, R. L.
2012-01-01
To date, 49 feldspathic lunar meteorites (FLMs) have been recovered, likely representing a minimum of 35 different sample locations in the lunar highlands. The compositional variability among FLMs far exceeds the variability observed among highland samples in the Apollo and Luna sample suites. Here we discuss in detail one of the compositional end members of the FLM suite, Graves Nunataks (GRA) 06157, which was collected by the 2006-2007 ANSMET field team. At 0.79 g, GRA 06157 is the smallest lunar meteorite recovered so far. Despite its small size, its highly feldspathic and highly magnesian composition is intriguing. Although preliminary bulk compositions have been reported, thus far no petrographic descriptions are in the literature. Here we expand upon the bulk compositional data, including major-element compositions, and provide a detailed petrographic description of GRA 06157.
Future mission studies: Preliminary comparisons of solar flux models
NASA Technical Reports Server (NTRS)
Ashrafi, S.
1991-01-01
The results of comparisons of solar flux models are presented. (The λ = 10.7 cm radio flux is the best indicator of the strength of ionizing radiations, such as solar ultraviolet and x-ray emissions, that directly affect atmospheric density and thereby change the orbital lifetime of satellites. Accurate forecasting of the solar flux F10.7 is therefore crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a combined linear, unbiased minimum-variance estimate that properly blends all three models into one that minimizes the variance is also discussed. Such a combination retains the physics inherent in each model. This is considered the dead end of purely statistical approaches to solar flux forecasting, short of any nonlinear chaotic approach.
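As a sketch of the combination idea: if the three forecasts are treated as unbiased with known, uncorrelated error variances, the minimum-variance blend weights each model by its inverse error variance. The function name and all numbers below are illustrative, not from the report.

```python
import numpy as np

def min_variance_combination(forecasts, error_variances):
    """Combine unbiased forecasts with weights proportional to inverse
    error variances; the weights sum to 1, so the blend stays unbiased
    and has the smallest variance of any such linear combination
    (assuming uncorrelated model errors)."""
    w = 1.0 / np.asarray(error_variances, dtype=float)
    w /= w.sum()
    combined = np.dot(w, forecasts)
    combined_var = 1.0 / np.sum(1.0 / np.asarray(error_variances))
    return combined, combined_var, w

# Hypothetical F10.7 forecasts and error variances for the
# Schatten, MSFC, and NOAA models (solar flux units, sfu):
f = np.array([190.0, 205.0, 198.0])
v = np.array([225.0, 400.0, 100.0])   # sfu^2, illustrative only
print(min_variance_combination(f, v))
```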
NASA Astrophysics Data System (ADS)
Sun, Xuelian; Liu, Zixian
2016-02-01
In this paper, a new estimator of the correlation matrix is proposed, composed of detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets and can be decomposed into different time scales. These properties of DCCA make it possible to improve investment performance and to investigate the scale behavior of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant for risk management and could be used to optimize portfolio selection.
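For reference, the MVP weights used in such studies follow from the covariance matrix alone. The sketch below (illustrative names and numbers; the correlation matrix may be Pearson- or DCCA-based) computes the global minimum-variance weights w = C^-1 1 / (1' C^-1 1); note the weights can be negative unless a no-short-selling constraint is added.

```python
import numpy as np

def min_variance_weights(corr, vols):
    """Global minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1),
    where C is the covariance matrix built from a correlation matrix
    (Pearson or DCCA-based) and per-asset volatilities."""
    cov = np.outer(vols, vols) * corr     # covariance from corr and vols
    ones = np.ones(len(vols))
    w = np.linalg.solve(cov, ones)        # one linear solve, no inverse
    return w / w.sum()

corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])        # illustrative coefficients
vols = np.array([0.02, 0.03, 0.025])      # weekly return std devs, hypothetical
print(min_variance_weights(corr, vols))
```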
Demodulation of messages received with a low signal-to-noise ratio
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Quignon, T.; Romann, B.
The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-lock loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computational savings compared to conventional realizations. Nominal operation has been verified down to a signal energy-to-noise ratio of -3 dB on a QPSK demodulator.
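As an illustration of how a hypothesis test on the phase signal simplifies: for QPSK with symbols at phases π/4 + kπ/2, choosing the most likely symbol reduces to picking the nearest reference phase (equivalently, the signs of I and Q). The sketch below assumes ideal carrier and timing recovery; all names and noise levels are illustrative.

```python
import numpy as np

def qpsk_decide(samples):
    """Decide QPSK symbols from complex baseband samples by a
    nearest-phase hypothesis test: with symbols at angles
    pi/4 + k*pi/2, the test reduces to rounding the phase to the
    nearest quadrant center."""
    phase = np.angle(samples)                        # work on the phase signal
    k = np.round((phase - np.pi / 4) / (np.pi / 2)) % 4
    return k.astype(int)                             # symbol index 0..3

rng = np.random.default_rng(0)
sent = rng.integers(0, 4, 1000)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * sent))
noisy = symbols + 0.5 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
errors = np.count_nonzero(qpsk_decide(noisy) != sent)
```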
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a "consider" mode: their estimates are not improved, but their associated uncertainties are permitted to affect filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
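A minimal sketch of one measurement update in such a filter, reduced to scalars: the "consider" uncertainty C inflates the innovation variance and damps the gain, but the considered parameter itself is never corrected. The function and all numbers are hypothetical, not from the AE processing.

```python
def consider_update(x, P, z, R, H=1.0, Hc=1.0, C=0.0):
    """One minimum-variance (Kalman-type) measurement update in which a
    'consider' parameter contributes uncertainty C to the innovation
    but is itself never improved; only the state x is corrected."""
    S = H * P * H + Hc * C * Hc + R   # innovation variance incl. considered term
    K = P * H / S                     # gain damped by the considered uncertainty
    x_new = x + K * (z - H * x)
    P_new = (1.0 - K * H) * P         # equals P - K*S*K for this scalar case
    return x_new, P_new

# Hypothetical scaled-density state, accelerometer-derived measurements,
# measurement noise R, and considered probe-parameter variance C:
x, P = 1.0, 0.25
for z in [1.15, 1.08, 1.12]:
    x, P = consider_update(x, P, z, R=0.04, C=0.09)
```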
Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system
NASA Astrophysics Data System (ADS)
Bai, Jianbo; Li, Yang; Chen, Jianhao
2018-02-01
The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum-variance evaluation, the adaptive control method was used to realize better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance.
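The abstract does not give the plant model, so as a hedged illustration of the minimum-variance idea only: for a first-order ARX plant, the control that minimizes the output variance cancels the predictable part of the next output, leaving only the unpredictable noise. All parameter values below are hypothetical.

```python
import numpy as np

def mv_control(y, a=0.8, b=0.5, y_ref=7.0):
    """Minimum-variance control for the hypothetical ARX(1) plant
    y[t+1] = a*y[t] + b*u[t] + e[t+1]: choose u so the predictable part
    of y[t+1] equals the setpoint, leaving only the noise term e."""
    return (y_ref - a * y) / b

rng = np.random.default_rng(1)
y = 10.0                              # chilled-water temperature, illustrative
for _ in range(50):
    u = mv_control(y)
    y = 0.8 * y + 0.5 * u + rng.normal(scale=0.1)
# y now fluctuates around y_ref with variance equal to the noise variance
```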
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand, with varying concentrations of aluminum, were made for testing the system prior to its application in human studies. A spectral decomposition model and a photopeak-fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and to determine the model with the best performance and the lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak-fitting model with the inverse-variance weighted mean both provided better results than the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) than the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
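A sketch of the inverse-variance weighted mean used in the photopeak-fitting model: each estimate is weighted by the reciprocal of its variance, and the variance of the combined value is the reciprocal of the summed weights, which is what drives the detection limit down. The numbers are illustrative, not measured values.

```python
import numpy as np

def inverse_variance_mean(values, variances):
    """Inverse-variance weighted mean of repeated estimates; the
    combined variance is the reciprocal of the summed weights."""
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * values) / np.sum(w)
    return mean, 1.0 / np.sum(w)

# Hypothetical per-spectrum aluminum peak estimates and their variances:
vals = np.array([4.8, 5.6, 5.1, 4.9])
vars_ = np.array([0.30, 0.55, 0.40, 0.35])
mean, var = inverse_variance_mean(vals, vars_)
print(f"weighted mean = {mean:.2f}, sd = {var**0.5:.2f}")
```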
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with the current population size estimate, and (3) whether a population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to become involved in recovery planning to improve access to quantitative data.
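As a sketch of the suggested minimum-detectable-difference calculation, assuming a two-sided z-test comparing two survey means with a common standard deviation (the survey numbers below are hypothetical):

```python
from scipy.stats import norm

def minimum_detectable_difference(sd, n, alpha=0.05, power=0.8):
    """Smallest change in mean population size detectable between two
    surveys of n counts each with standard deviation sd, for a
    two-sided test at level alpha with the given power."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) * (2 * sd**2 / n) ** 0.5

# Hypothetical survey: sd of 40 animals across n = 12 annual counts
print(minimum_detectable_difference(sd=40, n=12))
```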
Lekking without a paradox in the buff-breasted sandpiper
Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.
1997-01-01
Females in lek-breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff-breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single-locus minisatellite DNA probes to provide the first evidence from a lek-breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff-breasted sandpipers. The behavior of other lek-breeding birds is sufficiently similar to that of buff-breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.
Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming
Karlinger, M.R.; Skrivan, James A.
1981-01-01
Kriging is a statistical estimation technique for regionalized variables that exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made: one assuming no drift in precipitation, and one assuming a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
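A minimal ordinary-kriging sketch under an assumed spherical semivariogram (the report fitted its own semivariograms; the coordinates, values, and variogram parameters below are illustrative): the weights come from one linear solve that enforces unbiasedness, and the kriging variance gives the confidence interval.

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, vrange=50.0, nugget=0.0):
    """Ordinary kriging point estimate with a spherical semivariogram.
    The weights solve a linear system enforcing unbiasedness (weights
    sum to 1) and minimum estimation variance."""
    def gamma(h):                         # spherical semivariogram model
        h = np.minimum(h / vrange, 1.0)
        return nugget + (sill - nugget) * (1.5 * h - 0.5 * h**3)

    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]               # weights and Lagrange multiplier
    estimate = w @ values
    kriging_variance = b[:n] @ w + mu     # width of the confidence interval
    return estimate, kriging_variance

# Illustrative precipitation data (km coordinates, inches per year):
coords = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0], [40.0, 30.0]])
values = np.array([14.0, 12.5, 16.0, 11.0])
print(ordinary_kriging(coords, values, target=np.array([20.0, 15.0])))
```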
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decision-making performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum-fuel situation. Seven-point Likert-type scales were used to rate variables based on a model of crew coordination and decision making. The variables were based on concepts such as decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model served in turn as dependent variables in a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variance of decision efficiency, crew coordination, and command reversal was in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-component, varimax factor analysis supported the model structure suggested by the regression analyses.
Signal-dependent noise determines motor planning
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Wolpert, Daniel M.
1998-08-01
When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
NASA Astrophysics Data System (ADS)
Efremova, T. T.; Avrova, A. F.; Efremov, S. P.
2016-09-01
The approaches of multivariate statistics have been used for the numerical classification of morphogenetic types of moss litters in swampy spruce forests according to their physicochemical properties (ash content, decomposition degree, bulk density, pH, mass, and thickness). Three clusters of moss litters (peat, peaty, and high-ash peaty) have been specified. Classification functions for identifying new objects have been calculated and evaluated. The degree of decomposition and the ash content are the main classification parameters of litters, though all other characteristics are also statistically significant. The final prediction accuracy of the assignment of a litter to a particular cluster is 86%. Two leading factors participating in the clustering of litters have been determined. The first factor, the degree of transformation of plant remains (quality), accounts for 49% of the total variance; the second factor, the accumulation rate (quantity), accounts for 26% of the total variance. The morphogenetic structure and physicochemical properties of the clusters of moss litters are characterized.
Plasma properties of driver gas following interplanetary shocks observed by ISEE-3
NASA Technical Reports Server (NTRS)
Zwickl, R. D.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Gosling, J. T.; Smith, E. J.
1983-01-01
Plasma fluid parameters were calculated from solar wind and magnetic field data to determine the characteristic properties of driver gas following a select subset of interplanetary shocks. Of 54 shocks observed from August 1978 to February 1980, 9 contained a well-defined driver gas that was clearly identifiable by a discontinuous decrease in the average proton temperature. While helium enhancements were present downstream of the shock in all 9 of these events, only about half of them contained simultaneous changes in the two quantities. Simultaneous with the drop in proton temperature, the helium and electron temperatures decreased abruptly. In some cases the proton temperature depression was accompanied by a moderate increase in magnetic field magnitude with an unusually low variance, by a small decrease in the variance of the bulk velocity, and by an increase in the ratio of parallel to perpendicular temperature. The cold driver gas usually displayed a bidirectional flow of suprathermal solar wind electrons at higher energies.
NASA Astrophysics Data System (ADS)
Tanty, Kiranbala; Mukharjee, Bibhuti Bhusan; Das, Sudhanshu Shekhar
2018-06-01
The present study investigates the effect of replacing the coarse fraction of natural aggregates with recycled concrete aggregates on the properties of hot mix asphalt (HMA) using a general factorial design approach. Two factors are considered: the recycled coarse aggregate percentage [RCA (%)] and the bitumen content percentage [BC (%)]. Tests have been carried out on bituminous concrete, an HMA type, prepared with varying RCA (%) and BC (%). Analysis of variance has been performed on the experimental data to determine the effect of the chosen factors on parameters such as stability, flow, air voids, voids in mineral aggregate, voids filled with bitumen, and bulk density. The study shows that RCA (%) and BC (%) have significant effects on the selected responses, as the p value is less than the chosen significance level. In addition, the outcomes of the statistical analysis indicate that the interaction between the factors has significant effects on the voids in mineral aggregate and the bulk density of the bituminous concrete.
Inventory of forest and rangeland and detection of forest stress
NASA Technical Reports Server (NTRS)
Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)
1973-01-01
The author has identified the following significant results. At the Atlanta site (226B) it was found that bulk color composites for October 15, 1972, and April 13, 1973, can be interpreted together to disclose the location of the perennial Kudzu vine (Pueraria lobata). Land managers concerned with Kudzu eradication could use ERTS-1 to inventory locations over 200 meters (660 feet) square. Microdensitometer data collected on ERTS-1 bulk photographic products for the Manitou test site (226C) have shown that the 15-step gray-scale tablets are not of systematically equal values corresponding to 1/14 of the maximum radiant energy incident on the MSS sensor. The gray-scale values follow a third-order polynomial function rather than a direct linear relationship. Although data collected on step tablets for precision photographic products appear more discrete, the density variation within blocks is almost as great as the variation between blocks. These system errors will cause problems when attempting to analyze radiometric variances among vegetation and land-use classes.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used. The first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which can define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be invaluable support in the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of an interpolation grid, allowed optimization of the sampling scheme, distinguishing among areas with different priority levels.
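A bare-bones sketch of spatial simulated annealing under the MMSD criterion (the grid, design size, cooling schedule, and step size are all illustrative; the MSANOS software adds constraints and the other criteria):

```python
import numpy as np

rng = np.random.default_rng(42)

def mmsd(design, grid):
    """Mean of shortest distances: expected distance from a random
    field location (grid node) to its nearest sampling point."""
    d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# Hypothetical 100 m x 100 m field discretized on a grid; 15 sample points.
grid = np.stack(np.meshgrid(np.linspace(0, 100, 21),
                            np.linspace(0, 100, 21)), -1).reshape(-1, 2)
design = rng.uniform(0, 100, (15, 2))

T = 10.0
for step in range(2000):              # spatial simulated annealing
    cand = design.copy()
    i = rng.integers(len(cand))       # perturb one sampling point
    cand[i] = np.clip(cand[i] + rng.normal(scale=5.0, size=2), 0, 100)
    delta = mmsd(cand, grid) - mmsd(design, grid)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        design = cand                 # accept improvements, and some worsenings
    T *= 0.998                        # geometric cooling law
```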
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background: Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method: Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. Intra-operator effects and portrait-related variables showed only marginal variability. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion: The inter-operator variability was primarily associated with particular landmarks, where more leniently defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks, we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias and variance, which need not be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
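For concreteness, a small sketch comparing AIC, BIC, and crude (two-part) MDL on polynomial fits to truly linear data, using the Gaussian log-likelihood; under this likelihood, crude MDL equals BIC/2, so the two rank models identically. The data and model set are illustrative, not from the paper, which uses Bayesian networks.

```python
import numpy as np

def model_scores(y, yhat, k):
    """Score a fitted model with k parameters via the Gaussian
    log-likelihood at the ML noise variance; crude two-part MDL is
    -logL + (k/2) log n, i.e., exactly BIC/2 here."""
    n = len(y)
    rss = float(np.sum((y - yhat) ** 2))
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return {"AIC": 2 * k - 2 * loglik,
            "BIC": k * np.log(n) - 2 * loglik,
            "MDL": -loglik + 0.5 * k * np.log(n)}

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=40)   # truly linear data
for degree in (1, 3, 9):          # well-, mildly over-, over-parameterized
    coef = np.polyfit(x, y, degree)
    print(degree, model_scores(y, np.polyval(coef, x), k=degree + 1))
```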
Estimation of stable boundary-layer height using variance processing of backscatter lidar data
NASA Astrophysics Data System (ADS)
Saeed, Umar; Rocadenbosch, Francesc
2017-04-01
The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and assessing the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation using variance processing of attenuated backscatter lidar measurements, its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under the TEC2015-63832-P project, and the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
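A highly simplified sketch of variance processing for SBLH estimation (the estimator in the abstract searches local minima and applies quality checks; here only the minimum of a smoothed temporal-variance profile is taken in the lower range gates, on fully synthetic data):

```python
import numpy as np

def sblh_from_variance(backscatter, heights, smooth=5):
    """Simplified variance-processing estimator: take the stable
    boundary-layer top at the minimum of the smoothed temporal-variance
    profile of attenuated backscatter, searched in the lower gates."""
    var_profile = backscatter.var(axis=0)              # variance over time
    v = np.convolve(var_profile, np.ones(smooth) / smooth, mode="same")
    lower = v[: len(v) // 2]                           # restrict the search
    return heights[np.argmin(lower)]

# Synthetic test: 120 profiles x 200 gates of 15 m; an aerosol-stratified
# layer below ~600 m and a residual layer aloft (all values hypothetical).
rng = np.random.default_rng(7)
heights = np.arange(200) * 15.0
sigma = np.full(200, 0.08)
sigma[:40], sigma[40:60], sigma[60:80] = 0.5, 0.03, 0.2
backscatter = 1.0 + rng.normal(size=(120, 200)) * sigma
print(sblh_from_variance(backscatter, heights))        # expected ~600-900 m
```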
NASA Astrophysics Data System (ADS)
Bunescu, C.; Marghitu, O.; Vogt, J.; Constantinescu, D.; Partamies, N.
2017-03-01
A substorm recovery event in the early morning sector is explored by means of ground and spacecraft data. The ground data are provided by stations of the MIRACLE network, in northern Scandinavia and Svalbard, while spacecraft data are observed by the Cluster satellites, toward the end of the recovery phase. Additional information is provided by the Fast Auroral SnapshoT (FAST) satellite, conjugate to Cluster 3 (C3). A prominent signature in the Cluster data is the low-frequency oscillations of the perturbation magnetic field, in the Pc5 range, interpreted in terms of a motion of quasi-stationary mesoscale field-aligned currents (FACs). Ground magnetic pulsations in the Ps6 range suggest that the Cluster observations are the high-altitude counterpart of drifting auroral undulations, whose features thus can be explored closely. While multiscale minimum variance analysis provides information on the planarity, orientation, and scale of the FAC structures, the conjugate data from FAST and from the ground stations can be used to resolve the azimuthal motion as well. A noteworthy feature of this event, revealed by the Cluster observations, is the apparent relaxation of the twisted magnetic flux tubes, from a sequence of 2-D current filaments to an undulated current sheet, on a timescale of about 10 min. This timescale appears to be consistent with the drift mirror instability in the inner magnetosphere, mapping to the equatorward side of the oval, or the Kelvin-Helmholtz instability related to bursty bulk flows farther downtail, mapping to the poleward side of the oval. However, more work and better event statistics are needed to confirm these tentative mechanisms as sources of Ω-like auroral undulations during late substorm recovery.
NASA Astrophysics Data System (ADS)
Thibault, N.; Jarvis, I.; Voigt, S.; Gale, A. S.; Attree, K.; Jenkyns, H. C.
2016-06-01
High-resolution records of bulk carbonate carbon isotopes have been generated for the Upper Coniacian to Lower Campanian interval of the sections at Seaford Head (southern England) and Bottaccione (central Italy). An unambiguous stratigraphic correlation is presented for the base and top of the Santonian between the Boreal and Tethyan realms. Orbital forcing of carbon and oxygen isotopes at Seaford Head points to the Boreal Santonian spanning five 405 kyr cycles (Sa1 to Sa5). Correlation of the Seaford Head time scale to that of the Niobrara Formation (Western Interior Basin) permits anchoring these records to the La2011 astronomical solution at the Santonian-Campanian (Sa/Ca) boundary, which has recently been dated to 84.19 ± 0.38 Ma. Among the five tuning options examined, option 2 places the Sa/Ca boundary at the 84.2 Ma 405 kyr insolation minimum and appears to be the most likely. This solution indicates that minima of the 405 kyr filtered output of the resistivity in the Niobrara Formation correlate to 405 kyr insolation minima in the astronomical solution and to maxima in the filtered δ13C of Seaford Head. We suggest that variance in δ13C is driven by climate forcing of the proportions of CaCO3 versus organic carbon burial on land and in oceanic basins. The astronomical calibration generates a 200 kyr mismatch of the Coniacian-Santonian boundary age between the Boreal Realm in Europe and the Western Interior, due either to diachronism of the lowest occurrence of the inoceramid Cladoceramus undulatoplicatus between the two regions or to remaining uncertainties of radiometric dating and cyclostratigraphic records.
Measuring the Power Spectrum with Peculiar Velocities
NASA Astrophysics Data System (ADS)
Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-01-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h^-1 Mpc to estimate the matter power spectrum. We compare the constraints from this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc^-1. We find an excess of power on scales of k < 0.01 h Mpc^-1, although with a 1σ uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Power spectrum estimation from peculiar velocity catalogues
NASA Astrophysics Data System (ADS)
Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-09-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h^-1 Mpc to estimate the matter power spectrum. We compare the constraints from this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc^-1. We find an excess of power on scales of k < 0.01 h Mpc^-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi
2016-03-01
This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps, (1) breast region localization and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects the two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities for each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach successfully measures the volume and quantifies the distribution of CT numbers of mammary gland regions. The experimental results demonstrate that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may easily be implemented to predict breast cancer risk, especially on already acquired scans.
Berman, D Wayne; Brorby, Gregory P; Sheehan, Patrick J; Bogen, Kenneth T; Holm, Stewart E
2012-08-01
An ongoing research effort designed to reconstruct the character of historical exposures associated with use of chrysotile-containing joint compounds naturally raised questions concerning how the character (e.g. particle size distributions) of dusts generated from use of recreated materials compares to dusts from similar materials manufactured historically. This also provided an opportunity to further explore the relative degree that the characteristics of dusts generated from a bulk material are mediated by the properties of the bulk material versus the mechanical processes applied to the bulk material by which the dust is generated. In the current study, the characteristics of dusts generated from a recreated ready mix and recreated dry mix were compared to each other, to dusts from a historical dry mix, and to dusts from the commercial chrysotile fiber (JM 7RF3) used in the recreated materials. The effect of sanding on the character of dusts generated from these materials was also explored. Dusts from the dry materials studied were generated and captured for analysis in a dust generator-elutriator. The recreated and historical joint compounds were also prepared, applied to drywall, and sanded inside sealed bags so that the particles produced from sanding could be introduced into the elutriator and captured for analysis. Comparisons of fiber size distributions in dusts from these materials suggest that dust from commercial fiber is different from dusts generated from the joint compounds, which are mixtures, and the differences persist whether the materials are sanded or not. Differences were also observed between sanded recreated ready mix and either the recreated dry mix or a historical dry mix, again whether sanded or not. In all cases, however, such differences disappeared when variances obtained from surrogate data were used to better represent the 'irreducible variation' of these materials. Even using the smaller study-specific variances, no differences were observed between the recreated dry mix and the historical dry mix, indicating that chrysotile-containing joint compounds can be recreated using historical formulations such that the characteristics of the modern material reasonably mimic those of a corresponding historical material. Similarly, no significant differences were observed between dusts from sanded and unsanded versions of similar materials, suggesting (as in previous studies) that the characteristics of asbestos-containing dusts are mediated primarily by the properties of the bulk material from which they are derived.
Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji
2017-01-01
The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of related studies focus on the influence of process parameters on the mechanical properties of the parts; the influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order of influence and the contribution of each experimental factor to the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect on the shape-recovery ratio, with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect, with a variance ratio of 4.902. The recovery temperature also exerted the most significant effect on the maximum shape-recovery rate, with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect, with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depends strongly on the recovery temperature and more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617
Genetic parameters of legendre polynomials for first parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated with Legendre polynomials up to fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records collected by regular milk recording were available for cows registered between 1990 and 1996. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of the genetic and permanent environmental variance structures were compared with bivariate estimates at 30-d intervals. A third-order or higher polynomial modeled the shape of the variance curves over days in milk (DIM) with sufficient accuracy for both the genetic and permanent environment parts. The genetic correlation structure was also fitted with sufficient accuracy by a third-order polynomial, but a fourth order was needed for the permanent environmental component. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for the permanent environment allows a simpler covariance function with a reduced number of parameters, based on the eigenvalues and eigenvectors.
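As a sketch of the covariance-function machinery in such models: with a Legendre basis φ on days in milk standardized to [-1, 1], the covariance between any two test days is φ(t1)'Kφ(t2), where K is the estimated coefficient covariance matrix. The K below is hypothetical, chosen only to be positive definite.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def covariance_surface(dim, K, deg=3):
    """Covariance function of a random-regression test-day model:
    G(t1, t2) = phi(t1)' K phi(t2), with phi a Legendre basis on days
    in milk standardized to [-1, 1] and K the (genetic or permanent
    environment) coefficient covariance matrix."""
    t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0
    phi = legvander(t, deg)            # n x (deg+1) basis matrix
    return phi @ K @ phi.T             # n x n covariance over DIM

dim = np.arange(5.0, 306.0, 30.0)      # test days 5..305
K = np.array([[4.0, 0.5, 0.1, 0.0],    # hypothetical coefficient covariance
              [0.5, 1.0, 0.2, 0.0],
              [0.1, 0.2, 0.5, 0.1],
              [0.0, 0.0, 0.1, 0.2]])
G = covariance_surface(dim, K)
corr = G / np.sqrt(np.outer(np.diag(G), np.diag(G)))   # correlations over DIM
```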
Energies of backstreaming protons in the foreshock
NASA Technical Reports Server (NTRS)
Greenstadt, E. W.
1976-01-01
A predicted pattern of energy versus detector location in the cislunar region is displayed for protons of zero pitch angle traveling upstream away from the quasi-parallel bow shock. The pattern is implied by upstream wave boundary properties. In the solar ecliptic, protons are estimated to have a minimum energy of 1.1 times the solar wind bulk energy E_SW when the wave boundary is in the early morning sector and a maximum of 8.2 E_SW when the boundary is near the predawn flank.
Solar Photovoltaic Array With Mini-Dome Fresnel Lenses
NASA Technical Reports Server (NTRS)
Piszczor, Michael F., Jr.; O'Neill, Mark J.
1994-01-01
Mini-dome Fresnel lenses concentrate sunlight onto individual photovoltaic cells. Facets of Fresnel lens designed to refract incident light at angle of minimum deviation to minimize reflective losses. Prismatic cover on surface of each cell reduces losses by redirecting incident light away from metal contacts toward bulk of semiconductor, where it is usefully absorbed. Simple design of mini-dome concentrator array easily adaptable to automated manufacturing techniques currently used by semiconductor industry. Attractive option for variety of future space missions.
1990-11-17
voltammetric response. As will be developed in this paper, the ability to observe sigmoidally shaped voltammograms requires a minimum number of solution ions... polished with 1 μm diamond paste (Buehler). Similar results were obtained using both methods of electrode construction. Precise values of the electrode... impurities in the bulk of the solution that can serve as an electrolyte, C_imp. We will assume for simplicity that all ionic impurities are 1:1
Origin of magnetite and pyrrhotite in carbonaceous chondrites
Herndon, J.M.; Rowe, M.W.; Larson, E.E.; Watson, D.E.
1975-01-01
Carbonaceous chondrites, although comprising only about 2% of known meteorites, are extremely interesting for scientific investigation. Their mineral constitution, and the correspondence between their bulk chemical composition and the solar abundance of condensable elements, indicate that minimum chemical fractionation and thermal alteration have occurred. The mineral phases observed in these primitive chondrites are sufficiently unique, with respect to other meteorite classes, to have elicited considerable speculation about the physical environment in which they formed [1-7]. © 1975 Nature Publishing Group.
Optoelectronic Workshops. 11. Superlattice Disordering
1988-12-07
modulator ref.: Wood, JLWT §9, p. 743 (6/88). Δα_MQW = 50 × Δα_GaAs bulk (comparable to LiNbO3); residual absorption for a typical device = 2 dB; typical device... band edge due to unacceptable absorption losses. Far from the band edge, 10^-4 < Δn < 10^-3 with minimum chirp. The dominant electro-optic effect is quadratic... Bratton for technical support and K.J. Mackey for many helpful discussions. T.D. Golding would like to thank Prof. M. Pepper for assistance, and
Quantifying Hydrogen Bond Cooperativity in Water: VRT Spectroscopy of the Water Tetramer
NASA Astrophysics Data System (ADS)
Cruzan, J. D.; Braly, L. B.; Liu, Kun; Brown, M. G.; Loeser, J. G.; Saykally, R. J.
1996-01-01
Measurement of the far-infrared vibration-rotation tunneling spectrum of the perdeuterated water tetramer is described. Precisely determined rotational constants and relative intensity measurements indicate a cyclic quasi-planar minimum energy structure, which is in agreement with recent ab initio calculations. The O-O separation deduced from the data indicates a rapid exponential convergence to the ordered bulk value with increasing cluster size. Observed quantum tunneling splittings are interpreted in terms of hydrogen bond rearrangements connecting two degenerate structures.
Electro-optical SLS devices for operating at new wavelength ranges
Osbourn, Gordon C.
1986-01-01
An intrinsic semiconductor electro-optical device includes a p-n junction intrinsically responsive, when cooled, to electromagnetic radiation in the wavelength range of 8-12 μm. The junction consists of a strained-layer superlattice of alternating layers of two different III-V semiconductors having mismatched lattice constants when in bulk form. A first set of layers is either InAs_{1-x}Sb_x (where x is about 0.5 to 0.7) or In_{1-x}Ga_xAs_{1-y}Sb_y (where x and y are chosen such that the bulk bandgap of the resulting layer is about the same as the minimum bandgap in the In_{1-x}Ga_xAs_{1-y}Sb_y family). The second set of layers has a lattice constant larger than that of the layers in the first set.
NASA Astrophysics Data System (ADS)
Wilczek, Sebastian; Trieschmann, Jan; Schulze, Julian; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Derzsi, Aranka; Korolov, Ihor; Donkó, Zoltan
2013-09-01
The electron heating in capacitive discharges at very low pressures (~1 Pa) is dominated by stochastic heating. In this regime electrons are accelerated by the oscillating sheaths, traverse the plasma bulk, and interact with the opposite sheath. By varying the driving frequency or the gap size of the discharge, energetic electrons reach the sheath edge at different temporal phases, i.e., the collapsing or expanding phase, or the moment of minimum sheath width. This work reports numerical experiments based on Particle-In-Cell simulations which show that at certain frequencies the discharge switches abruptly from a low-density mode into a high-density mode. The inverse transition is also abrupt, but shows a significant hysteresis. This behavior is explained by the complex interaction of the bulk and the sheath. This work is supported by the German Research Foundation in the frame of TRR 87.
Physical properties and depth of cure of a new short fiber reinforced composite.
Garoushi, Sufyan; Säilynoja, Eija; Vallittu, Pekka K; Lassila, Lippo
2013-08-01
To determine the physical properties and curing depth of a new short fiber composite intended for large posterior restorations (everX Posterior) in comparison to different commercial posterior composites (Alert, Tetric EvoCeram Bulk Fill, Voco X-tra base, SDR, Venus Bulk Fill, SonicFill, Filtek Bulk Fill, Filtek Supreme, and Filtek Z250). In addition, the length of the fiber fillers of the composite XENIUS base was measured and compared to that of the previously introduced composite Alert. The following properties were examined according to ISO standard 4049: flexural strength, flexural modulus, fracture toughness, polymerization shrinkage and depth of cure. The mean and standard deviation were determined and all results were statistically analyzed with analysis of variance (ANOVA) (α=0.05). The XENIUS base composite exhibited the highest fracture toughness (4.6 MPa·m^1/2) and flexural strength (124.3 MPa) values and the lowest shrinkage strain (0.17%) among the materials tested. The Alert composite revealed the highest flexural modulus value (9.9 GPa), which was not significantly different from that of the XENIUS base composite (9.5 GPa). The depth of cure of XENIUS base (4.6 mm) was similar to that of the bulk-fill composites and higher than that of the other hybrid composites. The length of the fiber fillers in XENIUS base (1.3-2 mm) was greater than in Alert (20-60 μm). The new short fiber composite differed significantly in its physical properties from the other materials tested, suggesting that it could be used in high-stress-bearing areas. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Isufi, Almira; Plotino, Gianluca; Grande, Nicola Maria; Ioppolo, Pietro; Testarelli, Luca; Bedini, Rossella; Al-Sudani, Dina; Gambarini, Gianluca
2016-01-01
To determine and compare the fracture resistance of endodontically treated teeth restored with a bulk fill flowable material (SDR) and a traditional resin composite. Thirty maxillary and 30 mandibular first molars were selected based on similar dimensions. After cleaning, shaping and filling of the root canals and adhesive procedures, specimens were assigned to 3 subgroups for each tooth type (n=10): Group A: control group, including intact teeth; Group B: access cavities were restored with a traditional resin composite (EsthetX; Dentsply-Italy, Rome, Italy); Group C: access cavities were restored with a bulk fill flowable composite (SDR; Dentsply-Italy), except a 1.5 mm layer of the occlusal surface that was restored with the same resin composite as Group B. The specimens were subjected to compressive force in a static material-testing machine until fracture occurred, the maximum fracture load of the specimens was measured (in N), and the type of fracture was recorded as favorable or unfavorable. Data were statistically analyzed with one-way analysis of variance (ANOVA) and Bonferroni tests (P<0.05). No statistically significant differences were found among groups (P<0.05). Fracture resistance of endodontically treated teeth restored with a traditional resin composite and with a bulk fill flowable composite (SDR) was similar in both maxillary and mandibular molars and showed no significant decrease compared to intact specimens. No significant difference was observed in the mechanical fracture resistance of endodontically treated molars restored with traditional resin composite restorations compared to bulk fill flowable composite restorations.
NASA Astrophysics Data System (ADS)
Crapo, Alan D.; Lloyd, Jerry D.
1991-03-01
Two motors have been designed and built for use with high-temperature superconductor (HTSC) materials. They are a homopolar dc motor that uses HTSC field windings and a brushless dc motor that uses bulk HTSC materials to trap flux in steel rotor poles. The HTSC field windings of the homopolar dc motor are designed to operate at 1000 A/sq cm in a 0.010-T field. In order to maximize torque in the homopolar dc motor, an iron magnetic circuit with small air gaps gives maximum flux for minimum ampere-turns in the field. A copper field winding version of the homopolar dc motor has been tested while waiting for 575 ampere-turn HTSC coils. The trapped flux brushless dc motor has been built and is ready to test melt textured bulk HTSC rings that are currently being prepared. The stator of the trapped flux motor will impress a magnetic field in the steel rotor poles with warm HTSC bulk rings. The rings are then cooled to 77 K to trap the flux in the rotor. The motor can then operate as a brushless dc motor.
Nematic superconductivity in CuxBi2Se3: Surface Andreev bound states
NASA Astrophysics Data System (ADS)
Hao, Lei; Ting, C. S.
2017-10-01
We study theoretically the topological surface states (TSSs) and the possible surface Andreev bound states (SABSs) of CuxBi2Se3, which is known to be a topological insulator at x=0. The superconductivity (SC) pairing of this compound is assumed to have broken spin-rotation symmetry, similar to that of the A-phase of 3He, as suggested by recent nuclear magnetic resonance experiments. For both spheroidal and corrugated cylindrical Fermi surfaces with the hexagonal warping terms, we show that the bulk SC gap is rather anisotropic; the minimum of the gap is negligibly small as compared to the maximum of the gap. This would make the fully gapped pairing effectively nodal. For a clean system, our results indicate the bulk of this compound to be a topological superconductor with the SABSs appearing inside the bulk SC gap. The zero-energy SABSs, which are Majorana fermions, together with the TSSs not gapped by the pairing, produce a zero-energy peak in the surface density of states (SDOS). The SABSs are expected to be stable against short-range nonmagnetic impurities, and the local SDOS is calculated around a nonmagnetic impurity. The relevance of our results to experiments is discussed.
VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)
NASA Astrophysics Data System (ADS)
Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.
2013-11-01
We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data was analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and was cross-calibrated against ISO fluxes. (2 data files).
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
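A minimal Monte Carlo sketch of this kind of comparison is shown below; the true intensity, the gamma prior's shape and rate, and the sample size are invented for illustration and are not taken from the paper. Under a Gamma(alpha, beta) prior (shape-rate parameterization, assumed here), the Bayes estimator is the posterior mean (alpha + sum of counts)/(beta + n), while the minimum variance unbiased estimator of the Poisson intensity is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)

lam_true = 2.0            # true Poisson intensity (invented for the example)
n, trials = 10, 100_000   # sample size per experiment, Monte Carlo repetitions
alpha, beta = 2.0, 1.0    # hypothetical Gamma(shape, rate) prior

x = rng.poisson(lam_true, size=(trials, n))
S = x.sum(axis=1)

mvue = S / n                      # sample mean: the MVUE of the intensity
bayes = (alpha + S) / (beta + n)  # posterior mean under the gamma prior

print("empirical MSE, MVUE :", np.mean((mvue - lam_true) ** 2))
print("empirical MSE, Bayes:", np.mean((bayes - lam_true) ** 2))
```

When the prior mean (alpha/beta = 2 here) sits close to the true intensity, the shrinkage of the Bayes estimator buys a visibly smaller empirical mean-squared error, which is the qualitative pattern the abstract reports.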
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
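The bias/variance bookkeeping described here can be reproduced on a toy linear inverse problem; the sketch below uses Tikhonov regularization as a stand-in for the diffusion-model reconstruction (the forward matrix, noise level, and regularization values are all invented) and estimates image bias and variance from 100 repeated reconstructions with fresh noise, in the spirit of the study's MSE decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)

m, p = 40, 20                          # measurements, image unknowns (toy sizes)
A = rng.normal(size=(m, p))            # stand-in for the diffusion forward model
x_true = rng.normal(size=p)
noise_sigma = 0.5

def reconstruct(y, lam):
    # Tikhonov-regularized least squares: (A^T A + lam I)^{-1} A^T y
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

for lam in (0.01, 1.0, 100.0):
    xs = np.array([reconstruct(A @ x_true + noise_sigma * rng.normal(size=m), lam)
                   for _ in range(100)])     # 100 repeated noisy reconstructions
    bias2 = np.sum((xs.mean(axis=0) - x_true) ** 2)
    var = np.sum(xs.var(axis=0))
    print(f"lam={lam:7.2f}  bias^2={bias2:8.3f}  variance={var:8.3f}  MSE={bias2 + var:8.3f}")
```

Large regularization inflates the bias term and small regularization inflates the variance term, mirroring the trade-off the abstract describes.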
Vicarious resilience in sexual assault and domestic violence advocates.
Frey, Lisa L; Beesley, Denise; Abbott, Deah; Kendrick, Elizabeth
2017-01-01
There is little research related to sexual assault and domestic violence advocates' experiences, with the bulk of the literature focused on stressors and systemic barriers that negatively impact efforts to assist survivors. However, advocates participating in these studies have also emphasized the positive impact they experience consequent to their work. This study explores the positive impact. Vicarious resilience, personal trauma experiences, peer relational quality, and perceived organizational support in advocates (n = 222) are examined. Also, overlap among the conceptual components of vicarious resilience is explored. The first set of multiple regressions showed that personal trauma experiences and peer relational health predicted compassion satisfaction and vicarious posttraumatic growth, with organizational support predicting only compassion satisfaction. The second set of multiple regressions showed that (a) there was significant shared variance between vicarious posttraumatic growth and compassion satisfaction; (b) after accounting for vicarious posttraumatic growth, organizational support accounted for significant variance in compassion satisfaction; and (c) after accounting for compassion satisfaction, peer relational health accounted for significant variance in vicarious posttraumatic growth. Results suggest that it may be more meaningful to conceptualize advocates' personal growth related to their work through the lens of a multidimensional construct such as vicarious resilience. Organizational strategies promoting vicarious resilience (e.g., shared organizational power, training components) are offered, and the value to trauma-informed care of fostering advocates' vicarious resilience is discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km^2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
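The regression side of such a workflow reduces to an ordinary least-squares fit in log space; below is a sketch on synthetic catchments (all predictor distributions and coefficients are invented, and the real models use more covariates plus a calibration/validation split).

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4000                                   # roughly the number of gauged catchments
log_area = rng.uniform(0, 6, n)            # log10 area, spanning 10^0..10^6 km^2
log_precip = rng.normal(3.0, 0.3, n)       # log10 mean annual precipitation
temp = rng.normal(10.0, 8.0, n)            # mean annual air temperature, deg C

# Assumed generating relation, used only to create example data:
log_af = (-3.0 + 1.0 * log_area + 1.2 * log_precip - 0.01 * temp
          + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), log_area, log_precip, temp])
coef, *_ = np.linalg.lstsq(X, log_af, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((log_af - pred) ** 2) / np.sum((log_af - log_af.mean()) ** 2)
print("fitted coefficients:", coef.round(3), "  variance explained R^2:", round(r2, 3))
```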
Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A
2011-09-01
The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010
NASA Astrophysics Data System (ADS)
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled according to the following equation (1): log(r2ij) = b·dij (1), where log(r2ij) is the common variance between the target (i) and neighbouring (j) series, dij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain, and converted to a regular 10 km2 grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) on average does not exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main driving factors causing higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than the diurnal one. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than Tmax, and thus a higher network density would be necessary to capture the higher spatial variability highlighted for minimum temperature with respect to maximum temperature. A conservative distance for reference series could be evaluated at 200 km, which we propose for the conterminous land of Spain and use in the development of MOTEDAS.
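Equation (1) and the threshold-distance logic lend themselves to a compact sketch. The function below (a simplification, not the MOTEDAS code) computes inter-station common variance from monthly anomalies, fits log(r2) = b·d through the origin over pairs within the starting radius, and returns the distance at which the common variance decays to the chosen threshold.

```python
import numpy as np

def cdd_threshold(anomalies, coords_km, r2_target=0.5,
                  max_radius_km=50.0, min_pairs=5):
    """anomalies: (n_months, n_stations); coords_km: (n_stations, 2).
    Fits log(r2_ij) = b * d_ij and returns the distance where r2 = r2_target.
    A minimal sketch of the poster's equation (1)."""
    n = anomalies.shape[1]
    d, logr2 = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.hypot(*(coords_km[i] - coords_km[j]))
            if dist > max_radius_km:
                continue
            r = np.corrcoef(anomalies[:, i], anomalies[:, j])[0, 1]
            if r > 0:                      # guard against log of zero/negative
                d.append(dist)
                logr2.append(np.log(r ** 2))
    if len(d) < min_pairs:
        return np.nan                      # not enough neighbours within the radius
    d = np.asarray(d)
    b = np.sum(d * np.asarray(logr2)) / np.sum(d * d)   # slope through the origin
    return np.log(r2_target) / b           # distance at which r2 falls to target
```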
NASA Astrophysics Data System (ADS)
Lark, R. M.; Rawlins, B. G.; Lark, T. A.
2014-05-01
The LUCAS Topsoil survey is a pan-European Union initiative in which soil data were collected according to standard protocols from 19 967 sites. Any inference about soil variables is subject to uncertainty due to different sources of variability in the data. In this study we examine the likely magnitude of uncertainty due to the field-sampling protocol. The published sampling protocol (LUCAS, 2009) describes a procedure to form a composite soil sample from aliquots collected to a depth of between approximately 15-20 cm. A v-shaped hole to the target depth is cut with a spade, then a slice is cut from one of the exposed surfaces. This methodology gives rather less control of the sampling depth than protocols used in other soil and geochemical surveys; this may be a substantial source of variation in uncultivated soils with strong contrasts between an organic-rich A-horizon and an underlying B-horizon. We extracted all representative profile descriptions from soil series recorded in the memoir of the 1:250 000-scale map of Northern England (Soil Survey of England and Wales, 1984) where the base of the A-horizon is less than 20 cm below the surface. The Soil Associations in which these 14 series are significant members cover approximately 17% of the area of Northern England, and are expected to be the mineral soils with the largest organic content. Soil Organic Carbon (SOC) content and bulk density were extracted for the A- and B-horizons, along with the thickness of the horizons; bulk densities were taken from recorded values or, where absent, predicted with a pedotransfer function. For any proposed angle of the v-shaped hole, the proportions of A- and B-horizon in the resulting sample may be computed by trigonometry. From the bulk density and SOC concentration of the horizons, the SOC concentration of the sample can be computed. For each Soil Series we drew 1000 random samples from a trapezoidal distribution of angles, with uniform density over the range corresponding to depths 15-20 cm and zero density for angles corresponding to depths larger than 21 cm or less than 14 cm. We computed the corresponding variance of sample SOC contents. We found that the variance in SOC determinations attributable to variation in sample depth for these uncultivated soils was of the same order of magnitude as the estimate of the subsampling + analytical variance component (both on a log scale) that we previously computed for soils in the UK (Rawlins et al., 2009). It seems unnecessary to accept this source of uncertainty, given the effort undertaken to reduce the analytical variation, which is no larger (and often smaller) than this variation due to the field protocol. If pan-European soil monitoring is to be based on the LUCAS Topsoil survey, as suggested by an initial report, uncertainty could be reduced if the sampling depth were specified as a unique depth, rather than the current depth range. LUCAS. 2009. Instructions for Surveyors. Technical reference document C-1: General implementation, Land Cover and Use, Water management, Soil, Transect, Photos. European Commission, Eurostat. Rawlins, B.G., Scheib, A.J., Lark, R.M. & Lister, T.R. 2009. Sampling and analytical plus subsampling variance components for five soil indicators observed at regional scale. European Journal of Soil Science 60, 740-747.
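The trigonometric argument is easy to make concrete: for a V-shaped hole of depth z cut at a fixed angle, the cross-sectional area scales with z^2, so the B-horizon fraction of the sample is ((z - zA)/z)^2 when the A-horizon is zA thick. The Monte Carlo below follows that logic with a trapezoidal depth distribution; the horizon properties are invented placeholders, not values from the soil memoir.

```python
import numpy as np

rng = np.random.default_rng(3)

zA = 12.0                   # A-horizon thickness, cm (hypothetical series)
soc_A, soc_B = 8.0, 1.0     # SOC content, % (hypothetical)
bd_A, bd_B = 0.9, 1.4       # bulk density, g/cm^3 (hypothetical)

def sample_depths(n):
    """Trapezoidal depth distribution: uniform on 15-20 cm, ramping to zero
    at 14 and 21 cm, sampled by rejection."""
    out = []
    while len(out) < n:
        z = rng.uniform(14, 21)
        if rng.uniform() < np.interp(z, [14, 15, 20, 21], [0, 1, 1, 0]):
            out.append(z)
    return np.array(out)

z = sample_depths(1000)
frac_B = (np.clip(z - zA, 0, None) / z) ** 2   # B-horizon area fraction of the V
mass_A = (1 - frac_B) * bd_A                   # mass weights SOC by bulk density
mass_B = frac_B * bd_B
soc = (mass_A * soc_A + mass_B * soc_B) / (mass_A + mass_B)

print("mean sample SOC (%):", soc.mean().round(2),
      "  variance of log SOC:", np.log(soc).var().round(4))
```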
Torsional shear flow of granular materials: shear localization and minimum energy principle
NASA Astrophysics Data System (ADS)
Artoni, Riccardo; Richard, Patrick
2018-01-01
The rheological properties of granular matter submitted to torsional shear are investigated numerically by means of the discrete element method. The shear cell is made of a cylinder filled with grains, which are sheared by a bumpy bottom and subjected to a vertical pressure applied at the top. Regimes differing by their strain localization features are observed. They originate from the competition between dissipation at the sidewalls and dissipation in the bulk of the system. The effects of (i) the applied pressure, (ii) sidewall friction, and (iii) angular velocity are investigated. A model based on the purely local μ(I)-rheology and a minimum energy principle is able to capture the effect of the two former quantities but unable to account for the effect of the latter. Although an ad hoc modification of the model allows all the numerical results to be reproduced, our results point out the need for an alternative rheology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosaki, Y., E-mail: yosuke.kurosaki.uy@hitachi.com; Yabuuchi, S.; Nishide, A.
We report a lowered lattice thermal conductivity in nm-scale MnSi1.7/Si multilayers which were fabricated by controlling thermal diffusions of Mn and Si atoms. The thickness of the constituent layers is 1.5-5.0 nm, which is comparable to the phonon mean free path of both MnSi1.7 and Si. By applying the above nanostructures, we reduced the lattice thermal conductivity down to half that of bulk MnSi1.7/Si composite materials. The obtained value of 1.0 W/(m K) is the experimentally observed minimum in MnSi1.7-based materials without any heavy-element doping and is close to the minimum thermal conductivity. We attribute the reduced lattice thermal conductivity to phonon scattering at the MnSi1.7/Si interfaces in the multilayers.
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty less than 0.5 dB, in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed
Balk, B.; Elder, K.; Baron, Jill S.
1998-01-01
Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates with minimum variance. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
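Ordinary kriging itself is compact enough to sketch: solve the semivariogram system with a Lagrange multiplier for unbiasedness, and the same solve yields the minimum estimation variance. The variogram parameters and data below are placeholders, not fitted values from the Loch Vale survey.

```python
import numpy as np

def spherical(h, a=500.0, c=1.0):
    """Spherical semivariogram with range a (m) and sill c (assumed values)."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, c * (1.5 * h / a - 0.5 * (h / a) ** 3), c)

def ordinary_kriging(coords, values, target, variogram=spherical):
    """Unbiased, minimum-variance estimate at `target` from scattered data."""
    n = len(values)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = variogram(h)                 # semivariances between data points
    G[n, n] = 0.0                            # Lagrange-multiplier row/column
    rhs = np.ones(n + 1)
    rhs[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(G, rhs)
    w, mu = sol[:n], sol[n]
    return w @ values, rhs[:n] @ w + mu      # estimate, kriging variance

coords = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 300.0]])  # station positions, m
depths = np.array([1.2, 0.8, 1.0])                           # snow depths, m (invented)
est, kvar = ordinary_kriging(coords, depths, np.array([150.0, 150.0]))
print("kriged depth:", round(est, 3), " kriging variance:", round(kvar, 3))
```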
Robertson, David S; Prevost, A Toby; Bowden, Jack
2016-09-30
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Additive-Multiplicative Approximation of Genotype-Environment Interaction
Gimelfarb, A.
1994-01-01
A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
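The parabola claim follows in one line from the stated model; the notation below is assumed for illustration (the abstract does not fix symbols), with G and E the additive genotypic and environmental effects and k the weight of the multiplicative term.

```latex
\[
  P \;=\; \mu + G + E + kGE \;=\; \mu + G + (1 + kG)\,E ,
\]
\[
  \operatorname{Var}(P \mid G) \;=\; (1 + kG)^{2}\,\sigma_E^{2} .
\]
```

This is a convex parabola in the genotypic value with its minimum at G = -1/k, so broad-sense heritability depends on where the population's genotypic mean sits relative to that minimum, as the abstract states.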
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-09
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
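As context for what the AMLE adjusts, the sketch below implements the plain Tobit-style censored log-normal MLE, the baseline estimator whose bias the AMLE reduces; it assumes a single detection limit and does not implement the AMLE correction itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(conc, detect_limit):
    """MLE of (mu, sigma) for log concentrations left-censored at a single
    detection limit; censored observations contribute log Phi terms."""
    censored = conc < detect_limit
    y = np.log(np.maximum(conc, detect_limit))
    L = np.log(detect_limit)

    def negloglik(theta):
        mu, log_sigma = theta
        s = np.exp(log_sigma)
        ll = norm.logpdf(y[~censored], mu, s).sum()        # detected values
        ll += censored.sum() * norm.logcdf((L - mu) / s)   # censored values
        return -ll

    res = minimize(negloglik, x0=[y.mean(), np.log(y.std() + 1e-6)])
    return res.x[0], float(np.exp(res.x[1]))               # mu, sigma

rng = np.random.default_rng(4)
conc = rng.lognormal(mean=0.0, sigma=1.0, size=200)        # synthetic concentrations
print(censored_lognormal_mle(conc, detect_limit=0.5))      # should land near (0, 1)
```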
Hohn, M. Ed; Nuhfer, E.B.; Vinopal, R.J.; Klanderman, D.S.
1980-01-01
Classifying very fine-grained rocks through fabric elements provides information about depositional environments, but is subject to the biases of visual taxonomy. To evaluate the statistical significance of an empirical classification of very fine-grained rocks, samples from Devonian shales in four cored wells in West Virginia and Virginia were measured for 15 variables: quartz, illite, pyrite and expandable clays determined by X-ray diffraction; total sulfur, organic content, inorganic carbon, matrix density, bulk density, porosity, silt, as well as density, sonic travel time, resistivity, and gamma-ray response measured from well logs. The four lithologic types comprised: (1) sharply banded shale, (2) thinly laminated shale, (3) lenticularly laminated shale, and (4) nonbanded shale. Univariate and multivariate analyses of variance showed that the lithologic classification reflects significant differences for the variables measured, differences that can be detected independently of stratigraphic effects. Little-known statistical methods found useful in this work included: the multivariate analysis of variance with more than one effect, simultaneous plotting of samples and variables on canonical variates, and the use of parametric ANOVA and MANOVA on ranked data. © 1980 Plenum Publishing Corporation.
Shibasaki, S; Takamizawa, T; Nojiri, K; Imai, A; Tsujimoto, A; Endo, H; Suzuki, S; Suda, S; Barkmeier, W W; Latta, M A; Miyazaki, M
The present study determined the mechanical properties and volumetric polymerization shrinkage of different categories of resin composite. Three high viscosity bulk fill resin composites were tested: Tetric EvoCeram Bulk Fill (TB, Ivoclar Vivadent), Filtek Bulk Fill posterior restorative (FB, 3M ESPE), and Sonic Fill (SF, Kerr Corp). Two low-shrinkage resin composites, Kalore (KL, GC Corp) and Filtek LS Posterior (LS, 3M ESPE), were used. Three conventional resin composites, Herculite Ultra (HU, Kerr Corp), Estelite Σ Quick (EQ, Tokuyama Dental), and Filtek Supreme Ultra (SU, 3M ESPE), were used as comparison materials. Following ISO Specification 4049, six specimens for each resin composite were used to determine flexural strength, elastic modulus, and resilience. Volumetric polymerization shrinkage was determined using a water-filled dilatometer. Data were evaluated using analysis of variance followed by Tukey's honestly significant difference test (α=0.05). The flexural strength of the resin composites ranged from 115.4 to 148.1 MPa, the elastic modulus ranged from 5.6 to 13.4 GPa, and the resilience ranged from 0.70 to 1.0 MJ/m3. There were significant differences in flexural properties between the materials but no clear outliers. Volumetric changes as a function of time over a duration of 180 seconds depended on the type of resin composite. However, for all the resin composites, apart from LS, volumetric shrinkage began soon after the start of light irradiation, and a rapid decrease in volume during light irradiation followed by a slower decrease was observed. The low shrinkage resin composites KL and LS showed significantly lower volumetric shrinkage than the other tested materials at the measuring point of 180 seconds. In contrast, the three bulk fill resin composites showed higher volumetric change than the other resin composites. The findings from this study provide clinicians with valuable information regarding the mechanical properties and polymerization kinetics of these categories of current resin composite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaetsu, I.; Ito, A.; Hayashi, K.
1973-06-01
The effect of homogeneity of polymerization phase and monomer concentration on the temperature dependence of initial polymerization rate was studied in the radiation-induced radical polymerization of binary systems consisting of glass-forming monomer and solvent. In the polymerization of a completely homogeneous system such as HEMA-propylene glycol, a maximum and a minimum in polymerization rates as a function of temperature, characteristic of the polymerization in glass-forming systems, were observed for all monomer concentrations. However, in the heterogeneous polymerization systems such as HEMA-triacetin and HEMA-isoamyl acetate, maximum and minimum rates were observed in monomer-rich compositions but not at low monomer concentrations. Furthermore, in the HEMA-dioctyl phthalate polymerization system, which is extremely heterogeneous, no maximum and minimum rates were observed at any monomer concentration. The effect of conversion on the temperature dependence of polymerization rate in homogeneous bulk polymerization of HEMA and GMA was investigated. Maximum and minimum rates were observed clearly in conversions less than 10% in the case of HEMA and less than 50% in the case of GMA, but the maximum and minimum changed to a mere inflection in the curve at higher conversions. A similar effect of polymer concentration on the temperature dependence of polymerization rate in the GMA-poly(methyl methacrylate) system was also observed. It is deduced that the change in temperature dependence of polymerization rate is attributed to the decrease in contribution of mutual termination reaction of growing chain radicals to the polymerization rate. (auth)
Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T
2013-12-11
The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct polynomial fitting orders for the fixed (2-5), random genetic (1-7), and permanent environmental (1-7) curves, and a number of classes for residual variance (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. A random regression model using the best Legendre orthogonal polynomial for genetic evaluation of milk yield on the test day of Alpine goats considered a fixed curve of order 4, a curve of genetic additive effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those that were equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components in relation to the production peak and persistence. It is very important that the evaluation utilizes the best combination of fixed, genetic additive and permanent environmental regressions, and number of classes of heterogeneous residual variance for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the estimates of parameters and prediction of genetic values.
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
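The core MDevD computation can be sketched for a 2D categorical case: for one conditioning data event (a set of node offsets and facies values), scan every placement in the training image and keep the smallest mismatch proportion. Aggregating the mean and variance of these minima over all conditioning data events gives the ranking statistic the paper proposes. This brute-force sketch is a simplification, not the paper's C++ implementation.

```python
import numpy as np

def min_data_event_distance(ti, offsets, values):
    """ti: 2D integer array (categorical training image);
    offsets: (k, 2) integer (row, col) offsets defining the data event;
    values: (k,) facies codes observed at those offsets.
    Returns the minimum fraction of mismatched nodes over all placements."""
    rows, cols = ti.shape
    off = np.asarray(offsets)
    r_lo, r_hi = max(-off[:, 0].min(), 0), rows - max(off[:, 0].max(), 0)
    c_lo, c_hi = max(-off[:, 1].min(), 0), cols - max(off[:, 1].max(), 0)
    best = 1.0
    for r in range(r_lo, r_hi):
        for c in range(c_lo, c_hi):
            mismatch = np.mean(ti[r + off[:, 0], c + off[:, 1]] != values)
            if mismatch < best:
                best = mismatch
                if best == 0.0:
                    return 0.0          # a perfect replicate exists in the TI
    return best
```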
Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping
2016-02-01
Previous simulation works concerned with the mechanism of non-invasive neuromodulation have isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors in realistic neurons is still lacking. To give a comprehensive investigation of stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variances were also observed within layers; reducing pulse length could magnify the threshold values as well as the variance; tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability might represent the result of the competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation intimately resemble recordings in physiological and clinical studies, which seems to suggest that, with minimum simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluation of cell activities in response to the externally applied field.
Spatial variability of shelf sediments in the STRATAFORM natural laboratory, Northern California
Goff, J.A.; Wheatcroft, R.A.; Lee, H.; Drake, D.E.; Swift, D.J.P.; Fan, S.
2002-01-01
The "Correlation Length Experiment", an intensive box coring effort on the Eel River shelf (Northern California) in the summer of 1997, endeavored to characterize the lateral variability of near-surface shelf sediments over scales of meters to kilometers. Coring focused on two sites, K60 and S60, separated by ??? 15 km along the 60 m isobath. The sites are near the sand-to-mud transition, although K60 is sandier owing to its proximity to the Eel River mouth. Nearly 140 cores were collected on dip and strike lines with core intervals from < 10m to 1 km. Measurements on each core included bulk density computed from gamma-ray attenuation, porosity converted from resistivity measurements, and surficial grain size. Grain size was also measured over the full depth range within a select subset of cores. X-radiograph images were also examined. Semi-variograms were computed for strike, dip, and down-hole directions at each site. The sand-to-mud transition exerts a strong influence on all measurements: on average, bulk density increases and porosity decreases with regional increases in mean grain size. Analysis of bulk density measurements indicates very strong contrasts in the sediment variability at K60 and S60. No coherent bedding is seen at K60; in the strike direction, horizontal variability is "white" (fully uncorrelated) from the smallest scales examined (a few meters) to the largest (8 km), with a variance equal to that seen within the cores. In contrast, coherent bedding exists at S60 related to the preservation of the 1995 flood deposit. A correlatable structure is found in the strike direction with a decorrelation distance of ??? 800 m, and can be related to long-wavelength undulations in the topography and/or thickness of the flood layer or overburden. We hypothesize that the high degree of bulk density variability at K60 is a result of more intense physical reworking of the seabed in the sandier environment. Without significant averaging, the resistivity-based porosity measurements are only marginally correlated to gamma-ray-bulk density measurements, and are largely independent of mean grain size. Furthermore, porosity displays a high degree of incoherent variability at both sites. Porosity, with a much smaller sample volume than bulk density, may therefore resolve small-scale biogenic variability which is filtered out in the bulk density measurement. ?? 2002 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Longford, Francis G. J.; Essex, Jonathan W.; Skylaris, Chris-Kriton; Frey, Jeremy G.
2018-06-01
We present an unexpected finite size effect affecting interfacial molecular simulations that is proportional to the width-to-surface-area ratio of the bulk phase, L_l/A. This finite size effect has a significant impact on the variance of surface tension values calculated using the virial summation method. A theoretical derivation of the origin of the effect is proposed, giving a new insight into the importance of optimising system dimensions in interfacial simulations. We demonstrate the consequences of this finite size effect via a new way to estimate the surface energetic and entropic properties of simulated air-liquid interfaces. Our method is based on macroscopic thermodynamic theory and involves comparing the internal energies of systems with varying dimensions. We present the testing of these methods using simulations of the TIP4P/2005 water forcefield and a Lennard-Jones fluid model of argon. Finally, we provide suggestions of additional situations, in which this finite size effect is expected to be significant, as well as possible ways to avoid its impact.
Galactic heavy-ion shielding using electrostatic fields
NASA Technical Reports Server (NTRS)
Townsend, L. W.
1984-01-01
The shielding of spacecraft against galactic heavy ions, particularly high-energy Fe(56) nuclei, by electrostatic fields is analyzed for an arrangement of spherical concentric shells. Vacuum breakdown considerations are found to limit the minimum radii of the spheres to over 100 m. This limitation makes it impractical to use the fields for shielding small spacecraft. The voltages necessary to repel these Fe(56) nuclei exceed present electrostatic generating capabilities by over 2 orders of magnitude and render the concept useless as an alternative to traditional bulk-material shielding methods.
Mittelstaedt, Daniel
2015-01-01
Objective A quantitative contrast-enhanced micro–computed tomography (qCECT) method was developed to investigate the depth dependency and heterogeneity of the glycosaminoglycan (GAG) concentration of ex vivo cartilage equilibrated with an anionic radiographic contrast agent, Hexabrix. Design Full-thickness fresh native (n = 19 in 3 subgroups) and trypsin-degraded (n = 6) articular cartilage blocks were imaged using micro–computed tomography (μCT) at high resolution (13.4 μm3) before and after equilibration with various Hexabrix bathing concentrations. The GAG concentration was calculated depth-dependently based on Gibbs-Donnan equilibrium theory. Analysis of variance with Tukey’s post hoc was used to test for statistical significance (P < 0.05) for effect of Hexabrix bathing concentration, and for differences in bulk and zonal GAG concentrations individually and compared between native and trypsin-degraded cartilage. Results The bulk GAG concentration was calculated to be 74.44 ± 6.09 and 11.99 ± 4.24 mg/mL for native and degraded cartilage, respectively. A statistical difference was demonstrated for bulk and zonal GAG between native and degraded cartilage (P < 0.032). A statistical difference was not demonstrated for bulk GAG when comparing Hexabrix bathing concentrations (P > 0.3214) for neither native nor degraded cartilage. Depth-dependent GAG analysis of native cartilage revealed a statistical difference only in the radial zone between 30% and 50% Hexabrix bathing concentrations. Conclusions This nondestructive qCECT methodology calculated the depth-dependent GAG concentration for both native and trypsin-degraded cartilage at high spatial resolution. qCECT allows for more detailed understanding of the topography and depth dependency, which could help diagnose health, degradation, and repair of native and contrived cartilage. PMID:26425259
Schottky-contact plasmonic dipole rectenna concept for biosensing.
Alavirad, Mohammad; Mousavi, Saba Siadat; Roy, Langis; Berini, Pierre
2013-02-25
Nanoantennas are key optical components for several applications including photodetection and biosensing. Here we present an array of metal nano-dipoles supporting surface plasmon polaritons (SPPs) integrated into a silicon-based Schottky-contact photodetector. Incident photons coupled to the array excite SPPs on the Au nanowires of the antennas which decay by creating "hot" carriers in the metal. The hot carriers may then be injected over the potential barrier at the Au-Si interface resulting in a photocurrent. High responsivities of 100 mA/W and practical minimum detectable powers of -12 dBm should be achievable in the infra-red (1310 nm). The device was then investigated for use as a biosensor by computing its bulk and surface sensitivities. Sensitivities of ∼ 250 nm/RIU (bulk) and ∼ 8 nm/nm (surface) in water are predicted. We identify the mode propagating and resonating along the nanowires of the antennas, we apply a transmission line model to describe the performance of the antennas, and we extract two useful formulas to predict their bulk and surface sensitivities. We prove that the sensitivities of dipoles are much greater than those of similar monopoles and we show that this difference comes from the gap in dipole antennas where electric fields are strongly enhanced.
Uptake of CeO2 nanoparticles and its effect on growth of Medicago arborea In vitro plantlets.
Gomez-Garay, Aranzazu; Pintos, Beatriz; Manzanera, Jose Antonio; Lobo, Carmen; Villalobos, Nieves; Martín, Luisa
2014-10-01
The present study analyzes some effects of nano-CeO2 particles on the growth of in vitro plantlets of Medicago arborea when the nanoceria was added to the culture medium. Various concentrations of nano-CeO2 and bulk ceric oxide particles in suspension form were introduced into the agar culture medium to compare the effects of nanoceria versus ceric oxide bulk material. Germination rate and shoot dry weight were not affected by the addition of ceric oxide to the culture media. Furthermore, no effects were observed on chlorophyll content (Soil Plant Analysis Development (SPAD) measurements) due to the presence of either nano- or micro-CeO2 in the culture medium. When low concentrations of nanoceria were added to the medium, the number of trifoliate leaves and the root length increased but the root dry weight decreased. Also, the values of maximum photochemical efficiency of PSII (Fv/Fm) showed a significant decrease. Dark-adapted minimum fluorescence (F0) significantly increased in the presence of 200 mg L(-1) nanoceria and 400 mg L(-1) bulk material. Root tissues were more sensitive to nanoceria than were the shoots at lower concentrations of nanoceria. A stress effect was observed on M. arborea plantlets due to cerium uptake.
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2018-04-01
This paper examines a bulk arrival and batch service queueing system with server failure during functioning and multiple vacations. Customers arrive into the system in bulk according to a Poisson process with rate λ. Arriving customers are served in batches of minimum size 'a' and maximum size 'b' according to the general bulk service rule. At a service completion epoch, if the queue length is less than 'a', then the server leaves for a vacation (secondary job) of random length. After a vacation completion, if the queue length is still less than 'a', then the server leaves for another vacation; the server keeps taking vacations until the queue length reaches the value 'a'. The server is not stable at all times and may fail while serving customers. Even if the server fails, the service process is not interrupted: it continues for the current batch of customers at a service rate lower than the regular rate, and the server is repaired after the completion of the service at the lower rate. The probability generating function of the queue size at an arbitrary time epoch is obtained for the modelled queueing system by using the supplementary variable technique. Moreover, various performance characteristics are derived with suitable numerical illustrations.
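A quick discrete-event sketch helps make the control policy concrete. The simulation below uses invented parameters, exponential service and vacation times, and counts customers as leaving when their batch enters service (a simplification); it implements bulk Poisson arrivals, the general bulk service rule (a, b), multiple vacations whenever the queue is below 'a', and a chance of serving a batch at the degraded rate.

```python
import numpy as np

rng = np.random.default_rng(5)

lam, a, b = 1.0, 3, 8                 # group arrival rate; bulk service rule (a, b)
mu, mu_low, p_fail = 0.5, 0.3, 0.1    # regular/degraded service rates; failure prob.
mean_vac, horizon = 2.0, 50_000.0     # mean vacation length; simulated time

t = last = area = 0.0
q = 0
next_arr = rng.exponential(1 / lam)
free_at = 0.0                          # when the server next becomes available

while t < horizon:
    if next_arr < free_at:             # next event: a bulk arrival
        t = next_arr
        area += q * (t - last); last = t
        q += rng.choice([1, 2, 3], p=[0.5, 0.3, 0.2])   # random group size
        next_arr = t + rng.exponential(1 / lam)
    else:                              # next event: server becomes available
        t = free_at
        area += q * (t - last); last = t
        if q >= a:                     # take a batch of at most b customers
            q -= min(q, b)
            rate = mu_low if rng.uniform() < p_fail else mu
            free_at = t + rng.exponential(1 / rate)
        else:                          # queue below 'a': take another vacation
            free_at = t + rng.exponential(mean_vac)

print("time-average queue length:", round(area / last, 3))
```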
Moghaddam, Arasb Dabbagh; Pero, Milad; Askari, Gholam Reza
2017-01-01
In this study, the effects of main spray drying conditions such as inlet air temperature (100-140 °C), maltodextrin concentration (MDC: 30-60%), and aspiration rate (AR) (30-50%) on the physicochemical properties of sour cherry powder such as moisture content (MC), hygroscopicity, water solubility index (WSI), and bulk density were investigated. This investigation was carried out by employing response surface methodology and the process conditions were optimized by using this technique. The MC of the powder was negatively related to the linear effect of the MDC and inlet air temperature (IT) and directly related to the AR. Hygroscopicity of the powder was significantly influenced by the MDC. By increasing MDC in the juice, the hygroscopicity of the powder was decreased. MDC and inlet temperature had a positive effect, but the AR had a negative effect on the WSI of powder. MDC and inlet temperature negatively affected the bulk density of powder. By increasing these two variables, the bulk density of powder was decreased. The optimization procedure revealed that the following conditions resulted in a powder with the maximum solubility and minimum hygroscopicity: MDC = 60%, IT = 134 °C, and AR = 30% with a desirability of 0.875.
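The desirability-function step of such an optimization is easy to sketch. The response models below are invented linear stand-ins (the paper fits quadratic RSM models to its own data); the point is the mechanics: map each response onto a [0, 1] desirability, combine them with a geometric mean, and maximize over the factor ranges.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Invented response models standing in for the paper's fitted RSM equations:
def wsi(mdc, it, ar):             # water solubility index, to be maximized
    return 40 + 0.3 * mdc + 0.1 * it - 0.05 * ar

def hygro(mdc, it, ar):           # hygroscopicity, to be minimized
    return 25 - 0.2 * mdc + 0.02 * it + 0.03 * ar

def neg_desirability(x):
    mdc, it, ar = x
    d1 = np.clip((wsi(mdc, it, ar) - 45) / (75 - 45), 0, 1)     # larger-the-better
    d2 = np.clip((25 - hygro(mdc, it, ar)) / (25 - 5), 0, 1)    # smaller-the-better
    return -np.sqrt(d1 * d2)       # negative geometric mean, for a minimizer

bounds = [(30, 60), (100, 140), (30, 50)]    # MDC %, inlet T (deg C), aspiration %
res = differential_evolution(neg_desirability, bounds, seed=0)
print("optimum (MDC, IT, AR):", res.x.round(1), "  overall D:", round(-res.fun, 3))
```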
Peutzfeldt, A; Mühlebach, S; Lussi, A; Flury, S
The aim of this in vitro study was to investigate the marginal gap formation of a packable "regular" resin composite (Filtek Supreme XTE [3M ESPE]) and two flowable "bulk fill" resin composites (Filtek Bulk Fill [3M ESPE] and SDR [DENTSPLY DeTrey]) along the approximal margins of Class II restorations. In each of 39 extracted human molars (n=13 per resin composite), mesial and distal Class II cavities were prepared, placing the gingival margins below the cemento-enamel junction. The cavities were restored with the adhesive system OptiBond FL (Kerr) and one of the three resin composites. After restoration, each molar was cut in half in the oro-vestibular direction between the two restorations, resulting in two specimens per molar. Polyvinylsiloxane impressions were taken and "baseline" replicas were produced. The specimens were then divided into two groups: at the beginning of each month over the course of six months' tap water storage (37°C), one specimen per molar was subjected to mechanical toothbrushing, whereas the other was subjected to thermocycling. After artificial ageing, "final" replicas were produced. Baseline and final replicas were examined under the scanning electron microscope (SEM), and the SEM micrographs were used to determine the percentage of marginal gap formation in enamel or dentin. Paramarginal gaps were registered. The percentages of marginal gap formation were statistically analyzed with a nonparametric analysis of variance followed by Wilcoxon-Mann-Whitney tests and Wilcoxon signed rank tests, and all p-values were corrected with the Bonferroni-Holm adjustment for multiple testing (significance level: α=0.05). Paramarginal gaps were analyzed descriptively. In enamel, significantly lower marginal gap formation was found for Filtek Supreme XTE compared to Filtek Bulk Fill (p=0.0052) and SDR (p=0.0289), with no significant difference between Filtek Bulk Fill and SDR (p=0.4072). In dentin, significantly lower marginal gap formation was found for SDR compared to Filtek Supreme XTE (p<0.0001) and Filtek Bulk Fill (p=0.0015), with no significant difference between Filtek Supreme XTE and Filtek Bulk Fill (p=0.4919). Marginal gap formation in dentin was significantly lower than in enamel (p<0.0001). The percentage of restorations with paramarginal gaps varied between 0% and 85%, and for all three resin composites the percentages were markedly higher after artificial ageing. The results from this study suggest that in terms of marginal gap formation in enamel, packable resin composites may be superior to flowable "bulk fill" resin composites, while in dentin some flowable "bulk fill" resin composites may be superior to packable ones.
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-09-19
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH change in the Lancang basin is larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work, an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on power consumption is assessed using analysis of variance. The developed empirical model is validated through confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
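The empirical-model step of such a study amounts to fitting a second-order response surface by least squares; the sketch below does this on synthetic turning data (factor ranges and generating coefficients are invented, and the real work would use a designed experiment rather than random trials).

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic turning trials: cutting speed (m/min), feed (mm/rev), depth of cut (mm)
v = rng.uniform(100, 300, 27)
f = rng.uniform(0.05, 0.30, 27)
d = rng.uniform(0.5, 2.0, 27)
power = 80 + 0.5 * v + 900 * f + 60 * d + 2.0 * v * f + rng.normal(0, 10, 27)

# Full second-order response-surface design matrix:
X = np.column_stack([np.ones_like(v), v, f, d,
                     v * f, v * d, f * d, v ** 2, f ** 2, d ** 2])
beta, *_ = np.linalg.lstsq(X, power, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((power - pred) ** 2) / np.sum((power - power.mean()) ** 2)
print("R^2 of the fitted response surface:", round(r2, 4))
```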
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
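For readers unfamiliar with the CMOV principle the paper builds on, here is a minimal numpy sketch under assumed conditions (a short minimum-phase channel, a white BPSK source, and a single unit-tap constraint): minimize the output variance w^T R w subject to c^T w = 1, whose closed form is w = R^{-1}c / (c^T R^{-1}c). It is not the paper's modified constraint for colored sources.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed minimum-phase FIR channel driven by a white BPSK source.
h = np.array([1.0, 0.5, 0.2])
s = rng.choice([-1.0, 1.0], size=20000)
x = np.convolve(s, h)[:len(s)] + 0.05 * rng.standard_normal(len(s))

L = 8                                     # equalizer length (assumed)
# Covariance of received time-delay vectors [x_t, x_{t-1}, ...].
X = np.column_stack([np.roll(x, k) for k in range(L)])[L:]
R = X.T @ X / len(X)

c = np.zeros(L); c[0] = 1.0               # single constraint c^T w = 1
Ri_c = np.linalg.solve(R, c)
w = Ri_c / (c @ Ri_c)                     # w = R^{-1}c / (c^T R^{-1}c)

y = X @ w                                 # equalized output
print("output variance %.3f, symbol error rate %.4f"
      % (y.var(), np.mean(np.sign(y) != s[L:])))
```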
Comparative efficacy of storage bags, storability and damage potential of bruchid beetle.
Harish, G; Nataraja, M V; Ajay, B C; Holajjer, Prasanna; Savaliya, S D; Gedia, M V
2014-12-01
Groundnut during storage is attacked by a number of stored-grain pests, and management of these insect pests, particularly the bruchid beetle Caryedon serratus (Olivier), is of prime importance as they directly damage the pods and kernels. In this regard, the different storage bags that could be used and the duration for which groundnut can be stored were studied. The super grain bag recorded the minimum number of eggs laid, the least damage, and the minimum weight loss in pods and kernels in comparison to the other storage bags. Analysis of variance for multiple regression models was found to be significant in all bags for the variables, viz., number of eggs laid, damage to pods and kernels, and weight loss in pods and kernels throughout the season. Multiple comparison results showed that there was a high probability of eggs laid and pod damage in the lino bag, fertilizer bag and gunny bag, whereas the super grain bag was found to be more effective in managing C. serratus owing to its very low air circulation.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
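The paper's hybrid Tyler/Ledoit-Wolf estimator is more involved than what fits here; as a minimal stand-in, this sketch contrasts the raw sample covariance with plain Ledoit-Wolf shrinkage (scikit-learn) inside the global minimum-variance weight formula w = S^{-1}1 / (1'S^{-1}1), on synthetic heavy-tailed returns with the number of assets of the same order as the number of observations.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)

# Synthetic heavy-tailed returns: n observations of p assets, p ~ n.
n, p = 300, 150
returns = rng.standard_t(4, size=(n, p)) / 100.0

def gmv_weights(cov):
    # Global minimum-variance portfolio: w = S^{-1}1 / (1'S^{-1}1).
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / (ones @ x)

w_sample = gmv_weights(np.cov(returns, rowvar=False))
w_lw = gmv_weights(LedoitWolf().fit(returns).covariance_)

# Out-of-sample check on fresh draws from the same distribution.
test = rng.standard_t(4, size=(n, p)) / 100.0
print("realized risk, sample cov :", (test @ w_sample).std())
print("realized risk, Ledoit-Wolf:", (test @ w_lw).std())
```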
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
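A hedged sketch of the comparison the simulations describe: simulate a two-asset constant-conditional-correlation GARCH(1,1) process (all parameter values are illustrative) and compare the average risk of minimum variance weights computed from the conditional covariances against weights computed once from the unconditional covariance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-asset CCC-GARCH(1,1) simulation; all parameters illustrative.
T, rho = 4000, 0.4
omega, alpha, beta = 1e-6, 0.08, 0.90
Rc = np.array([[1.0, rho], [rho, 1.0]])
Lc = np.linalg.cholesky(Rc)

h = np.full(2, omega / (1 - alpha - beta))  # conditional variances
r = np.zeros((T, 2))
H = np.zeros((T, 2, 2))                     # conditional covariances
for t in range(T):
    z = Lc @ rng.standard_normal(2)         # correlated innovations
    r[t] = np.sqrt(h) * z
    D = np.diag(np.sqrt(h))
    H[t] = D @ Rc @ D
    h = omega + alpha * r[t] ** 2 + beta * h

def minvar(cov):
    # Minimum variance weights w = S^{-1}1 / (1'S^{-1}1).
    x = np.linalg.solve(cov, np.ones(2))
    return x / x.sum()

w_u = minvar(np.cov(r, rowvar=False))       # unconditional weights
risk_u = np.mean([w_u @ H[t] @ w_u for t in range(T)])
w_c = [minvar(H[t]) for t in range(T)]      # conditional weights
risk_c = np.mean([w @ H[t] @ w for t, w in enumerate(w_c)])
print("avg conditional risk: unconditional weights %.3e,"
      " conditional weights %.3e" % (risk_u, risk_c))
```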
A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.
2012-01-01
The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems; namely, nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of them. Saccade information is not analysed and discarded due to insufficient data length to provide consistent and minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of output additive noise.
Statistical indicators of collective behavior and functional clusters in gene networks of yeast
NASA Astrophysics Data System (ADS)
Živković, J.; Tadić, B.; Wick, N.; Thurner, S.
2006-03-01
We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
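As an illustration of the network construction step, the sketch below builds a minimum spanning tree from a gene-gene correlation matrix using the common distance transform d = sqrt(2(1 - ρ)); the expression data are random stand-ins, and the paper's exact construction may differ.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)

# Random stand-in for expression time series: 50 genes x 34 time points.
expr = rng.standard_normal((50, 34))
corr = np.corrcoef(expr)

# Mantegna-style metric: strongly correlated genes become close.
dist = np.sqrt(2.0 * (1.0 - corr))
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)       # sparse matrix of tree edges
edges = np.transpose(mst.nonzero())     # (gene_i, gene_j) pairs
print("tree edges:", len(edges), "total length: %.2f" % mst.sum())
```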
Gravity anomalies, compensation mechanisms, and the geodynamics of western Ishtar Terra, Venus
NASA Technical Reports Server (NTRS)
Grimm, Robert E.; Phillips, Roger J.
1991-01-01
Pioneer Venus line-of-sight orbital accelerations were utilized to calculate the geoid and vertical gravity anomalies for western Ishtar Terra on various planes of altitude z_0. The apparent depth of isostatic compensation at z_0 = 1400 km is 180 ± 20 km based on the usual method of minimum variance in the isostatic anomaly. An attempt is made here to explain this observation, as well as the regional elevation, peripheral mountain belts, and inferred age of western Ishtar Terra, in terms of one of three broad geodynamic models.
Minimal Model of Prey Localization through the Lateral-Line System
NASA Astrophysics Data System (ADS)
Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo
2003-10-01
The clawed frog Xenopus is an aquatic predator catching prey at night by detecting water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.
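The minimum-variance estimator underlying such a "minimal model" can be illustrated in a few lines: combine noisy readings from many organs with inverse-variance weights, dropping defunct organs. The signal, noise levels, and defunct fraction below are invented for the sketch and are not the paper's model of the lateral-line system.

```python
import numpy as np

rng = np.random.default_rng(4)

# Combine noisy readings of a common source signal from many organs,
# weighting by inverse noise variance (minimum-variance combination).
true_signal = 1.7                              # invented source strength
noise_var = rng.uniform(0.1, 2.0, size=30)     # per-organ noise levels
defunct = rng.random(30) < 0.2                 # some organs are defunct
readings = true_signal + np.sqrt(noise_var) * rng.standard_normal(30)

w = np.where(defunct, 0.0, 1.0 / noise_var)    # defunct organs dropped
w /= w.sum()
estimate = w @ readings
print("estimate %.3f, estimator variance %.4f"
      % (estimate, np.sum(w**2 * noise_var)))
```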
Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M
2014-01-01
In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
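A minimal sketch of the LCMV weights such a beamformer uses, under an assumed narrowband uniform-linear-array model (not the paper's ultrasonic system model): pass the selected mote's steering vector with unit gain while placing nulls on the steering vectors of unselected motes, w = R^{-1}C (C^H R^{-1} C)^{-1} f.

```python
import numpy as np

M = 16                                    # array elements (assumed ULA)
def steering(theta):
    # Narrowband half-wavelength-spacing steering vector (assumed model).
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a_sel = steering(0.0)                               # selected ND mote
a_int = np.column_stack([steering(0.4), steering(-0.7)])  # other motes

# Received-data covariance: strong backscatter from the unselected
# motes plus unit sensor noise.
R = 10.0 * (a_int @ a_int.conj().T) + np.eye(M)

C = np.column_stack([a_sel, a_int])       # constraints: gain 1, nulls
f = np.array([1.0, 0.0, 0.0])
Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)  # LCMV weights

print("gain toward selected mote:", np.abs(w.conj() @ a_sel))
print("gain toward other motes  :", np.abs(w.conj() @ a_int))
```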
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.; Taylor, Patrick J.; Trivedi, Sudhir B.; Kutcher, Susan
2012-01-01
Thermoelectric (TE) power generation is an increasingly important power generation technology. Major advantages include: no moving parts, low weight, modularity, covertness/silence, high power density, low amortized cost, and long service life with minimum or no required maintenance. Despite the low efficiency of power generation, there are many specialized needs for electrical power that TE technologies can uniquely and successfully address. Recent advances in thermoelectric materials technology have rekindled acute interest in thermoelectric power generation. We have developed single crystalline n- and p-type PbTe crystals and are also developing PbTe bulk nanocomposites using PbTe nanopowders and the emerging field-assisted sintering technology (FAST). We will discuss the materials requirements for efficient thermoelectric power generation using waste heat in the intermediate temperature range (650 to 850 K). We will present our recent results on the production of n- and p-type PbTe crystals and their thermoelectric characterization. Relative characteristics and performance of PbTe bulk single crystals and nanocomposites for thermoelectric power generation will be discussed.
Concept and design of super junction devices
NASA Astrophysics Data System (ADS)
Zhang, Bo; Zhang, Wentong; Qiao, Ming; Zhan, Zhenya; Li, Zhaoji
2018-02-01
The super junction (SJ) has been recognized as the "milestone" of the power MOSFET, and it is the most important innovation concept for the voltage-sustaining layer (VSL). The basic structure of the SJ is a typical junction-type VSL (J-VSL) with periodic N and P regions, whereas the conventional VSL is a typical resistance-type VSL (R-VSL) with only an N or P region. The step from the R-VSL to the J-VSL is a qualitative change of the VSL, introducing bulk depletion to increase the doping concentration and optimize the bulk electric field of the SJ. This paper firstly summarizes the development of the SJ, and then the optimization theory of the SJ is discussed for both vertical and lateral devices, including the non-full depletion mode, the minimum specific on-resistance optimization method and the equivalent substrate model. The SJ concept breaks the conventional "silicon limit" relationship R_on ∝ V_B^2.5, showing a quasi-linear relationship R_on ∝ V_B^1.03.
Wavefunction Properties and Electronic Band Structures of High-Mobility Semiconductor Nanosheet MoS2
NASA Astrophysics Data System (ADS)
Baik, Seung Su; Lee, Hee Sung; Im, Seongil; Choi, Hyoung Joon; Ccsaemp Team; Edl Team
2014-03-01
Molybdenum disulfide (MoS2) nanosheet is regarded as one of the most promising alternatives to the current semiconductors due to its significant band-gap and electron-mobility enhancement upon exfoliating. To elucidate such thickness-dependent properties, we have studied the electronic band structures of bulk and monolayer MoS2 by using the first-principles density-functional method as implemented in the SIESTA code. Based on the wavefunction analyses at the conduction band minimum (CBM) points, we have investigated possible origins of mobility difference between bulk and monolayer MoS2. We provide formation energies of substitutional impurities at the Mo and S sites, and discuss feasible electron sources which may induce a significant difference in the carrier lifetime. This work was supported by NRF of Korea (Grant Nos. 2009-0079462 and 2011-0018306), Nano-Material Technology Development Program (2012M3a7B4034985), and KISTI supercomputing center (Project No. KSC-2013-C3-008). Center for Computational Studies of Advanced Electronic Material Properties.
NASA Astrophysics Data System (ADS)
Ishii, Yuichiro; Tanaka, Miki; Yabuuchi, Makoto; Sawada, Yohei; Tanaka, Shinji; Nii, Koji; Lu, Tien Yu; Huang, Chun Hsien; Sian Chen, Shou; Tse Kuo, Yu; Lung, Ching Cheng; Cheng, Osbert
2018-04-01
We propose a highly symmetrical 10-transistor (10T) 2-read/write (2RW) dual-port (DP) static random access memory (SRAM) bitcell in 28 nm high-k/metal-gate (HKMG) planar bulk CMOS. It replaces the conventional 8T 2RW DP SRAM bitcell without any area overhead. It significantly improves robustness to process variations and mitigates an asymmetry issue between the true and bar bitline pairs. Measured data show that the read current (I_read) and read static noise margin (SNM) are boosted by +20% and +15 mV, respectively, by introducing the proposed bitcell with enlarged pull-down (PD) and pass-gate (PG) N-channel MOSs (NMOSs). The minimum operating voltage (V_min) of the proposed 256 kbit 10T DP SRAM is 0.53 V in the TT process corner at 25 °C under the worst access condition with read/write disturbances, improved by 90 mV (15%) compared with the conventional one.
NASA Astrophysics Data System (ADS)
J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul
2015-05-01
In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain a foothold in the market. This paper proposes a design flow around parasitic extraction to improve the design cycle time. The proposed flow uses metal fill emulation as opposed to the current flow, which performs metal fill insertion directly. By replacing metal fill structures with an emulation methodology in earlier iterations of the design flow, the runtime of the fill insertion stage can be reduced. A statistical design of experiments methodology utilizing the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1x to 6x the minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4x the minimum metal width.
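A hedged sketch of the randomized-complete-block analysis described: a two-way ANOVA with emulated fill width as the treatment and test case as the block, run on hypothetical capacitance data with statsmodels. Fisher's LSD step is omitted for brevity; it would follow as pairwise t-tests using the ANOVA error term.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(6)

# Hypothetical interconnect capacitance data: treatment = emulated fill
# width (1x..6x minimum width), block = test case size (gate count).
widths = np.repeat([1, 2, 3, 4, 5, 6], 5)
blocks = np.tile(["1k", "5k", "9k", "15k", "21k"], 6)
cap = (0.02 * widths                           # treatment effect (assumed)
       + np.tile(np.linspace(0.1, 0.5, 5), 6)  # block effect (assumed)
       + 0.01 * rng.standard_normal(30))       # noise

df = pd.DataFrame({"width": widths, "block": blocks, "cap": cap})
model = ols("cap ~ C(width) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))         # two-way ANOVA table
```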
Claw length recommendations for dairy cow foot trimming
Archer, S. C.; Newsome, R.; Dibble, H.; Sturrock, C. J.; Chagunda, M. G. G.; Mason, C. S.; Huxley, J. N.
2015-01-01
The aim was to describe variation in length of the dorsal hoof wall in contact with the dermis for cows on a single farm, and hence, derive minimum appropriate claw lengths for routine foot trimming. The hind feet of 68 Holstein-Friesian dairy cows were collected post mortem, and the internal structures were visualised using x-ray µCT. The internal distance from the proximal limit of the wall horn to the distal tip of the dermis was measured from cross-sectional sagittal images. A constant was added to allow for a minimum sole thickness of 5 mm and an average wall thickness of 8 mm. Data were evaluated using descriptive statistics and two-level linear regression models with claw nested within cow. Based on 219 claws, the recommended dorsal wall length from the proximal limit of hoof horn was up to 90 mm for 96 per cent of claws, and the median value was 83 mm. Dorsal wall length increased by 1 mm per year of age, yet 85 per cent of the null model variance remained unexplained. Overtrimming can have severe consequences; the authors propose that the minimum recommended claw length stated in training materials for all Holstein-Friesian cows should be increased to 90 mm. PMID:26220848
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
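For orientation, here is one replicate of the kind of data such a Monte Carlo study fits: a random-intercept logistic model with five clusters of five subjects (all parameter values invented). The actual estimation in the study used glmer, xtlogit, Proc NLMIXED, and BUGS; the point of the sketch is only how little information so few clusters carry about the variance component.

```python
import numpy as np

rng = np.random.default_rng(7)

# One replicate: random-intercept logistic data, 5 clusters x 5 subjects.
n_clusters, n_per = 5, 5
sigma_u = 1.0                    # true between-cluster SD (assumed)
beta0, beta1 = -1.0, 0.5         # fixed effects (assumed)

u = rng.normal(0.0, sigma_u, n_clusters)       # cluster intercepts
x = rng.standard_normal((n_clusters, n_per))   # subject covariate
logit = beta0 + beta1 * x + u[:, None]
y = rng.random((n_clusters, n_per)) < 1.0 / (1.0 + np.exp(-logit))

# With so little data, cluster-level event rates vary wildly, which is
# why variance-component estimates are unstable at these sizes.
print("cluster event rates:", y.mean(axis=1))
```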
NASA Technical Reports Server (NTRS)
Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)
2001-01-01
A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.
Particle morphology dependent superhydrophobicity in treated diatomaceous earth/polystyrene coatings
NASA Astrophysics Data System (ADS)
Sedai, Bhishma R.; Alavi, S. Habib; Harimkar, Sandip P.; McCollum, Mark; Donoghue, Joseph F.; Blum, Frank D.
2017-09-01
Superhydrophobic surfaces have been prepared from three different types of diatomaceous earth (DE) particles treated with 3-(heptafluoroisopropoxy)propyltrimethoxysilane (HFIP-TMS) and low molecular mass polystyrene. The untreated particles, consisting of CelTix DE (disk shape), DiaFil DE (rod shape) and EcoFlat DE (irregular), were studied using particle size analysis, bulk density, pore volume and surface area analysis (via Brunauer-Emmett-Teller, BET, methods). The treated particles were characterized with thermogravimetric analysis (TGA), contact angles, scanning electron microscopy, profilometry, and FTIR spectroscopy. The minimum amount of silane coupling agent on the DE surfaces required to obtain superhydrophobicity of the particles was determined and found to be dependent on the particle morphology. In the coatings made from different particles with 2.4 wt% HFIP-TMS, the minimum loadings of treated particles for superhydrophobicity were determined, with the less dense CelTix DE requiring about 30 wt%, DiaFil DE about 40 wt%, and EcoFlat DE about 60 wt% of treated particles.
Trench formation in <110> silicon for millimeter-wave switching device
NASA Astrophysics Data System (ADS)
Datta, P.; Kumar, Praveen; Nag, Manoj; Bhattacharya, D. K.; Khosla, Y. P.; Dahiya, K. K.; Singh, D. V.; Venkateswaran, R.; Kumar, Devender; Kesavan, R.
1999-11-01
Anisotropic etching using alkaline solution has been adopted to form trenches in silicon while fabricating surface-oriented bulk window SPST switches. An array pattern has been etched on silicon with good control of trench depth. KOH-water solution is seen to yield a poor surface finish. Use of too much additive, such as isopropyl alcohol, improves the surface condition but distorts the array pattern due to loss of anisotropy. However, controlled use of this additive during the last phase of trench etching is found to produce trenched arrays with the desired depth, improved surface finish and minimum distortion of lateral dimensions.
Cusp anomalous dimension and rotating open strings in AdS/CFT
NASA Astrophysics Data System (ADS)
Espíndola, R.; García, J. Antonio
2018-03-01
In the context of AdS/CFT we provide analytical support for the proposed duality between a Wilson loop with a cusp, the cusp anomalous dimension, and the meson model constructed from a rotating open string with high angular momentum. This duality was previously studied using numerical tools in [1]. Our result implies that the minimum of the profile function of the minimal area surface dual to the Wilson loop is related to the inverse of the bulk penetration of the dual string that hangs from the quark-anti-quark pair (meson) in the gauge theory.
Calculations of cosmic-ray helium transport in shielding materials
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
1993-01-01
The transport of galactic cosmic-ray helium nuclei and their secondaries through bulk shielding is considered using the straight-ahead approximation to the Boltzmann equation. A data base for nuclear interaction cross sections and secondary particle energy spectra for high-energy light-ion breakup is presented. The importance of the light ions H-2, H-3, and He-3 for cosmic-ray risk estimation is discussed, and the estimates of the fractional contribution to the neutron flux from helium interactions compared with other particle interactions are presented using a 1977 solar minimum cosmic-ray spectrum.
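The straight-ahead approximation reduces the Boltzmann equation to one-dimensional attenuation/buildup equations along the shield depth. The sketch below integrates a two-species toy version (primary helium plus one lumped secondary light-ion flux); the cross sections and multiplicity are illustrative values, not the paper's evaluated nuclear data.

```python
import numpy as np

# Illustrative macroscopic cross sections (per g/cm^2) and multiplicity.
sigma_he, sigma_sec, nu = 0.10, 0.05, 1.8

x = np.linspace(0.0, 50.0, 501)          # shield areal density (g/cm^2)
dx = x[1] - x[0]
phi_he = np.empty_like(x)                # primary helium flux
phi_sec = np.empty_like(x)               # lumped secondary light ions
phi_he[0], phi_sec[0] = 1.0, 0.0
for i in range(len(x) - 1):
    # d(phi_He)/dx  = -sigma_He * phi_He
    # d(phi_sec)/dx = -sigma_sec * phi_sec + nu * sigma_He * phi_He
    phi_he[i + 1] = phi_he[i] - sigma_he * phi_he[i] * dx
    phi_sec[i + 1] = phi_sec[i] + (nu * sigma_he * phi_he[i]
                                   - sigma_sec * phi_sec[i]) * dx

print("secondary flux peaks at %.1f g/cm^2" % x[np.argmax(phi_sec)])
```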
The intact capture of hypervelocity dust particles using underdense foams
NASA Technical Reports Server (NTRS)
Maag, Carl R.; Borg, J.; Tanner, William G.; Stevenson, T. J.; Bibring, J.-P.
1994-01-01
The impact of a hypervelocity projectile (greater than 3 km/s) is a process that subjects both the impactor and the impacted material to a large transient pressure distribution. The resultant stresses cause a large degree of fragmentation, melting, vaporization, and ionization (for normal densities). The pressure regime magnitude, however, is directly related to the density relationship between the projectile and target materials. As a consequence, a high-density impactor on a low-density target will experience the lowest level of damage. Historically, there have been three different approaches toward achieving the lowest possible target density. The first employs a projectile impinging on a foil or film of moderate density, but whose thickness is much less than the particle diameter. This results in the particle experiencing a pressure transient with both a short duration and a greatly reduced destructive effect. A succession of these films, spaced to allow nondestructive energy dissipation between impacts, will reduce the impactor's kinetic energy without allowing its internal energy to rise to the point where destruction of the projectile mass will occur. An added advantage to this method is that it yields the possibility of regions within the captured particle where a minimum of thermal modification has taken place. Polymer foams have been employed as the primary method of capturing particles with minimum degradation. The manufacture of extremely low bulk density materials is usually achieved by the introduction of voids into the material base. It must be noted, however, that a foam structure only has a true bulk density of the mixture at sizes much larger than the cell size, which for impact processes is of paramount importance. The scale at which the bulk density must still be close to that of the mixture is approximately the size of the impactor. When this density criterion is met, shock pressures during impact are minimized, which in turn maximizes the probability of survival for the impacting particle. The primary objectives of the experiment are to (1) examine the morphology of primary and secondary hypervelocity impact craters, with primary attention paid to craters caused by ejecta during hypervelocity impacts of different substrates; (2) determine the size distribution of ejecta by means of witness plates and collect fragments of ejecta from craters by means of momentum-sensitive micropore foam; (3) assess the directionality of the flux by means of penetration-hole alignment of thin films placed above the cells; and (4) capture intact the particles that perforated the thin film and entered the cell. Capture media consisted of both previously flight-tested micropore foams and aerogel. The foams had different latent heats of fusion and, accordingly, will capture particles over a range of momenta. Aerogel was incorporated into the cells to determine the minimum particle diameter that can be captured intact.
Growth and analysis of gallium arsenide-gallium antimonide single and two-phase nanoparticles
NASA Astrophysics Data System (ADS)
Schamp, Crispin T.
When evaluating the path of phase transformations in systems with nanoscopic dimensions one often relies on bulk phase diagrams for guidance because of the lack of phase diagrams that show the effect of particle size. The GaAs-GaSb pseudo-binary alloy is chosen for study to gain insight into the size dependence of solid-solubility in a two-phase system. To this end, a study is performed using independent laser ablation of high purity targets of GaAs and GaSb. The resultant samples are analyzed by transmission electron microscopy. Experimental results indicate that GaAs-GaSb nanoparticles have been formed with compositions that lie within the miscibility gap of bulk GaAs-GaSb. An unusual nanoparticle morphology resembling the appearance of ice cream cones has been observed in single component experiments. These particles are composed of a spherical cap of Ga in contact with a crystalline cone of either GaAs or GaSb. The cones take the projected 2-D shape of a triangle or a faceted gem. The liquid Ga is found to consistently be of spherical shape and wets to the widest corners of the cone, suggesting an energy minimum exists at that wetting condition. To explore this observation a liquid sphere is modeled as being penetrated by a solid gem. The surface energies of the solid and liquid, and the interfacial energy, are summed as a function of penetration depth, with the sum showing a cusped minimum at the penetration depth corresponding to the waist of the gem. The angle of contact of the liquid wetting the cone is also calculated, and Young's contact angle is found to occur when the derivative of the total energy with respect to penetration depth is zero, which can be a maximum or a minimum depending on the geometrical details. The spill-over of the meniscus across the gem corners is found to be energetically favorable when the contact angle achieves the value of the equilibrium angle; otherwise the meniscus is pinned at the corners.
Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen
2018-01-31
It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas of the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in publications, but in different expressions. A detailed deduction of new formulas of BFS variance and its average is given in this paper, showing especially their dependences on the data range used in fitting, including its length and its center respective to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variances presented in previous papers are explained and discussed.
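A hedged sketch of the peak extraction under discussion: quadratic fitting of a noisy Brillouin gain spectrum, with the fitting window iteratively re-centered on the current peak estimate in the spirit of the proposed iterative method. The spectrum shape, frequency step, and window length are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic Brillouin gain spectrum: Lorentzian peak plus noise.
f = np.arange(10.60, 11.00, 0.002)        # GHz; 2 MHz frequency step
bfs_true, width = 10.82, 0.030
spec = 1.0 / (1.0 + ((f - bfs_true) / (width / 2)) ** 2)
spec += 0.05 * rng.standard_normal(f.size)

half = 10                                 # fitting half-window (points)
center = int(np.argmax(spec))             # initial guess from raw peak
for _ in range(5):                        # iteratively re-center window
    sl = slice(max(center - half, 0), center + half + 1)
    a, b, _c = np.polyfit(f[sl], spec[sl], 2)
    bfs = -b / (2.0 * a)                  # vertex of fitted parabola
    center = int(np.argmin(np.abs(f - bfs)))

print("estimated BFS %.4f GHz (true %.2f GHz)" % (bfs, bfs_true))
```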
Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.
2018-01-01
Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388
A Comparison of Methods for Modeling Geochemical Variability in the Earth's Mantle
NASA Astrophysics Data System (ADS)
Kellogg, J. B.; Tackley, P. J.
2004-12-01
Numerical models of isotopic and chemical heterogeneity of the Earth's mantle fall into three categories, in decreasing order of computational demand. First, several authors have used chemical tracers within a full thermo-chemical convection calculation (e.g., Christensen and Hofmann, 1994; van Keken and Ballentine, 1999; Xie and Tackley, 2004). Second, Kellogg et al. (2002) proposed an extension of the traditional geochemical box model calculations in which numerous subreservoirs were tracked within the bulk depleted mantle reservoir. Third, Allègre and Lewin (1995) described a framework in which the variance in chemical and isotopic ratios was treated as a quantity intrinsic to the bulk reservoirs, complete with sources and sinks. Results from these three methods vary, particularly with respect to conclusions drawn about the meaning of the Pb-Pb pseudo-isochron. We revisit these methods in an attempt to arrive at a common understanding. By considering all three we better identify the strengths and weaknesses of each approach and allow each to inform the others. Finally, we present results from a new hybrid model that combines the complexity and regional-scale variability of the thermochemical convection models with the short length-scale sensitivity of the Kellogg et al. approach.
Prabhakar, A H; Patel, V B; Giridhar, R
1999-07-01
Two new rapid, sensitive and economical spectrophotometric methods are described for the determination of fluoxetine hydrochloride in bulk and in pharmaceutical formulations. Both methods are based on the formation of a yellow ion-pair complex due to the action of methyl orange (MO) and thymol blue (TB) on fluoxetine in acidic (pH 4.0) and basic (pH 8.0) medium, respectively. Under optimised conditions they show absorption maxima at 433 nm (MO) and 410 nm (TB), with molar absorptivities of 2.12 × 10^4 and 4.207 × 10^3 l mol^-1 cm^-1 and Sandell's sensitivities of 1.64 × 10^-2 and 0.082 microg cm^-2 per 0.001 absorbance unit for MO and TB, respectively. The colour is stable for 5 min after extraction. In both cases Beer's law is obeyed at 1-20 microg ml^-1 with MO and 4-24 microg ml^-1 with TB. The proposed methods were successfully applied to pharmaceutical preparations (capsules). The results obtained by the two methods and the E.P. (3rd edition) method were in good agreement, and statistical comparison by Student's t-test and the variance-ratio F-test showed no significant difference among the three methods.
Li-Doped Ionic Liquid Electrolytes: From Bulk Phase to Interfacial Behavior
NASA Technical Reports Server (NTRS)
Haskins, Justin B.; Lawson, John W.
2016-01-01
Ionic liquids have been proposed as candidate electrolytes for high-energy density, rechargeable batteries. We present an extensive computational analysis supported by experimental comparisons of the bulk and interfacial properties of a representative set of these electrolytes as a function of Li-salt doping. We begin by investigating the bulk electrolyte using quantum chemistry and ab initio molecular dynamics to elucidate the solvation structure of Li(+). MD simulations using the polarizable force field of Borodin and coworkers were then performed, from which we obtain an array of thermodynamic and transport properties. Excellent agreement is found with experiments for diffusion, ionic conductivity, and viscosity. Combining MD simulations with electronic structure computations, we computed the electrochemical window of the electrolytes across a range of Li(+)-doping levels and comment on the role of the liquid environment. Finally, we performed a suite of simulations of these Li-doped electrolytes at ideal electrified interfaces to evaluate the differential capacitance and the equilibrium Li(+) distribution in the double layer. The magnitude of differential capacitance is in good agreement with our experiments and exhibits the characteristic camel-shaped profile. In addition, the simulations reveal Li(+) to be highly localized to the second molecular layer of the double layer, which is supported by additional computations that find this layer to be a free energy minimum with respect to Li(+) translation.
Confined disordered strictly jammed binary sphere packings
NASA Astrophysics Data System (ADS)
Chen, D.; Torquato, S.
2015-12-01
Disordered jammed packings under confinement have received considerably less attention than their bulk counterparts and yet arise in a variety of practical situations. In this work, we study binary sphere packings that are confined between two parallel hard planes and generalize the Torquato-Jiao (TJ) sequential linear programming algorithm [Phys. Rev. E 82, 061302 (2010), 10.1103/PhysRevE.82.061302] to obtain putative maximally random jammed (MRJ) packings that are exactly isostatic with high fidelity over a large range of plane separation distances H, small to large sphere radius ratio α, and small sphere relative concentration x. We find that packing characteristics can be substantially different from their bulk analogs, which is due to what we term "confinement frustration." Rattlers in confined packings are generally more prevalent than those in their bulk counterparts. We observe that packing fraction, rattler fraction, and degree of disorder of MRJ packings generally increase with H, though exceptions exist. Discontinuities in the packing characteristics as H varies in the vicinity of certain values of H are due to associated discontinuous transitions between different jammed states. When the plane separation distance is on the order of two large-sphere diameters or less, the packings exhibit salient two-dimensional features; when the plane separation distance exceeds about 30 large-sphere diameters, the packings approach three-dimensional bulk packings. As the size contrast increases (as α decreases), the rattler fraction dramatically increases due to what we call "size-disparity" frustration. We find that at intermediate α and when x is about 0.5 (50-50 mixture), the disorder of packings is maximized, as measured by an order metric ψ that is based on the number density fluctuations in the direction perpendicular to the hard walls. We also apply the local volume-fraction variance σ_τ^2(R) to characterize confined packings and find that these packings possess essentially the same level of hyperuniformity as their bulk counterparts. Our findings are generally relevant to confined packings that arise in biology (e.g., structural color in birds and insects) and may have implications for the creation of high-density powders and improved battery designs.
Sleep and nutritional deprivation and performance of house officers.
Hawkins, M R; Vichick, D A; Silsby, H D; Kruzich, D J; Butler, R
1985-07-01
A study was conducted by the authors to compare cognitive functioning in acutely and chronically sleep-deprived house officers. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills as well as in tasks requiring high-order cognitive functioning and traditional intellective abilities. These deficits existed only for the acutely sleep-deprived group. The finding of deficits in individuals who reported five hours or less of sleep in a 24-hour period suggests that the minimum standard of four hours that has been considered by some to be adequate for satisfactory performance may be insufficient for more complex cognitive functioning.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
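The paper's smooth empirical Bayes procedure is not reproduced here; as a simpler stand-in, this Monte Carlo sketch compares the conventional estimator (the count itself, the minimum variance unbiased estimator under unit exposure) with a moment-matched parametric empirical Bayes shrinkage under a gamma prior. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Intensities drawn from a gamma prior; Poisson counts over unit time.
shape, scale, n, reps = 2.0, 1.5, 40, 2000
mse_mvu = mse_eb = 0.0
for _ in range(reps):
    lam = rng.gamma(shape, scale, n)      # true hazard rates
    x = rng.poisson(lam)                  # observed failure counts
    # Moment-match the gamma prior: E[x] = m, Var[x] = m + m^2/shape,
    # so the prior rate is b = m / (v - m); posterior mean shrinks x.
    m, v = x.mean(), x.var()
    b = m / max(v - m, 1e-9)
    eb = (x + m * b) / (1.0 + b)
    mse_mvu += np.mean((x - lam) ** 2)
    mse_eb += np.mean((eb - lam) ** 2)

print("MSE, conventional (MVU):", mse_mvu / reps)
print("MSE, empirical Bayes   :", mse_eb / reps)
```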
A Multipath Mitigation Algorithm for vehicle with Smart Antenna
NASA Astrophysics Data System (ADS)
Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang
2018-01-01
In this paper, an adaptive antenna array method is used to eliminate multipath interference in the GPS L1 band. The anti-multipath performance of the power inversion (PI) algorithm and the minimum variance distortionless response (MVDR) algorithm was simulated and verified for the antenna array, the programs were implemented in an FPGA, and actual tests were carried out on a CBD road. Theoretical analysis of the LCMV criterion and of the principles and characteristics of the PI and MVDR algorithms, together with the tests, verifies that the anti-multipath performance of the MVDR algorithm is better than that of the PI algorithm. The results provide guidance and a reference for the application of satellite navigation in vehicle engineering practice.
NASA Technical Reports Server (NTRS)
Grappin, R.; Velli, M.
1995-01-01
The solar wind is not an isotropic medium; two symmetry axes are provided, first by the radial direction (because the mean wind is radial) and second by the spiral direction of the mean magnetic field, which depends on heliocentric distance. Observations show very different anisotropy directions depending on the frequency waveband; while the large-scale velocity fluctuations are essentially radial, the smaller-scale magnetic field fluctuations are mostly perpendicular to the mean field direction, which is not the expected linear (WKB) result. We attempt to explain how these properties are related with the help of numerical simulations.
Upgrading of automobile shredder residue via innovative granulation process 'ReGran'.
Holthaus, Philip; Kappes, Moritz; Krumm, Wolfgang
2017-01-01
Stricter regulatory requirements concerning end-of-life vehicles and rising disposal costs necessitate new ways for automobile shredder residue utilisation. The shredder granulate and fibres, produced by the VW-SICON-Process, have a high energy content of more than 20 MJ kg^-1, which makes energy recovery an interesting possibility. Shredder fibres have a low bulk density of 60 kg m^-3, which prevents efficient storing and utilisation as a refuse-derived fuel. By mixing fibres with plastic-rich shredder granulate and heating the mixture, defined granules can be produced. With this 'ReGran' process, the bulk density can be enhanced by a factor of seven by embedding shredder fibres in the partially melted plastic mass. A minimum of 26-33 wt% granulate is necessary to create enough melted plastic. The process temperature should be between 240 °C and 250 °C to assure fast melting while preventing extensive outgassing. A rotational frequency of the mixing tool of 1000 r min^-1 during heating and mixing ensures a homogenous composition of the granules. During cooling, lower rotational frequencies generate bigger granules with particle sizes of up to 60 mm at 300 r min^-1. To keep outgassing to a minimum, it is suggested to melt shredder granulate first and then add shredder fibres. Adding coal, wood or tyre fluff as a third component reduces chlorine levels to less than 1 wt%. The best results can be achieved with tyre fluff. In combination with the VW-SICON-Process, ReGran produces a solid recovered fuel or 'design fuel' tailored to the requirements of specific thermal processes.
The Role of Porosity in the Formation of Coastal Boulder Deposits - Hurricane Versus Tsunami
NASA Astrophysics Data System (ADS)
Spiske, M.; Boeroecz, Z.; Bahlburg, H.
2007-12-01
Coastal boulder deposits are a consequence of high-energy wave impacts, such as storms, hurricanes or tsunami. Distinguishing parameters between storm, hurricane and tsunami origin are the distance of a deposit from the coast, boulder weight and inferred wave height. Formulas to calculate minimum wave heights of both storm and tsunami waves depend on accurate determination of boulder dimensions and lithology from the respective deposits. At present, however, boulder porosity appears to be commonly neglected, leading to significant errors in determined bulk density, especially when boulders consist of reef or coral limestone. This limits precise calculations of wave heights and hampers a clear distinction between storm, hurricane and tsunami origin. Our study uses Archimedean and optical 3D-profilometry measurements for the determination of porosities and bulk densities of reef and coral limestone boulders from the islands of Aruba, Bonaire and Curaçao (ABC Islands, Netherlands Antilles). Due to the high porosities (up to 68%) of the enclosed coral species, the weights of the reef rock boulders are as low as 20% of previously calculated values. Hence minimum calculated heights both for tsunami and hurricane waves are smaller than previously proposed. We show that hurricane action appears to be the likely depositional mechanism for boulders on the ABC Islands, since 1) our calculations result in tsunami wave heights which do not permit the overtopping of coastal platforms on the ABC Islands, 2) boulder fields lie on the windward (eastern) sides of the islands, 3) recent hurricanes transported boulders of up to 35 m^3 and 4) tsunami events affecting the coasts of the ABC Islands are scarce compared to the frequent impacts of tropical storms and hurricanes.
Duling, Matthew G.; LeBouf, Ryan F.; Cox-Ganser, Jean M.; Kreiss, Kathleen; Martin, Stephen B.; Bailey, Rachel L.
2018-01-01
Obliterative bronchiolitis in five former coffee processing employees at a single workplace prompted an exposure study of current workers. Exposure characterization was performed by observing processes, assessing the ventilation system and pressure relationships, analyzing headspace of flavoring samples, and collecting and analyzing personal breathing zone and area air samples for diacetyl and 2,3-pentanedione vapors and total inhalable dust by work area and job title. Mean airborne concentrations were calculated using the minimum variance unbiased estimator of the arithmetic mean. Workers in the grinding/packaging area for unflavored coffee had the highest mean diacetyl exposures, with personal concentrations averaging 93 parts per billion (ppb). This area was under positive pressure with respect to flavored coffee production (mean personal diacetyl levels of 80 ppb). The 2,3-pentanedione exposures were highest in the flavoring room with mean personal exposures of 122 ppb, followed by exposures in the unflavored coffee grinding/packaging area (53 ppb). Peak 15-min airborne concentrations of 14,300 ppb diacetyl and 13,800 ppb 2,3-pentanedione were measured at a small open hatch in the lid of a hopper containing ground unflavored coffee on the mezzanine over the grinding/packaging area. Three out of the four bulk coffee flavorings tested had at least a factor of two higher 2,3-pentanedione than diacetyl headspace measurements. At a coffee processing facility producing both unflavored and flavored coffee, we found the grinding and packaging of unflavored coffee generate simultaneous exposures to diacetyl and 2,3-pentanedione that were well in excess of the NIOSH proposed RELs and similar in magnitude to those in the areas using a flavoring substitute for diacetyl. These findings require physicians to be alert for obliterative bronchiolitis and employers, government, and public health consultants to assess the similarities and differences across the industry to motivate preventive intervention where indicated by exposures above the proposed RELs for diacetyl and 2,3-pentanedione. PMID:27105025
Differential effects of fine root morphology on water dynamics in the root-soil interface
NASA Astrophysics Data System (ADS)
DeCarlo, K. F.; Bilheux, H.; Warren, J.
2017-12-01
Soil water uptake by plants, particularly in the rhizosphere, is a poorly understood process in the plant and soil sciences. Our study analyzed the role of belowground plant morphology in the soil structural and water dynamics of 5 different plant species (juniper, grape, maize, poplar, maple), grown in sandy soils. Of these, the poplar system was extended to capture drying dynamics. Neutron radiography was used to characterize in-situ dynamics of the soil-water-plant system. A joint map of root morphology and soil moisture was created for the plant systems using digital image processing, where soil pixels were connected to associated root structures via minimum distance transforms. Results show interspecies emergent behavior - a sigmoidal relationship was observed between root diameter and the bulk/rhizosphere soil water content difference. Extending this as a proxy for the extent of rhizosphere development with root age, we observed a logistic growth pattern for the rhizosphere: minimal development in the early stages is superseded by rapid onset of rhizosphere formation, which then stabilizes/decays with the likely root suberization. Dynamics analysis of water content differences at the root/rhizosphere and rhizosphere/bulk soil interfaces highlights the persistently higher water content in the root at all water content and root size ranges. At the rhizosphere/bulk soil interface, we observe a shift in soil water dynamics by root size: in super fine roots, we observe that water content is primarily lower in the rhizosphere under wetter conditions, which then gradually increases to a relatively higher water content under drier conditions. This shifts to a persistently higher rhizosphere water content relative to bulk soil in both wet and dry conditions with increased root size, suggesting that, by size, the finest root structures may contribute the most to total soil water uptake in plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omar, M.S., E-mail: dr_m_s_omar@yahoo.com
2012-11-15
Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The figures shown as an example for Sn nanoparticles indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the internal atoms, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å^3 for the bulk to 57 Å^3 for 2 nm size nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. A good approach to calculating the size-dependent melting point holds from the bulk state down to about 2 nm diameter nanoparticles. Both values of lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10^-6 K^-1 for a bulk crystal down to a minimum value of 0.1 × 10^-6 K^-1 for a 6 nm diameter nanoparticle.
Thermal conductivity engineering of bulk and one-dimensional Si-Ge nanoarchitectures.
Kandemir, Ali; Ozden, Ayberk; Cagin, Tahir; Sevik, Cem
2017-01-01
Various theoretical and experimental methods are utilized to investigate the thermal conductivity of nanostructured materials; this is a critical parameter to increase performance of thermoelectric devices. Among these methods, equilibrium molecular dynamics (EMD) is an accurate technique to predict lattice thermal conductivity. In this study, by means of systematic EMD simulations, thermal conductivity of bulk Si-Ge structures (pristine, alloy and superlattice) and their nanostructured one dimensional forms with square and circular cross-section geometries (asymmetric and symmetric) are calculated for different crystallographic directions. A comprehensive temperature analysis is evaluated for selected structures as well. The results show that one-dimensional structures are superior candidates in terms of their low lattice thermal conductivity and thermal conductivity tunability by nanostructuring, such as by diameter modulation, interface roughness, periodicity and number of interfaces. We find that thermal conductivity decreases with smaller diameters or cross section areas. Furthermore, interface roughness decreases thermal conductivity with a profound impact. Moreover, we predicted that there is a specific periodicity that gives minimum thermal conductivity in symmetric superlattice structures. The decreasing thermal conductivity is due to the reducing phonon movement in the system due to the effect of the number of interfaces that determine regimes of ballistic and wave transport phenomena. In some nanostructures, such as nanowire superlattices, thermal conductivity of the Si/Ge system can be reduced to nearly twice that of an amorphous silicon thermal conductivity. Additionally, it is found that one crystal orientation, <100>, is better than the <111> crystal orientation in one-dimensional and bulk SiGe systems. Our results clearly point out the importance of lattice thermal conductivity engineering in bulk and nanostructures to produce high-performance thermoelectric materials.
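EMD predicts lattice thermal conductivity through the Green-Kubo relation kappa = V/(3 k_B T^2) ∫ <J(0)·J(t)> dt. The sketch below performs only this post-processing step, on a synthetic correlated heat-flux series standing in for the output of an MD code; every number is illustrative and the resulting value has no physical meaning.

```python
import numpy as np

rng = np.random.default_rng(10)

kB, T, V = 1.380649e-23, 300.0, 1e-26   # SI units; V is a small cell
dt = 1e-15                              # 1 fs flux sampling (assumed)
n, nlag = 50_000, 2000

# Synthetic correlated heat-flux series standing in for MD output.
J = np.zeros((n, 3))
for k in range(1, n):
    J[k] = 0.99 * J[k - 1] + 1e-9 * rng.standard_normal(3)

# Heat-flux autocorrelation <J(0).J(t)> and its time integral.
acf = np.array([np.mean(np.sum(J[:n - lag] * J[lag:], axis=1))
                for lag in range(nlag)])
kappa = V / (3 * kB * T**2) * acf.sum() * dt   # rectangle-rule integral
print("kappa = %.3e W/(m K)  (synthetic series, illustrative only)" % kappa)
```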
NASA Astrophysics Data System (ADS)
Kioussis, Nicholas
The InAs/GaSb and InAs/InAsSb type-II strain-layer superlattices (T2SLS) are of great importance and show great promise for mid-wave and long-wave infrared (IR) detectors for a variety of civil and military applications. The T2SLS offer several advantages over present day detection technologies including suppressed Auger recombination relative to the bulk MCT material, high quantum efficiencies, and commercial availability of low defect density substrates. While the T2SLS detectors are approaching the empirical Rule-07 benchmark of MCT's performance level, the dark-current density is still significantly higher than that of bulk MCT detectors. One of the major origins of dark current is associated with the Shockley-Read-Hall (SRH) process in the depletion region of the detector. I will present results of ab initio electronic structure calculations of the stability of a wide range of point defects [As and In vacancies, In, As and Sb antisites, In interstitials, As interstitials, and Sb interstitials] in various charged states in bulk InAs, InSb, and InAsSb systems and T2SLS. I will also present results of the transition energy levels. The calculations reveal that compared to defects in bulk materials, the formation and defect properties in InAs/InAsSb T2SLS can be affected by various structural features, such as strain, interface, and local chemical environment. I will present examples where the effect of strain or local chemical environment shifts the transition energy levels of certain point defects either above or below the conduction band minimum, thus suppressing their contribution to the SRH recombination.
Biophysical constraints on leaf expansion in a tall conifer.
Meinzer, Frederick C; Bond, Barbara J; Karanian, Jennifer A
2008-02-01
The physiological mechanisms responsible for reduced extension growth as trees increase in height remain elusive. We evaluated biophysical constraints on leaf expansion in old-growth Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) trees. Needle elongation rates, plastic and elastic extensibility, bulk leaf water (Ψ_L) and osmotic (Ψ_π) potential, bulk tissue yield threshold and final needle length were characterized along a height gradient in crowns of > 50-m-tall trees during the period between bud break and full expansion (May to June). Although needle length decreased with increasing height, there was no height-related trend in leaf plastic extensibility, which was highest immediately after bud break (2.9%) and declined rapidly to a stable minimum value (0.3%) over a 3-week period during which leaf expansion was completed. There was a significant positive linear relationship between needle elongation rates and plastic extensibility. Yield thresholds were consistently lower at the upper and middle crown sampling heights. The mean yield threshold across all sampling heights was 0.12 ± 0.03 MPa on June 8, rising to 0.34 ± 0.03 MPa on June 15 and 0.45 ± 0.05 MPa on June 24. Bulk leaf Ψ_π decreased linearly with increasing height at a rate of 0.004 MPa m^-1 during the period of most rapid needle elongation, but the vertical osmotic gradient was not sufficient to fully compensate for the 0.015 MPa m^-1 vertical gradient in Ψ_L, implying that bulk leaf turgor declined at a rate of about 0.011 MPa m^-1 increase in height. Although height-dependent reductions in turgor appeared to constrain leaf expansion, it is possible that the impact of reduced turgor was mitigated by delayed phenological development with increasing height, which resulted in an increase with height in the temperature during leaf expansion.
Constraining the Bulk Density of 10m-Class Near-Earth Asteroid 2012 LA
NASA Astrophysics Data System (ADS)
Mommert, Michael; Hora, Joseph; Farnocchia, Davide; Trilling, David; Chesley, Steve; Harris, Alan; Mueller, Migo; Smith, Howard
2016-08-01
The physical properties of near-Earth asteroids (NEAs) provide important hints about their origin, as well as their past physical and orbital evolution. Recent observations seem to indicate that small asteroids are different than expected: instead of being monolithic bodies, some of them resemble loose conglomerates of smaller rocks, so-called 'rubble piles'. This is surprising, since self-gravitation is practically absent in these bodies. Hence, bulk density measurements of small asteroids, from which their internal structure can be estimated, provide unique constraints on asteroid physical models, as well as models for asteroid evolution. We propose Spitzer Space Telescope observations of the 10 m-sized NEA 2012 LA, which will allow us to constrain the diameter, albedo, bulk density, macroporosity, and mass of this object. We require 30 hrs of Spitzer time to detect our target with a minimum SNR of 3 in CH2. In order to interpret our observational results, we will use the same analysis technique that we used in our successful observations and analyses of the tiny asteroids 2011 MD and 2009 BD. Our science goal, the derivation of the target's bulk density and internal structure, can only be met with Spitzer. Our observations will produce only the third comprehensive physical characterization of an asteroid in the 10 m size range (all of which have been carried out by our team, using Spitzer). Knowledge of the physical properties of small NEAs, some of which pose an impact threat to the Earth, is important for understanding their evolution and estimating their destructive potential in case of an impact, as well as for potential manned missions to NEAs for research or commercial uses.
Thermal conductivity engineering of bulk and one-dimensional Si-Ge nanoarchitectures
Kandemir, Ali; Ozden, Ayberk; Cagin, Tahir; Sevik, Cem
2017-01-01
Various theoretical and experimental methods are utilized to investigate the thermal conductivity of nanostructured materials; this is a critical parameter for increasing the performance of thermoelectric devices. Among these methods, equilibrium molecular dynamics (EMD) is an accurate technique to predict lattice thermal conductivity. In this study, by means of systematic EMD simulations, the thermal conductivity of bulk Si-Ge structures (pristine, alloy and superlattice) and their nanostructured one-dimensional forms with square and circular cross-section geometries (asymmetric and symmetric) is calculated for different crystallographic directions. A comprehensive temperature analysis is also carried out for selected structures. The results show that one-dimensional structures are superior candidates in terms of their low lattice thermal conductivity and its tunability by nanostructuring, such as by diameter modulation, interface roughness, periodicity and number of interfaces. We find that thermal conductivity decreases with smaller diameters or cross-section areas. Furthermore, interface roughness reduces thermal conductivity with a profound impact. Moreover, we predict that there is a specific periodicity that gives minimum thermal conductivity in symmetric superlattice structures. This reduction in thermal conductivity arises from suppressed phonon transport, governed by the number of interfaces, which determines the regimes of ballistic and wave-like transport. In some nanostructures, such as nanowire superlattices, the thermal conductivity of the Si/Ge system can be reduced to nearly twice the thermal conductivity of amorphous silicon. Additionally, it is found that the <100> crystal orientation outperforms the <111> orientation in one-dimensional and bulk SiGe systems. Our results clearly point out the importance of lattice thermal conductivity engineering in bulk and nanostructures to produce high-performance thermoelectric materials. PMID:28469733
NASA Astrophysics Data System (ADS)
Núñez, Sara; López, José M.; Aguado, Andrés
2012-09-01
We report the putative Global Minimum (GM) structures and electronic properties of Ga_N^+, Ga_N and Ga_N^- clusters with N = 13-37 atoms, obtained from first-principles density functional theory structural optimizations. The calculations include spin polarization and employ an exchange-correlation functional which accounts for van der Waals dispersion interactions (vdW-DFT). We find a wide diversity of structural motifs within the located GM, including decahedral, polyicosahedral, polytetrahedral and layered structures. The GM structures are also extremely sensitive to the number of electrons in the cluster, so that the structures of neutral and charged clusters differ for most sizes. The main magic numbers (clusters with an enhanced stability) are identified and interpreted in terms of electronic and geometric shell closings. The theoretical results are consistent with experimental abundance mass spectra of Ga_N^+ and with photoelectron spectra of Ga_N^-. The size dependence of the latent heats of melting, the shape of the heat capacity peaks, and the temperature dependence of the collision cross-sections, all measured for Ga_N^+ clusters, are properly interpreted in terms of the calculated cohesive energies, spectra of configurational excitations, and cluster shapes, respectively. The transition from "non-melter" to "magic-melter" behaviour, experimentally observed between Ga_30^+ and Ga_31^+, is traced back to a strong geometry change. Finally, the higher-than-bulk melting temperatures of gallium clusters are correlated with a more typically metallic behaviour of the clusters as compared to the bulk, contrary to previous theoretical claims. Electronic supplementary information (ESI) available: atomic coordinates (in xyz format and Å units) and point group symmetries for the global minimum structures reported in this paper. See DOI: 10.1039/c2nr31222k
Schenker, Gabriela; Lenz, Armando; Körner, Christian; Hoch, Günter
2014-03-01
Temperature is the most important factor driving the cold-edge distribution limit of temperate trees. Here, we identified the minimum temperatures for root growth in seven broad-leaved tree species, compared them with the species' natural elevational limits and identified morphological changes in roots produced near their physiological cold limit. Seedlings were exposed to a vertical soil-temperature gradient from 20 to 2 °C along the rooting zone for 18 weeks. In all species, the bulk of roots was produced at temperatures above 5 °C. However, the absolute minimum temperatures for root growth differed among species, ranging from 2.3 to 4.2 °C, with species that reach their natural distribution limits at higher elevations tending to have lower thermal limits for root tissue formation. In all investigated species, the roots produced at temperatures close to the thermal limit were pale, thick, unbranched and of reduced mechanical strength. Across species, the specific root length (m g(-1) root) was reduced by, on average, 60% at temperatures below 7 °C. A significant correlation of minimum temperatures for root growth with the natural high-elevation limits of the investigated species indicates species-specific thermal requirements for basic physiological processes. Although these limits are not necessarily directly causative for the upper distribution limit of a species, they seem to belong to a syndrome of adaptive processes for life at low temperatures. The anatomical changes at the cold limit likely hint at the mechanisms impeding meristematic activity at low temperatures.
Reliability analysis of the objective structured clinical examination using generalizability theory.
Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián
2016-01-01
The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
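As a concrete illustration of the G coefficients discussed above, the following minimal Python sketch shows how relative and absolute generalizability coefficients follow from a students-by-stations variance decomposition; the variance components are hypothetical, not the study's estimates.

def g_coefficients(var_students, var_stations, var_residual, n_stations):
    """Relative and absolute generalizability coefficients.
    var_students : variance among students (the object of measurement)
    var_stations : variance in station difficulty
    var_residual : student-by-station interaction plus residual error
    n_stations   : number of stations in the (decision) study
    """
    relative = var_students / (var_students + var_residual / n_stations)
    absolute = var_students / (var_students + (var_stations + var_residual) / n_stations)
    return relative, absolute

# Hypothetical components in which station variance dominates, as reported above:
print(g_coefficients(var_students=4.0, var_stations=9.0, var_residual=5.0, n_stations=18))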
Hossain, Md Golam; Saw, Aik; Alam, Rashidul; Ohtsuki, Fumio; Kamarul, Tunku
2013-09-01
Cephalic index (CI), the ratio of head breadth to head length, is widely used to categorise human populations. The aim of this study was to assess the impact of anthropometric measurements on the CI of male Japanese university students. This study included 1,215 male university students from Tokyo and Kyoto, selected using convenience sampling. Multiple regression analysis was used to determine the effect of anthropometric measurements on CI. The variance inflation factor (VIF) showed no evidence of a multicollinearity problem among independent variables. The coefficients of the regression line demonstrated a significant positive relationship between CI and minimum frontal breadth (p < 0.01), bizygomatic breadth (p < 0.01) and head height (p < 0.05), and a negative relationship between CI and morphological facial height (p < 0.01) and head circumference (p < 0.01). Moreover, the coefficient and odds ratio of logistic regression analysis showed a greater likelihood for minimum frontal breadth (p < 0.01) and bizygomatic breadth (p < 0.01) to predict round-headedness, and morphological facial height (p < 0.05) and head circumference (p < 0.01) to predict long-headedness. Stepwise regression analysis revealed bizygomatic breadth, head circumference, minimum frontal breadth, head height and morphological facial height to be the best predictor craniofacial measurements with respect to CI. The results suggest that most of the variables considered in this study appear to influence the CI of adult male Japanese students.
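A short sketch of the multicollinearity check reported above, using the variance inflation factor from statsmodels; the toy data and column names are illustrative, not the study's measurements.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.DataFrame({                       # toy stand-in for the 1,215 students
    "head_breadth":        [152., 148, 155, 150, 149, 153],
    "head_length":         [186., 190, 183, 188, 187, 184],
    "min_frontal_breadth": [108., 105, 110, 107, 104, 109],
    "bizygomatic_breadth": [138., 135, 141, 136, 134, 139],
})
df["CI"] = 100 * df["head_breadth"] / df["head_length"]   # cephalic index

X = sm.add_constant(df[["min_frontal_breadth", "bizygomatic_breadth"]])
vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print(vif)                            # values near 1 suggest no multicollinearity
model = sm.OLS(df["CI"], X).fit()     # regression of CI on the predictors
print(model.params)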
Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem
Schneider, David C.; Piatt, John F.
1986-01-01
The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that the correlation of these marine carnivores with their prey is scale-dependent.
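The scale-dependence test above can be sketched in a few lines: counts per 0.25 km bin are pooled into successively larger measurement distances and the variance-to-mean ratio is compared across scales. The toy Poisson counts below stay near 1 at all scales (the random, unaggregated case); real patchy data would show ratios rising with scale.

import numpy as np

def variance_to_mean(counts, block):
    """Pool `block` adjacent bins and return the variance/mean ratio."""
    n = len(counts) // block
    pooled = counts[: n * block].reshape(n, block).sum(axis=1)
    return pooled.var(ddof=1) / pooled.mean()

rng = np.random.default_rng(0)
counts = rng.poisson(3, size=240).astype(float)   # 240 bins of 0.25 km
for block in (1, 4, 16, 60):                      # 0.25, 1, 4 and 15 km scales
    print(f"{block * 0.25:5.2f} km: {variance_to_mean(counts, block):.2f}")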
Solar Control of Earth's Ionosphere: Observations from Solar Cycle 23
NASA Astrophysics Data System (ADS)
Doe, R. A.; Thayer, J. P.; Solomon, S. C.
2005-05-01
A nine-year database of sunlit E-region electron density altitude profiles (Ne(z)) measured by the Sondrestrom ISR has been partitioned over a 30-bin parameter space of averaged 10.7 cm solar radio flux (F10.7) and solar zenith angle (χ) to investigate long-term solar and thermospheric variability, and to validate contemporary EUV photoionization models. A two-stage filter, based on rejection of Ne(z) profiles with large Hall-to-Pedersen ratio, is used to minimize auroral contamination. The resultant filtered mean Ne(z) compares favorably with subauroral Ne measured for the same F10.7 and χ conditions at the Millstone Hill ISR. Mean Ne, as expected, increases with solar activity and decreases with large χ, and the variance around mean Ne is shown to be greatest at low F10.7 (solar minimum). ISR-derived mean Ne is compared with two EUV models: (1) a simple model without photoelectrons, based on the 5-105 nm EUVAC model solar flux [Richards et al., 1994], and (2) the GLOW model [Solomon et al., 1988; Solomon and Abreu, 1989] suitably modified to include XUV spectral components and photoelectron flux. Across the parameter space and at all altitudes, Model 2 provides a closer match to the ISR mean Ne and suggests that the photoelectron and XUV enhancements are essential to replicate measured plasma densities below 150 km. Simulated Ne variance envelopes, given by perturbing the Model 2 neutral atmosphere input by the measured extrema in Ap, F10.7, and Te, are much narrower than the ISR-derived geophysical variance envelopes. We thus conclude that long-term variability of the EUV spectrum dominates over thermospheric variability and that EUV spectral variability is greatest at solar minimum. The ISR-model comparison also provides evidence for the emergence of an H (Lyman β) Ne feature at solar maximum. Richards, P. G., J. A. Fennelly, and D. G. Torr, EUVAC: A solar EUV flux model for aeronomic calculations, J. Geophys. Res., 99, 8981, 1994. Solomon, S. C., P. B. Hays, and V. J. Abreu, The auroral 6300 Å emission: Observations and modeling, J. Geophys. Res., 93, 9867, 1988. Solomon, S. C. and V. J. Abreu, The 630 nm dayglow, J. Geophys. Res., 94, 6817, 1989.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
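A minimal sketch of the time-recursive averaging (TRA) noise estimate whose two recursion parameters the paper optimizes; the smoothing constants and the crude speech-presence test below are illustrative placeholders, not the optimized values.

import numpy as np

def tra_noise_estimate(frames_power, alpha=0.95, beta=0.8):
    """frames_power: (n_frames, n_bins) power spectra of the noisy speech."""
    noise = frames_power[0].copy()
    smoothed = frames_power[0].copy()
    for p in frames_power[1:]:
        smoothed = beta * smoothed + (1 - beta) * p   # recursively smoothed periodogram
        speech_present = smoothed > 2.0 * noise       # crude activity decision
        # update the noise track only in bins where speech is likely absent
        noise = np.where(speech_present, noise,
                         alpha * noise + (1 - alpha) * smoothed)
    return noise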
Electron Pitch-Angle Distribution in Pressure Balance Structures Measured by Ulysses/SWOOPS
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Six, N. Frank (Technical Monitor)
2002-01-01
Pressure balance structures (PBSs) are a common feature of the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes. Yamauchi et al. [2002] investigated the magnetic structure of PBSs by applying a minimum variance analysis to Ulysses magnetometer data. They found that PBSs contain structures like current sheets or plasmoids, and suggested that PBSs are associated with network activity such as magnetic reconnection in the photosphere at the base of polar plumes. We have investigated energetic electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists, and we have found evidence supporting the earlier conclusions. We find that 45 out of 53 PBSs show local bi-directional or isotropic electron flux, or flux associated with current-sheet structure. Only five events show the pitch-angle distribution expected for Alfvenic fluctuations. We conclude that PBSs do contain magnetic structures such as current sheets or plasmoids that are expected as a result of network activity at the base of polar plumes.
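The minimum variance analysis applied to the magnetometer data above reduces to an eigen-decomposition of the magnetic variance matrix; a minimal sketch:

import numpy as np

def minimum_variance_analysis(B):
    """B: (n_samples, 3) magnetic field vectors.
    Returns eigenvalues (ascending) and eigenvectors of the magnetic variance
    matrix <BiBj> - <Bi><Bj>; the eigenvector belonging to the smallest
    eigenvalue is the minimum variance direction, taken as the normal of a
    current-sheet-like structure."""
    M = np.cov(B, rowvar=False)          # 3x3 variance matrix
    eigvals, eigvecs = np.linalg.eigh(M) # eigh sorts eigenvalues ascending
    return eigvals, eigvecs              # eigvecs[:, 0] = minimum variance direction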
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-01-01
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, and then infers the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm. PMID:27657064
Minimum of the order parameter fluctuations of seismicity before major earthquakes in Japan.
Sarlis, Nicholas V; Skordas, Efthimios S; Varotsos, Panayiotis A; Nagao, Toshiyasu; Kamogawa, Masashi; Tanaka, Haruo; Uyeda, Seiya
2013-08-20
It has been shown that some dynamic features hidden in the time series of complex systems can be uncovered if we analyze them in a time domain called natural time χ. The order parameter of seismicity introduced in this time domain is the variance of χ weighted for normalized energy of each earthquake. Here, we analyze the Japan seismic catalog in natural time from January 1, 1984 to March 11, 2011, the day of the M9 Tohoku earthquake, by considering a sliding natural time window of fixed length comprised of the number of events that would occur in a few months. We find that the fluctuations of the order parameter of seismicity exhibit distinct minima a few months before all of the shallow earthquakes of magnitude 7.6 or larger that occurred during this 27-y period in the Japanese area. Among the minima, the minimum before the M9 Tohoku earthquake was the deepest. It appears that there are two kinds of minima, namely precursory and nonprecursory, to large earthquakes.
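The order parameter referred to above has a compact form: with p_k the normalized energy of the k-th event and chi_k = k/N the natural time, kappa_1 = sum_k p_k chi_k^2 - (sum_k p_k chi_k)^2. A minimal sketch, assuming the usual moment scaling of seismic energy with magnitude (proportional to 10^(1.5 M)):

import numpy as np

def kappa1(magnitudes):
    Q = 10.0 ** (1.5 * np.asarray(magnitudes))   # relative seismic energies
    p = Q / Q.sum()                              # normalized energies p_k
    chi = np.arange(1, len(p) + 1) / len(p)      # natural time chi_k = k/N
    return np.sum(p * chi**2) - np.sum(p * chi) ** 2

def kappa1_fluctuations(magnitudes, window):
    """kappa_1 over a sliding natural-time window, as in the analysis above."""
    m = np.asarray(magnitudes)
    return np.array([kappa1(m[i:i + window]) for i in range(len(m) - window + 1)])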
Sun, Xiang-ping; Lu, Peng; Jiang, Tao; Schuchardt, Frank; Li, Guo-xue
2014-01-01
Mismanagement of the composting process can result in emissions of CH4, N2O, and NH3, which cause severe environmental problems. This study aimed to determine whether CH4, N2O, and NH3 emissions from composting are affected by bulking agents during rapid composting of pig manure from the Chinese Ganqinfen system. Three bulking agents, corn stalks, spent mushroom compost, and sawdust, were used in composting with pig manure in 60 L reactors with forced aeration for more than a month. Gas emissions were measured continuously, and detailed gas emission patterns were obtained. Concentrations of NH3 and N2O from the composting pig manure mixed with corn stalks or sawdust were higher than those from the spent mushroom compost treatment, especially for the sawdust treatment, which had the highest total nitrogen loss among the three runs. Most of the nitrogen was lost in the form of NH3, which accounted for 11.16% to 35.69% of the initial nitrogen. One-way analysis of variance for NH3 emission showed no significant differences between the corn stalk and sawdust treatments, but a significant difference was noted between the spent mushroom compost and sawdust treatments. The introduction of sawdust reduced CH4 emission more than corn stalks and spent mushroom compost did. However, there were no significant differences among the three runs in total carbon loss. All treatments were mature after 30 d. PMID:24711356
Data analysis on physical and mechanical properties of cassava pellets.
Oguntunde, Pelumi E; Adejumo, Oluyemisi A; Odetunmibi, Oluwole A; Okagbue, Hilary I; Adejumo, Adebowale O
2018-02-01
This data article presents laboratory results obtained at the National Centre for Agricultural Mechanization (NCAM) on the effects of moisture content, machine speed, and die diameter of the extrusion rig on the outputs (hardness, durability, bulk density, and unit density) of cassava pellets. Analysis of variance using a randomized complete block design with a factorial arrangement was performed for each of the outputs: hardness, durability, bulk density, and unit density of the pellets. A clear description of each of these outputs is given separately using tables and figures. It was observed that for all outputs except unit density, the main factor effects as well as the two- and three-way interactions are significant at the 5% level. This means that the hardness, bulk density and durability of cassava pellets depend on the moisture content of the cassava dough, the machine speed, the die diameter of the extrusion rig, and the pairwise and three-way combinations of these factors. Higher machine speeds produced better-quality pellets at smaller die diameters, while lower machine speeds are recommended for larger die diameters. The unit density depends on the die diameter and the three-way interaction only; it is affected neither by the other machine parameters nor by the moisture content of the cassava dough. Moisture content of the cassava dough, machine speed and die diameter of the extrusion rig are significant factors to consider in pelletizing cassava. Increasing the moisture content of the cassava dough increases the quality of the cassava pellets.
NASA Astrophysics Data System (ADS)
Yang, Yan; Feng, Zhong-Ying; Zhang, Jian-Min
2018-05-01
Spin-polarized first-principles calculations are used to study the surface thermodynamic stability and the electronic and magnetic properties of various (001) surfaces of the Zr2CoSn Heusler alloy; the bulk Zr2CoSn Heusler alloy is also discussed for comparison. The conduction band minimum (CBM) of the half-metallic (HM) bulk Zr2CoSn alloy is contributed by the ZrA, ZrB and Co atoms, while the valence band maximum (VBM) is contributed by the ZrB and Co atoms. The SnSn termination is the most stable surface, with the highest spin polarization P = 77.1% among the CoCo, ZrCo, ZrZr, ZrSn and SnSn terminations of the Zr2CoSn (001) surface. In the SnSn termination of the Zr2CoSn (001) surface, the atomic partial densities of states (APDOS) of atoms in the surface, subsurface and third layers are strongly influenced by the surface effect, and the total magnetic moment (TMM) is mainly contributed by the atomic magnetic moments of atoms in the fourth to ninth layers.
Heat of capillary condensation in nanopores: new insights from the equation of state.
Tan, Sugata P; Piri, Mohammad
2017-02-15
Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) coupled with the Young-Laplace equation is a recently developed equation of state (EOS) that successfully captures not only capillary condensation but also pore critical phenomena. The development of this new EOS allows further investigation of the heats involved in condensation. Compared to conventional approaches, the EOS calculations present the temperature-dependent behavior of the heat of capillary condensation as well as that of the contributing effects. The confinement effect was found to be strongest at the pore critical point. Therefore, contrary to the bulk heat of condensation, which vanishes at the critical point, the heat of capillary condensation in small pores shows a minimum and then increases with temperature when approaching the pore critical temperature. Strong support for the existence of the pore critical point is also discussed, as the volume expansivity of the condensed phase in confinement was found to increase dramatically near the pore critical temperature. At high reduced temperatures, the Clausius-Clapeyron equation was found to apply better for confined fluids than it does for bulk fluids.
Kuzmin, Dmitry A.; Bychkov, Igor V.; Shavrov, Vladimir G.; Kotov, Leonid N.
2016-01-01
Transverse-electric (TE) surface plasmons (SPs) are a very unusual phenomenon in plasmonics. Graphene offers a unique possibility to observe these plasmons. Due to the transverse motion of carriers, the TE SP speed is usually close to that of bulk light. In this work we discuss the conditions for TE SP propagation in cylindrical graphene-based waveguides. We found that negativity of the imaginary part of graphene's conductivity is not a sufficient condition. The structure supports TE SPs when the core radius of the waveguide is larger than a critical value Rcr. The critical radius depends on the light frequency and on the difference between the permittivities inside and outside the waveguide. The minimum value of Rcr is comparable to the wavelength of the bulk wave and corresponds to interband carrier transitions in graphene. We predict that the use of multilayer graphene will decrease the critical radius. The TE SP speed may differ more significantly from that of bulk light in the case of an epsilon-near-zero core and shell of the waveguide. These results may open the door to practical applications of TE SPs in optics, including telecommunications. PMID:27225745
Optimization of intermittent microwave–convective drying using response surface methodology
Aghilinategh, Nahid; Rafiee, Shahin; Hosseinpur, Soleiman; Omid, Mahmoud; Mohtasebi, Seyed Saeid
2015-01-01
In this study, response surface methodology was used to optimize intermittent microwave–convective air drying (IMWC) parameters, employing a desirability function. The optimization factors were air temperature (40–80°C), air velocity (1–2 m/sec), pulse ratio (PR) (2–6), and microwave power (200–600 W), while the responses were rehydration ratio, bulk density, total phenol content (TPC), color change, and energy consumption. Minimum color change, bulk density and energy consumption, and maximum rehydration ratio and TPC were the criteria for optimizing the drying conditions of apple slices in IMWC. The optimum values of the process variables were 1.78 m/sec air velocity, 40°C air temperature, PR 4.48, and 600 W microwave power, characterized by the maximum desirability (0.792) using Design-Expert 8.0. Air temperature and microwave power had significant effects on the overall responses, but the role of air velocity can be ignored. Generally, the results indicated that a higher desirability value could be obtained by increasing the microwave power and decreasing the temperature. PMID:26286706
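The desirability aggregation used above can be sketched as follows: each response is mapped to [0, 1] (1 = ideal, with the direction set by the criteria above) and the overall desirability is the geometric mean of the individual values; the numbers below are placeholders.

import numpy as np

def d_max(y, lo, hi):        # larger-is-better (rehydration ratio, TPC)
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def d_min(y, lo, hi):        # smaller-is-better (color change, bulk density, energy)
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

def overall_desirability(ds):
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))   # geometric mean

# e.g. five responses already mapped to [0, 1]:
print(overall_desirability([0.9, 0.7, 0.8, 0.75, 0.85]))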
Adsorption behaviors of supercritical Lennard-Jones fluid in slit-like pores.
Li, Yingfeng; Cui, Mengqi; Peng, Bo; Qin, Mingde
2018-05-18
Understanding the adsorption behaviors of supercritical fluid in confined space is pivotal for coupling supercritical technology with membrane separation technology. Based on grand canonical Monte Carlo simulations, the adsorption behaviors of a Lennard-Jones (LJ) fluid in slit-like pores at reduced temperatures above the critical temperature, Tc* = 1.312, are investigated, and the impacts of the wall-fluid interactions, the pore width, and the temperature are taken into account. It is found that even under supercritical conditions, the LJ fluid can undergo a "vapor-liquid phase transition" in confined space, i.e., the adsorption density undergoes a sudden increase with the bulk density. A greater wall-fluid attractive potential, a smaller pore width, and a lower temperature bring about a stronger confinement effect. Besides, the adsorption pressure reaches a local minimum when the bulk density equals a certain value, independent of the wall-fluid potential or pore width. The insights in this work have both practical and theoretical significance.
Head rice rate measurement based on concave point matching
Yao, Yuan; Wu, Wei; Yang, Tianle; Liu, Tao; Chen, Wen; Chen, Chen; Li, Rui; Zhou, Tong; Sun, Chengming; Zhou, Yue; Li, Xinlu
2017-01-01
Head rice rate is an important factor affecting rice quality. In this study, an inflection point detection-based technology was applied to measure the head rice rate, combining a vibrator and a conveyor belt for bulk grain image acquisition. The edge center mode proportion method (ECMP) was applied for concave-point matching, in which concave matching and separation were performed with collaborative constraint conditions, followed by rice length calculation with a minimum enclosing rectangle (MER) to identify head rice. Finally, the head rice rate was calculated as the total area of head rice over the overall coverage of rice. Results showed that bulk grain image acquisition can be realized with the test equipment, and the accuracy rate of separation for both indica and japonica rice exceeded 95%. An increase in the number of rice kernels did not significantly affect ECMP and MER. High accuracy can be ensured when using the MER to calculate the head rice rate, keeping its relative error with respect to real values below 3%. The test results show that the method is reliable as a reference for head rice rate calculation studies. PMID:28128315
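The MER length measurement at the core of the head-rice decision can be sketched with OpenCV's rotated minimum-area rectangle; the 3/4-of-whole-kernel-length criterion below is the common milling convention and is assumed here rather than taken from the paper.

import cv2
import numpy as np

def kernel_lengths(binary_img):
    """binary_img: uint8 mask of already-separated kernels. Lengths in pixels."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lengths = []
    for c in contours:
        (_cx, _cy), (w, h), _angle = cv2.minAreaRect(c)  # rotated MER
        lengths.append(max(w, h))                        # kernel length
    return np.array(lengths)

def head_rice_rate_by_area(areas, lengths, whole_len):
    """Sum of head-rice kernel areas over total kernel area, as above."""
    head = lengths >= 0.75 * whole_len
    return areas[head].sum() / areas.sum()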
Applications of GARCH models to energy commodities
NASA Astrophysics Data System (ADS)
Humphreys, H. Brett
This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
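A minimal sketch of a GARCH(1,1) variance recursion with a deterministic seasonal term in the constant, in the spirit of the modified models described above; the parameter values are placeholders, not the thesis's estimates.

import numpy as np

def seasonal_garch_variance(returns, omega=1e-5, alpha=0.08, beta=0.90,
                            s_amp=0.3, period=52):
    """h_t = omega*(1 + s_amp*cos(2*pi*t/period)) + alpha*r_{t-1}^2 + beta*h_{t-1}"""
    r = np.asarray(returns, dtype=float)
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        season = 1.0 + s_amp * np.cos(2.0 * np.pi * t / period)  # annual cycle in weekly data
        h[t] = omega * season + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h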
NASA Astrophysics Data System (ADS)
Salvucci, G.; Rigden, A. J.; Gentine, P.; Lintner, B. R.
2013-12-01
A new method was recently proposed for estimating evapotranspiration (ET) from weather station data without requiring measurements of surface limiting factors (e.g. soil moisture, leaf area, canopy conductance) [Salvucci and Gentine, 2013, PNAS, 110(16): 6287-6291]. Required measurements include diurnal air temperature, specific humidity, wind speed, net shortwave radiation, and either measured or estimated incoming longwave radiation and ground heat flux. The approach is built around the idea that the key, rate-limiting, parameter of typical ET models, the land-surface resistance to water vapor transport, can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. This relation was found to hold over a wide range of climate conditions (arid to humid) and limiting factors (soil moisture, leaf area, energy) at a set of Ameriflux field sites. While the field tests in Salvucci and Gentine (2013) supported the minimum variance hypothesis, the analysis did not reveal the mechanisms responsible for the behavior. Instead the paper suggested, heuristically, that the results were due to an equilibration of the relative humidity between the land surface and the surface layer of the boundary layer. Here we apply this method using surface meteorological fields simulated by a global climate model (GCM), and compare the predicted ET to that simulated by the climate model. Similar to the field tests, the GCM simulated ET is in agreement with that predicted by minimizing the profile relative humidity variance. A reasonable interpretation of these results is that the feedbacks responsible for the minimization of the profile relative humidity variance in nature are represented in the climate model. The climate model components, in particular the land surface model and boundary layer representation, can thus be analyzed in controlled numerical experiments to discern the specific processes leading to the observed behavior. Results of this analysis will be presented.
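A conceptual sketch of the estimation principle, not the authors' code: choose the surface resistance r_s that minimizes the diurnally accumulated vertical variance of relative humidity, where rh_profile stands in for a surface-layer model driven by the weather-station inputs listed above.

import numpy as np
from scipy.optimize import minimize_scalar

def mean_rh_variance(rs, rh_profile):
    """rh_profile(rs) -> (n_times, n_heights) relative humidity field."""
    rh = rh_profile(rs)
    return np.mean(np.var(rh, axis=1))   # mean vertical variance over the day

def estimate_rs(rh_profile):
    result = minimize_scalar(mean_rh_variance, bounds=(1.0, 2000.0),
                             args=(rh_profile,), method="bounded")  # r_s in s m^-1
    return result.x   # ET then follows from the surface flux equations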
Solar Drivers of 11-yr and Long-Term Cosmic Ray Modulation
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Richardson, I. G.; Ling, A. G.
2011-01-01
In the current paradigm for the modulation of galactic cosmic rays (GCRs), diffusion is taken to be the dominant process during solar maxima while drift dominates at minima. Observations during the recent solar minimum challenge the pre-eminence of drift at such times: in 2009, the approx. 2 GV GCR intensity measured by the Newark neutron monitor increased by approx. 5% relative to its maximum value two cycles earlier, even though the average tilt angle in 2009 was slightly larger than that in 1986 (approx. 20 deg vs. approx. 14 deg), while solar wind B was significantly lower (approx. 3.9 nT vs. approx. 5.4 nT). A decomposition of the solar wind into high-speed streams, slow solar wind, and coronal mass ejections (CMEs; including postshock flows) reveals that the Sun transmits its message of changing magnetic field (diffusion coefficient) to the heliosphere primarily through CMEs at solar maximum and high-speed streams at solar minimum. Long-term reconstructions of solar wind B are in general agreement for the approx. 1900-present interval and can be used to reliably estimate GCR intensity over this period. For earlier epochs, however, a recent Be-10-based reconstruction covering the past approx. 10(exp 4) years shows nine abrupt and relatively short-lived drops of B to approximately 0 nT or below, with the first of these corresponding to the Spörer minimum. Such dips are at variance with the recent suggestion that B has a minimum or floor value of approx. 2.8 nT. A floor in solar wind B implies a ceiling in the GCR intensity (a permanent modulation of the local interstellar spectrum) at a given energy/rigidity. The 30-40% increase in the intensity of 2.5 GV electrons observed by Ulysses during the recent solar minimum raises an interesting paradox that will need to be resolved.
Water Content of Lunar Alkali Feldspar
NASA Technical Reports Server (NTRS)
Mills, R. D.; Simon, J. I.; Wang, J.; Alexander, C. M. O'D.; Hauri, E. H.
2016-01-01
Detection of indigenous hydrogen in a diversity of lunar materials, including volcanic glass, melt inclusions, apatite, and plagioclase, suggests water may have played a role in the chemical differentiation of the Moon. Spectroscopic data from the Moon indicate a positive correlation between water and Th. Modeling of lunar magma ocean crystallization predicts a similar chemical differentiation, with the highest levels of water in the K- and Th-rich melt residuum of the magma ocean (i.e., urKREEP). Until now, the only sample-based estimates of the water content of KREEP-rich magmas come from measurements of OH, F, and Cl in lunar apatites, which suggest a water concentration of < 1 ppm in urKREEP. From these data, the bulk water content of the magma ocean has been predicted to be <10 ppm. In contrast, water contents of 320 ppm for the bulk Moon and 1.4 wt % for urKREEP have been estimated from plagioclase in ferroan anorthosites. Results and interpretation: NanoSIMS data from granitic clasts from Apollo sample 15405,78 show that alkali feldspar, a common mineral in K-enriched rocks, can have approx. 20 ppm of water, which implies magmatic water contents of approx. 1 wt % in the high-silica magmas. This estimate is 2 to 3 orders of magnitude higher than that estimated from apatite in similar rocks. However, the Cl and F contents of apatite in chemically similar rocks suggest that these melts also had high Cl/F ratios, which leads to spuriously low water estimates from the apatite. We can only estimate the minimum water content of urKREEP (and the bulk Moon) from our alkali feldspar data because of the unknown amount of degassing that led to the formation of the granites. Assuming a reasonable 10- to 100-fold enrichment of water from urKREEP into the granites produces an estimate of 100-1000 ppm of water for the urKREEP reservoir. Combining magma ocean crystallization modeling with the 100-1000 ppm of water in urKREEP suggests a minimum bulk silicate Moon water content between 2 and 20 ppm. However, hydrogen loss was likely very significant in the evolution of the lunar mantle. Conclusions: Lunar granites crystallized between 4.3-3.8 Ga from relatively wet melts that degassed upon crystallization. The formation of these granites likely removed significant amounts of water from some mantle source regions; later mare basalt compositions, for example, predict derivation from a mantle with <10 ppm water. However, this would have been a heterogeneous process based on K distribution. Thus some, if not most, of the mantle may not have been devolatilized by this process, as evidenced by water in volcanic glasses and melt inclusions.
Shape transition with temperature of the pear-shaped nuclei in covariant density functional theory
Zhang, Wei; Niu, Yi-Fei
2017-11-10
The shape evolutions of the pear-shaped nuclei 224Ra and even-even 144-154Ba with temperature are investigated by the finite-temperature relativistic mean field theory with the treatment of pairing correlations by the BCS approach. We study the free energy surfaces as well as the bulk properties, including deformations, pairing gaps, excitation energy, and specific heat, for the global minimum. For 224Ra, three discontinuities found in the specific heat curve indicate the pairing transition at temperature 0.4 MeV, and two shape transitions at temperatures 0.9 and 1.0 MeV, namely one from quadrupole-octupole deformed to quadrupole deformed, and the other from quadrupole deformed to spherical. Furthermore, the gaps at N = 136 and Z = 88 are responsible for stabilizing the octupole-deformed global minimum at low temperatures. A similar pairing transition at T ~ 0.5 MeV and shape transitions at T = 0.5-2.2 MeV are found for even-even 144-154Ba. Finally, the transition temperatures are roughly proportional to the corresponding deformations at the ground states.
Elasticity, slowness, thermal conductivity and the anisotropies in the Mn3Cu1-xGexN compounds
NASA Astrophysics Data System (ADS)
Li, Guan-Nan; Chen, Zhi-Qian; Lu, Yu-Ming; Hu, Meng; Jiao, Li-Na; Zhao, Hao-Ting
2018-03-01
We perform first-principles calculations to systematically investigate the elastic properties, minimum thermal conductivity and anisotropy of the negative-thermal-expansion compounds Mn3Cu1-xGexN. The elastic constants, bulk modulus, shear modulus, Young's modulus and Poisson's ratio are calculated for all the compounds. The elastic constants indicate that all the compounds are mechanically stable and that Ge doping can tune their ductile character. According to the values of the percent elastic anisotropy ratios AB, AE and AG and the shear anisotropy factors A1, A2 and A3, all the Mn3Cu1-xGexN compounds are elastically anisotropic. The three-dimensional representations of the elastic moduli in space also show that all the compounds are elastically anisotropic. In addition, the acoustic wave speeds, slowness, minimum thermal conductivity and Debye temperature are also calculated. When the Cu:Ge content ratio reaches 1:1, the compound has the lowest thermal conductivity and the highest Debye temperature.
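Minimum thermal conductivity values of this kind are commonly obtained from Cahill's formula, k_min = (k_B / 2.48) n^(2/3) (v_l + 2 v_t), with n the atomic number density and v_l, v_t the sound velocities derived from the bulk and shear moduli; the sketch below implements this standard model, not necessarily the paper's exact scheme.

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sound_velocities(bulk_modulus_gpa, shear_modulus_gpa, density_kg_m3):
    B, G = bulk_modulus_gpa * 1e9, shear_modulus_gpa * 1e9
    v_l = np.sqrt((B + 4.0 * G / 3.0) / density_kg_m3)   # longitudinal
    v_t = np.sqrt(G / density_kg_m3)                     # transverse
    return v_l, v_t

def cahill_k_min(n_per_m3, v_l, v_t):
    """Cahill minimum thermal conductivity in W m^-1 K^-1."""
    return (K_B / 2.48) * n_per_m3 ** (2.0 / 3.0) * (v_l + 2.0 * v_t)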
Ion Exchange Method - Diffusion Barrier Investigations
NASA Astrophysics Data System (ADS)
Pielak, G.; Szustakowski, M.; Kiezun, A.
1990-01-01
The ion exchange method is used for manufacturing GRIN-rod lenses. In this process the ion exchange occurs between the bulk glass (rod) and a molten salt. It was found that a diffusion barrier exists at the border between the glass surface and the molten salt. Investigations of this barrier show that its value varies with ion exchange time and process temperature. It was found that when a thallium glass rod was treated in a KNO3 bath, the minimum of the potential after 24 h was at a temperature of 407°C, after 48 h at 422°C, after 72 h at 438°C, and so on. It is therefore possible to track the minimum of the diffusion barrier by changing the process temperature, making the ion exchange most effective. The time needed to obtain a suitable refractive index distribution in a process whose temperature was changed linearly from 400°C to 460°C was about 30% shorter compared with a process in which the temperature was held constant at 450°C.
Cryogenic sapphire oscillator using a low-vibration design pulse-tube cryocooler: first results.
Hartnett, John; Nand, Nitin; Wang, Chao; Floch, Jean-Michel
2010-05-01
A cryogenic sapphire oscillator (CSO) has been implemented at 11.2 GHz using a low-vibration design pulse-tube cryocooler. Compared with a state-of-the-art liquid-helium-cooled CSO in the same laboratory, the square root Allan variance of their combined fractional frequency instability is sigma(y) = 1.4 x 10(-15) tau(-1/2) for integration times 1 < tau < 10 s, dominated by white frequency noise. The minimum sigma(y) = 5.3 x 10(-16) for the two oscillators was reached at tau = 20 s. Assuming equal contributions from both CSOs, the single-oscillator phase noise is S(phi) approximately -96 dB rad(2)/Hz at 1 Hz offset from the carrier.
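The square root Allan variance quoted above is computed from fractional frequency data as sigma_y^2(tau) = 0.5 <(ybar_{k+1} - ybar_k)^2>, where ybar_k are successive averages over tau; a minimal non-overlapping sketch:

import numpy as np

def allan_deviation(y, m):
    """y: fractional frequency samples at the basic sampling rate;
    m: number of samples averaged per interval tau."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # averages over each tau
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))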
Analysis of portfolio optimization with lot of stocks amount constraint: case study index LQ45
NASA Astrophysics Data System (ADS)
Chin, Liem; Chendra, Erwinna; Sukmana, Agus
2018-01-01
To form an optimum portfolio (in the sense of minimizing risk and/or maximizing return), the commonly used model is the mean-variance model of Markowitz. However, it contains no constraint on the number of lots of stocks, and retail investors in Indonesia cannot engage in short selling. In this study we therefore extend the existing model by adding lot-amount and short-selling constraints, to obtain the minimum-risk portfolio with and without a target return. We analyse the stocks listed in the LQ45 index based on stock market capitalization. To perform this analysis, we use the Solver add-in available in Microsoft Excel.
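A minimal sketch of the lot-constrained minimum-variance problem described above: minimize w' S w with holdings in whole lots, no short sales, and the budget respected. A tiny brute-force enumeration stands in for a proper integer solver; the covariance matrix, lot prices and budget are toy values.

import itertools
import numpy as np

S = np.array([[0.04, 0.01],                # toy covariance of two stocks
              [0.01, 0.09]])
price_per_lot = np.array([500.0, 300.0])   # 1 lot = 100 shares on the IDX
budget = 3000.0

best = None
for lots in itertools.product(range(7), repeat=2):   # lots >= 0: no short selling
    lots = np.array(lots)
    cost = lots @ price_per_lot
    if not (0.0 < cost <= budget):
        continue
    w = lots * price_per_lot / cost                  # value weights of the holdings
    risk = float(w @ S @ w)                          # portfolio variance
    if best is None or risk < best[0]:
        best = (risk, tuple(lots))
print(best)   # (minimum variance, lots per stock)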
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
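The minimum-variance fusion step can be sketched as inverse-variance weighting of the two orientation estimates; this is the standard construction, with the sensor variances assumed known or pre-calibrated.

def fuse_orientation(theta_imu, var_imu, theta_cam, var_cam):
    """Combine inertial and vision estimates of head orientation."""
    w_imu, w_cam = 1.0 / var_imu, 1.0 / var_cam
    theta = (w_imu * theta_imu + w_cam * theta_cam) / (w_imu + w_cam)
    var = 1.0 / (w_imu + w_cam)   # never larger than either input variance
    return theta, var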
Quaternion-valued single-phase model for three-phase power system
NASA Astrophysics Data System (ADS)
Gou, Xiaoming; Liu, Zhiwen; Liu, Wei; Xu, Yougen; Wang, Jiabin
2018-03-01
In this work, a quaternion-valued model is proposed in lieu of Clarke's α, β transformation to convert three-phase quantities to a hypercomplex single-phase signal. The concatenated signal can be used for harmonic distortion detection in three-phase power systems. In particular, the proposed model maps all the harmonic frequencies into frequencies in the quaternion domain, while Clarke's transformation-based methods fail to detect the zero-sequence voltages. Based on the quaternion-valued model, the Fourier transform, the minimum variance distortionless response (MVDR) algorithm and the multiple signal classification (MUSIC) algorithm are presented as examples to detect harmonic distortion. Simulations are provided to demonstrate the potential of this new modeling method.
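For contrast with the quaternion model, the Clarke-transform baseline it replaces can be sketched as below: the three phase voltages map to alpha/beta components forming one complex signal, and the omitted zero-sequence row is exactly the information the quaternion model retains.

import numpy as np

CLARKE_AB = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5,           -0.5],
    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
])  # power-invariant Clarke transform, alpha/beta rows only

def clarke_complex(va, vb, vc):
    """Three-phase samples -> complex single-phase signal v_alpha + j v_beta."""
    alpha, beta = CLARKE_AB @ np.vstack([va, vb, vc])
    return alpha + 1j * beta   # feed to FFT / MVDR / MUSIC for harmonic detection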
Object aggregation using Neyman-Pearson analysis
NASA Astrophysics Data System (ADS)
Bai, Li; Hinman, Michael L.
2003-04-01
This paper presents a novel approach to: 1) distinguish military vehicle groups, and 2) identify names of military vehicle convoys in the level-2 fusion process. The data are generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. These data are processed to identify the convoys and the number of vehicles in each convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis-testing techniques based upon the Neyman-Pearson (NP) criterion. One characteristic of NP is its low error probability when a priori information is unknown. The NP approach demonstrated this advantage over a Bayesian technique.
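A minimal sketch of the Neyman-Pearson decision underlying the convoy-association step: threshold the observation so that the false-alarm probability is fixed, with no priors required. Gaussian likelihoods are assumed purely for illustration, with the mean larger under H1 so that large x favors association.

from scipy.stats import norm

def np_decide(x, mu0, sigma, p_false_alarm):
    """Return True (declare H1: association) if x exceeds the NP threshold."""
    threshold = mu0 + sigma * norm.ppf(1.0 - p_false_alarm)
    return x > threshold

# e.g. np_decide(x=2.1, mu0=0.0, sigma=1.0, p_false_alarm=0.05) -> True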
de Garnica, M L; Rosales, R S; Gonzalo, C; Santos, J A; Nicholas, R A J
2013-06-01
To isolate and characterize strains of Mycoplasma agalactiae from bulk tank and silo ewes' milk. Thirteen mycoplasma isolates were obtained from samples of sheep milk taken from bulk tanks and large silos and identified as Myc. agalactiae by PCR-DGGE. The isolates were typed by pulsed-field gel electrophoresis (PFGE), SDS-PAGE and immunoblot. The in vitro activity of 13 antimicrobials of veterinary interest was tested against these isolates. Results showed that the most effective compounds against Myc. agalactiae in vitro were clindamycin, an antibiotic not previously described as a suitable contagious agalactia (CA) treatment, with minimum inhibitory concentration (MIC) values of <0.12 μg ml(-1), and quinolones, with MIC values of <0.12-0.5 μg ml(-1), which are used as standard treatments against CA. Based on the in vitro assay, clindamycin, quinolones, tylosin and tilmicosin would be appropriate antimicrobials for CA treatment. The isolates were mostly resistant to erythromycin, indicating that it would not be a suitable choice for therapy. The isolates showed common molecular and protein profiles by PFGE and SDS-PAGE, with minor differences observed by immunoblot analysis, suggesting a clonal relationship among them. This study demonstrates the importance of the appropriate selection of antimicrobials for the treatment of CA.
Oxygen Isotope Ratios of Magnetite in CI-Like Clasts from a Polymict Ureilite
NASA Technical Reports Server (NTRS)
Kita, N. T.; Defouilloy, C.; Goodrich, C. A.; Zolensky, M. E.
2017-01-01
Polymict ureilites contain a variety of non-ureilitic clasts, ranging from less than a millimeter to centimeters in size, many of which can be identified as chondritic and achondritic meteorite types. Among them, dark clasts have been observed in polymict ureilites that are similar to CI chondrites in mineralogy, containing phyllosilicates, magnetite, sulfide and carbonates. Bulk oxygen isotope analyses of a dark clast in Nilpena plot along the CCAM line and above the terrestrial fractionation line, on the 16O-poor extension of the main-group ureilite trend and clearly different from bulk CI chondrites. One possible origin of such dark clasts is that they represent aqueously altered precursors of the ureilite parent body (UPB) that were preserved on the cold surface of the UPB. Oxygen isotope analyses of dark clasts are key to better understanding their origins. Oxygen isotope ratios of magnetite are of special interest because they reflect the compositions of the fluids in asteroidal bodies. In primitive chondrites, Delta17O (= delta17O - 0.52 x delta18O) values of magnetite are always higher than those of the bulk meteorites and represent minimum Delta17O values of the initial 16O-poor aqueous fluids in the parent body. Previous SIMS analyses of magnetite and fayalite in dark clasts from the DaG 319 polymict ureilite were analytically difficult due to small grain sizes, though the data indicated positive Delta17O values of 3-4 per mille, higher than that of the dark clast in Nilpena (1.49 per mille).
Meaningless comparisons lead to false optimism in medical machine learning
Kording, Konrad; Recht, Benjamin
2017-01-01
A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, having an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to explain over 80% of the variance of some mood measures in the population by simply guessing that each patient has their own average mood (the patient-specific baseline). Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, but is, obviously, entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose "user lift", which reduces these systematic errors in the evaluation of personalized medical monitoring. PMID:28949964
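The point about baselines can be reproduced in a few lines: with stable individual moods, guessing each patient's own average already explains most of the population-level variance (toy numbers below, chosen only to mimic the effect described).

import numpy as np

rng = np.random.default_rng(1)
patient_means = rng.normal(5.0, 2.0, size=(50, 1))            # stable per-patient moods
scores = patient_means + rng.normal(0.0, 0.5, size=(50, 30))  # daily measurements

ss_total = np.sum((scores - scores.mean()) ** 2)              # vs population baseline
ss_resid = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2)  # vs own mean
print(1.0 - ss_resid / ss_total)   # "variance explained" by the trivial predictor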
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report investigates, for the first time, cell number as a potential source of inter-treatment bias in gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment cell-number bias. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of these bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.
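A minimal sketch of the selection logic described above, under the assumption that a good correction method should leave reference genes with both small inter-treatment differences (bias) and small inter-replicate variance; the method names and expression values below are made up.

```python
import numpy as np

# Hypothetical illustration: score each candidate correction method by the
# inter-treatment bias and inter-replicate variance it leaves in reference
# genes; prefer the method with the smallest residual bias.
rng = np.random.default_rng(1)
# corrected reference-gene expression: shape (treatment, replicate, gene)
corrected = {
    "total_count":  rng.normal(10.0, 1.0, (2, 3, 5)) + np.array([0.8, 0.0])[:, None, None],
    "median_ratio": rng.normal(10.0, 0.7, (2, 3, 5)) + np.array([0.1, 0.0])[:, None, None],
}

for method, x in corrected.items():
    treatment_means = x.mean(axis=(1, 2))                    # per-treatment mean
    bias = abs(treatment_means[0] - treatment_means[1])      # inter-treatment bias
    var = x.var(axis=1).mean()                               # inter-replicate variance
    print(f"{method}: bias={bias:.2f}, replicate variance={var:.2f}")
```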
Refining a case-mix measure for nursing homes: Resource Utilization Groups (RUG-III).
Fries, B E; Schneider, D P; Foley, W J; Gavazzi, M; Burke, R; Cornelius, E
1994-07-01
A case-mix classification system for nursing home residents is developed, based on a sample of 7,658 residents in seven states. Data included a broad assessment of resident characteristics, corresponding to items of the Minimum Data Set, and detailed measurement of nursing staff care time over a 24-hour period and therapy staff time over a 1-week period. The Resource Utilization Groups, Version III (RUG-III) system, with 44 distinct groups, achieves 55.5% variance explanation of total (nursing and therapy) per diem cost and meets goals of clinical validity and payment incentives. The mean resource use (case-mix index) of the groups spans a nine-fold range. The RUG-III system improves on an earlier version not only by increasing the variance explanation (from 43%), but, more importantly, by identifying residents with "high tech" procedures (e.g., ventilators, respirators, and parenteral feeding) and those with cognitive impairments; by using a better measure of multiple activities of daily living; and by providing explicit qualifications for the Medicare nursing home benefit. RUG-III is being implemented for nursing home payment in 11 states (six as part of a federal multistate demonstration) and can be used in management, staffing level determination, and quality assurance.
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
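A sketch of the interval such a central limit theorem licenses, assuming ψ̂ₙ is the TMLE-based estimate, σ̂ₙ² consistently estimates the variance of the efficient influence curve at the limiting distribution, and ξ₁₋α/₂ is the standard normal quantile:

\[
\left[\; \hat{\psi}_n - \xi_{1-\alpha/2}\,\frac{\hat{\sigma}_n}{\sqrt{n}},\;\; \hat{\psi}_n + \xi_{1-\alpha/2}\,\frac{\hat{\sigma}_n}{\sqrt{n}} \;\right],
\]

which covers the target mean reward with asymptotic level 1 − α.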
Bulk density of small meteoroids
NASA Astrophysics Data System (ADS)
Kikwaya, J.-B.; Campbell-Brown, M.; Brown, P. G.
2011-06-01
Aims: Here we report on precise metric and photometric observations of 107 optical meteors, which were simultaneously recorded at multiple stations using three different intensified video camera systems. The purpose is to estimate bulk meteoroid density and to link small meteoroids to their parent bodies based on the dynamical and physical density values expected for different small-body populations, in order to better understand and explain the dynamical evolution of meteoroids after release from their parent bodies. Methods: The video systems used had image sizes ranging from 640 × 480 to 1360 × 1036 pixels, with pixel scales from 0.01° per pixel to 0.05° per pixel, and limiting meteor magnitudes ranging from Mv = +2.5 to +6.0. We find that 78% of our sample show noticeable deceleration, allowing more robust constraints to be placed on density estimates. The density of each meteoroid is estimated by simultaneously fitting the observed deceleration and lightcurve using a model based on thermal fragmentation and conservation of energy and momentum. The entire phase space of the model free parameters is explored for each event to find ranges of parameters which fit the observations within the measurement uncertainty. Results: (a) We have analysed our data by first associating each of our events with one of five meteoroid classes. The average density of meteoroids whose orbits are asteroidal and chondritic (AC) is 4200 kg m⁻³, suggesting an asteroidal parentage, possibly related to the high-iron content population. Meteoroids with orbits belonging to Jupiter-family comets (JFCs) have an average density of 3100 ± 300 kg m⁻³. This high density is found for all meteoroids with JFC-like orbits and supports the notion that the refractory material reported from the Stardust measurements of 81P/Wild 2 dust is common among the broader JFC population. This high density is also the average bulk density for the 4 meteoroids with orbits belonging to the Ecliptic shower-type class (ES), also related to JFCs. Both categories, we suggest, are chondritic based on their high bulk density. Meteoroids on HT (Halley-type) orbits have a minimum bulk density of 360 (+400/−100) kg m⁻³ and a maximum of 1510 (+400/−900) kg m⁻³. This is consistent with many previous works which suggest bulk cometary meteoroid density is low. SA (Sun-approaching) meteoroids show a density spread from 1000 kg m⁻³ to 4000 kg m⁻³, reflecting multiple origins. (b) We found two different meteor showers in our sample: Perseids (10 meteoroids, ~11% of our sample) with an average bulk density of 620 kg m⁻³ and Northern Iota Aquariids (4 meteoroids) with an average bulk density of 3200 kg m⁻³, consistent with the notion that the NIA derive from 2P/Encke.
The Principle of Energetic Consistency
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
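A minimal worked form of the identity underlying this principle, assuming the conserved total energy is the quadratic ½xᵀx in the natural energy variables x, and writing x̄ = E[x | y] and P = Cov(x | y) for the conditional mean and covariance given the observations y:

\[
\mathbb{E}\!\left[\tfrac{1}{2}\,x^{\mathsf{T}}x \,\middle|\, y\right] \;=\; \tfrac{1}{2}\,\bar{x}^{\mathsf{T}}\bar{x} \;+\; \tfrac{1}{2}\operatorname{tr}(P).
\]

If the dynamics conserve ½xᵀx, the left-hand side is constant between observations, so the energy of the conditional mean and the total variance tr(P) can only trade off against each other; any spurious numerical loss of tr(P) must therefore surface as an energy imbalance.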
Meta-omic signatures of microbial metal and nitrogen cycling in marine oxygen minimum zones
Glass, Jennifer B.; Kretz, Cecilia B.; Ganesh, Sangita; Ranjan, Piyush; Seston, Sherry L.; Buck, Kristen N.; Landing, William M.; Morton, Peter L.; Moffett, James W.; Giovannoni, Stephen J.; Vergin, Kevin L.; Stewart, Frank J.
2015-01-01
Iron (Fe) and copper (Cu) are essential cofactors for microbial metalloenzymes, but little is known about the metalloenzyme inventory of anaerobic marine microbial communities despite their importance to the nitrogen cycle. We compared dissolved O2, NO3−, NO2−, Fe and Cu concentrations with nucleic acid sequences encoding Fe- and Cu-binding proteins in 21 metagenomes and 9 metatranscriptomes from Eastern Tropical North and South Pacific oxygen minimum zones and 7 metagenomes from the Bermuda Atlantic Time-series Station. Dissolved Fe concentrations increased sharply at upper oxic-anoxic transition zones, with the highest Fe:Cu molar ratio (1.8) occurring at the anoxic core of the Eastern Tropical North Pacific oxygen minimum zone and matching the predicted maximum ratio based on data from diverse ocean sites. The relative abundance of genes encoding Fe-binding proteins was negatively correlated with O2, driven by significant increases in genes encoding Fe-proteins involved in dissimilatory nitrogen metabolisms under anoxia. Transcripts encoding cytochrome c oxidase, the Fe- and Cu-containing terminal reductase in aerobic respiration, were positively correlated with O2 content. A comparison of the taxonomy of genes encoding Fe- and Cu-binding vs. bulk proteins in OMZs revealed that Planctomycetes represented a higher percentage of Fe genes while Thaumarchaeota represented a higher percentage of Cu genes, particularly at oxyclines. These results are broadly consistent with higher relative abundance of genes encoding Fe-proteins in the genome of a marine planctomycete vs. higher relative abundance of genes encoding Cu-proteins in the genome of a marine thaumarchaeote. These findings highlight the importance of metalloenzymes for microbial processes in oxygen minimum zones and suggest preferential Cu use in oxic habitats with Cu > Fe vs. preferential Fe use in anoxic niches with Fe > Cu. PMID:26441925
Fine-scale variability of isopycnal salinity in the California Current System
NASA Astrophysics Data System (ADS)
Itoh, Sachihiko; Rudnick, Daniel L.
2017-09-01
This paper examines the fine-scale structure and seasonal fluctuations of isopycnal salinity in the California Current System from 2007 to 2013, using temperature and salinity profiles obtained from a series of underwater glider surveys. The seasonal mean distributions of the spectral power of the isopycnal salinity gradient, averaged over submesoscale (12-30 km) and mesoscale (30-60 km) ranges along three survey lines off Monterey Bay, Point Conception, and Dana Point, were obtained from 298 transects. The mesoscale and submesoscale variance increased as coastal upwelling caused the isopycnal salinity gradient to steepen. Areas of elevated variance were clearly observed around the salinity front during the summer and then spread offshore through the fall and winter. High fine-scale variances were typically observed above 25.8 kg m⁻³ and decreased with depth to a minimum at around 26.3 kg m⁻³. The mean spectral slope of the isopycnal salinity gradient with respect to wavenumber was 0.19 ± 0.27 over the horizontal scale of 12-60 km, and 31%-35% of the spectra had significantly positive slopes. In contrast, the spectral slope over 12-30 km was mostly flat, with a mean value of -0.025 ± 0.32. An increase in submesoscale variability accompanying a steepening of the spectral slope was often observed in inshore areas, e.g., off Monterey Bay in winter, where a sharp front developed between the California Current and the California Undercurrent, and in the lower layers of the Southern California Bight, where vigorous interaction between a synoptic current and bottom topography is to be expected.
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used instead to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
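A minimal sketch of weighted least squares on a polynomial model, with made-up data whose error variance grows with the predictor: the weights are the inverse error variances, and the weighted normal equations (XᵀWX)β = XᵀWy give the estimates.

```python
import numpy as np

# Hypothetical illustration: WLS when error variance differs across
# observations. The data, variances, and coefficients are invented.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
sigma = 0.5 + 0.3 * x                          # non-constant error std dev
y = 2.0 + 1.5 * x - 0.1 * x**2 + rng.normal(0, sigma)

X = np.column_stack([np.ones(n), x, x**2])     # polynomial design matrix
W = np.diag(1.0 / sigma**2)                    # weights = inverse variances

# Solve the weighted normal equations (X'WX) beta = X'W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)   # intercept, linear, and squared-term estimates
```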
Security practices and regulatory compliance in the healthcare industry.
Kwon, Juhee; Johnson, M Eric
2013-01-01
Securing protected health information is a critical responsibility of every healthcare organization. We explore information security practices and identify practice patterns that are associated with improved regulatory compliance. We employed Ward's minimum-variance cluster analysis based on the adoption of security practices. Variance between organizations was measured using dichotomous data indicating the presence or absence of each security practice. Using t tests, we identified the relationships between the clusters of security practices and their regulatory compliance. We utilized the results from the Kroll/Healthcare Information and Management Systems Society telephone-based survey of 250 US healthcare organizations, including adoption status of security practices, breach incidents, and perceived compliance levels on Health Information Technology for Economic and Clinical Health, Health Insurance Portability and Accountability Act, Red Flags rules, Centers for Medicare and Medicaid Services, and state laws governing patient information security. Our analysis identified three clusters (which we call leaders, followers, and laggers) based on the variance of security practice patterns. The clusters differed significantly in non-technical rather than technical practices, and the highest level of compliance was associated with hospitals that employed a balanced approach between technical and non-technical practices (or between one-off and cultural practices). Hospitals at the highest level of compliance were significantly more active in managing third parties' breaches and in training. Audit practices were important to those who scored in the middle of the pack on compliance. Our results provide security practice benchmarks for healthcare administrators and can help policy makers in developing strategic and practical guidelines for practice adoption.
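A minimal sketch of the clustering step, assuming a 0/1 matrix of practice-adoption flags per organization (the dichotomous data described above); Ward's minimum-variance linkage is built and then cut into three groups. All data here are made up.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical illustration: Ward's minimum-variance clustering of
# organizations described by 0/1 adoption flags for security practices.
rng = np.random.default_rng(1)
practices = rng.integers(0, 2, size=(250, 20)).astype(float)  # 250 orgs, 20 practices

Z = linkage(practices, method="ward")               # Ward linkage (Euclidean)
clusters = fcluster(Z, t=3, criterion="maxclust")   # cut tree into 3 groups
print(np.bincount(clusters)[1:])                    # cluster sizes
```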
Estimating fluvial wood discharge from timelapse photography with varying sampling intervals
NASA Astrophysics Data System (ADS)
Anderson, N. K.
2013-12-01
There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse-interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating the average volume per log. Proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n: each trial drew n = 100 samples, was repeated 10,000 times, and trial means were averaged to obtain an estimate for each sampling interval.
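A minimal sketch of that equal-n bootstrap, with made-up wood-transport counts standing in for the interval datasets:

```python
import numpy as np

# Hypothetical illustration: resample each coarser-interval dataset to
# n = 100, many times, and average, putting all sampling intervals on a
# common footing for comparison. All counts are invented.
rng = np.random.default_rng(2)

def bootstrap_mean(counts, n=100, trials=10_000):
    """Average of `trials` bootstrap means, each from n resampled values."""
    draws = rng.choice(counts, size=(trials, n), replace=True)
    return draws.mean(axis=1).mean()

one_min = rng.poisson(3.0, size=5000)       # stand-in 1-minute dataset
fifteen_min = rng.poisson(3.0, size=300)    # stand-in 15-minute dataset
print(bootstrap_mean(one_min), bootstrap_mean(fifteen_min))
```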
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-07-01
Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.
2017-09-01
This study focused on an optimization method for a ceramic proppant material with both low cost and high performance that meets the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). An orthogonal experimental design of L9(3⁴) was employed to study the significance sequence of three factors: the weight ratio of white clay to bauxite, the dolomite content, and the sintering temperature. For crush resistance, both the range analysis and the variance analysis showed that the optimal experimental condition was a white clay to bauxite weight ratio of 3:7, a dolomite content of 3 wt.%, and a sintering temperature of 1350°C. For bulk density, the most important factor was the sintering temperature, followed by the dolomite content, and then the ratio of white clay to bauxite.
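A minimal sketch of range analysis on an L9-style orthogonal array (three factors at three levels; the array assignment and response values below are made up): the factor whose level means span the widest range is ranked most significant.

```python
import numpy as np

# Hypothetical illustration of range analysis for an L9(3^4)-style design,
# using three of the array's columns and invented crush-resistance values.
levels = np.array([              # 9 runs x 3 factors, levels coded 0/1/2
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
response = np.array([21., 24., 25., 22., 26., 23., 20., 25., 27.])

for f, name in enumerate(["clay:bauxite", "dolomite", "temperature"]):
    means = [response[levels[:, f] == lv].mean() for lv in range(3)]
    rng_f = max(means) - min(means)          # the "range" of this factor
    print(f"{name}: level means={np.round(means, 2)}, range={rng_f:.2f}")
```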
Adaptive color halftoning for minimum perceived error using the blue noise mask
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1997-04-01
Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moiré patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moiré patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, that the 4-mask scheme results in minimum luminance error but maximum chrominance error, and that the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying 2 mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
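A minimal sketch of the evaluation metric described above, assuming the original and halftoned images have already been converted to CIE-LAB (the arrays here are made-up stand-ins): RMS error is computed separately for the luminance (L*) and chrominance (a*, b*) channels.

```python
import numpy as np

# Hypothetical illustration: channel-wise RMS error in CIE-LAB between a
# continuous-tone image and its halftoned rendering. Both arrays invented.
rng = np.random.default_rng(3)
original_lab = rng.uniform([0, -40, -40], [100, 40, 40], size=(64, 64, 3))
halftone_lab = original_lab + rng.normal(0, 2, size=(64, 64, 3))

err = halftone_lab - original_lab
rms_luminance = np.sqrt(np.mean(err[..., 0] ** 2))     # L* channel
rms_chrominance = np.sqrt(np.mean(err[..., 1:] ** 2))  # a*, b* channels
print(rms_luminance, rms_chrominance)
```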
Brühl, Albert; Planer, Katarina; Hagel, Anja
2018-01-01
A validity test was conducted to determine how care level-based nurse-to-resident ratios compare with actual daily care times per resident in Germany. Stability across different long-term care facilities was tested. Care level-based nurse-to-resident ratios were compared with the standard minimum nurse-to-resident ratios. Levels of care are determined by classification authorities in long-term care insurance programs and are used to distribute resources. Care levels are a powerful tool for classifying authorities in long-term care insurance. We used observer-based measurement of assignable direct and indirect care time in 68 nursing units for 2028 residents across 2 working days. Organizational data were collected at the end of the quarter in which the observation was made. Data were collected from January to March, 2012. We used a null multilevel model with random intercepts and multilevel models with fixed and random slopes to analyze data at both the organization and resident levels. A total of 14% of the variance in total care time per day was explained by membership in nursing units. The impact of care levels on care time differed significantly between nursing units. Forty percent of residents at the lowest care level received less than the standard minimum registered nursing time per day. For facilities that have been significantly disadvantaged in the current staffing system, a higher minimum standard will function more effectively than a complex classification system without scientific controls.
NASA Astrophysics Data System (ADS)
He, Minhui; Yang, Bao; Datsenko, Nina M.
2014-08-01
The recent unprecedented warming found in different regions has attracted much attention in the past years. How temperature has really changed on the Tibetan Plateau (TP) remains unknown, since very few high-resolution temperature series are available for this region, where large areas of snow and ice exist. Herein, we develop two Juniperus tibetica Kom. tree-ring width chronologies from different elevations. We found that the two tree-ring series share only high-frequency variability. Correlation, response function and partial correlation analyses indicate that prior-year annual (January-December) minimum temperature is most responsible for juniper radial growth at the higher belt, while the tree-ring width chronology at the lower belt contains more of a precipitation signal and was thus excluded from further analysis. The tree growth-climate model accounted for 40% of the total variance in actual temperature during the common period 1957-2010. The detected temperature signal is further robustly verified by other results. Consequently, a six-century-long annual minimum temperature history was recovered for the first time for the Yushu region, central TP. Interestingly, the rapid warming trend during the past five decades is identified as a significant cold phase in the context of the past 600 years. The recovered temperature series reflects low-frequency variability consistent with other temperature reconstructions over the whole TP region. Furthermore, the present recovered temperature series is associated with the Asian monsoon strength on decadal to multidecadal scales over the past 600 years.
Zommick, Daniel H; Knowles, Lisa O; Pavek, Mark J; Knowles, N Richard
2014-06-01
The effects of soil temperature during tuber development on physiological processes affecting retention of postharvest quality in low-temperature sweetening (LTS) resistant and susceptible potato cultivars were investigated. 'Premier Russet' (LTS resistant), AO02183-2 (LTS resistant) and 'Ranger Russet' (LTS susceptible) tubers were grown at 16 (ambient), 23 and 29 °C during bulking (111-164 DAP) and maturation (151-180 DAP). Bulking at 29 °C virtually eliminated yield despite vigorous vine growth. Tuber specific gravity decreased as soil temperature increased during bulking, but was not affected by temperature during maturation. Bulking at 23 °C and maturation at 29 °C induced higher reducing sugar levels in the proximal (basal) ends of tubers, resulting in non-uniform fry color at harvest, and abolished the LTS-resistant phenotype of 'Premier Russet' tubers. AO02183-2 tubers were more tolerant of heat for retention of LTS resistance. Higher bulking and maturation temperatures also accelerated LTS and loss of process quality of 'Ranger Russet' tubers, consistent with increased invertase and lower invertase inhibitor activities. During LTS, tuber respiration fell rapidly to a minimum as temperature decreased from 9 to 4 °C, followed by an increase to a maximum as tubers acclimated to 4 °C; respiration then declined over the remaining storage period. The magnitude of this cold-induced acclimation response correlated directly with the extent of buildup in sugars over the 24-day LTS period and thus reflected the effects of in-season heat stress on propensity of tubers to sweeten and lose process quality at 4 °C. While morphologically indistinguishable from control tubers, tubers grown at elevated temperature had different basal metabolic (respiration) rates at harvest and during cold acclimation, reduced dormancy during storage, greater increases in sucrose and reducing sugars and associated loss of process quality during LTS, and reduced ability to improve process quality through reconditioning. Breeding for retention of postharvest quality and LTS resistance should consider strategies for incorporating more robust tolerance to in-season heat stress.
NASA Astrophysics Data System (ADS)
Kohán, Balázs; Tyler, Jonathan; Jones, Matthew; Kern, Zoltán
2017-04-01
Water stable isotopes are important natural tracers of the hydrological cycle on global, regional and local scales. Daily precipitation water samples were collected from 70 sites over the British Isles on the 23rd, 24th and 25th January 2012 [1]. Samples were collected as part of a pilot study for the British Isotopes in Rainfall Project, a community engagement initiative, in collaboration with volunteer weather observers and the UK Met Office. The spatial correlation structure of the daily precipitation stable oxygen isotope composition (δ18OP) was explored by variogram analysis [2]. Since variograms of the raw data suggested a pronounced trend, owing to the spatial trend discussed in the original study [1], a second-order polynomial trend was removed from the raw δ18OP data and variograms were calculated on the residuals. Directional experimental semivariograms were calculated (steps: 10°, tolerance: 30°) and aggregated into variogram surface plots to explore the spatial dependence structure of daily δ18OP. Each daily data set produced distinct variogram plots.
- A well-expressed anisotropic structure can be seen for Jan 23. The lowest and highest variances were observed in the SW-NE and NNE-SSW directions, respectively. Meteorological observations showed that the majority of the atmospheric flow was SW on this day, so the direction of low variance seems to reflect this flow direction, while the maximum variance might reflect the moisture variance along the elongation of the frontal system.
- A less pronounced but still expressed anisotropic structure was found for Jan 24, when a warm front passed the British Isles perpendicular to the east coast, leading to a characteristic east-west δ18OP gradient suggestive of progressive rainout. The low-variance central zone has a 100 km radius, which might correspond to the width of the warm front zone. Although the axis of minimum variance was similarly SW-NE, the zone of maximum variance was broader and practically perpendicular to it. In this case, however, the directions of the axes appear misaligned with the flow direction.
- No similarly characteristic pattern was observed in the last variogram, calculated from the Jan 25 data set.
These preliminary results suggest that variogram analysis is a promising approach to link δ18OP patterns to atmospheric processes. Funding: NKFIH SNN118205 / ARRS N1-0054. References: 1. Tyler, J.J., Jones, M., Arrowsmith, C., Allott, T., Leng, M.J. (2016). Spatial patterns in the oxygen isotope composition of daily rainfall in the British Isles. Climate Dynamics 47:1971-1987. 2. Webster, R., Oliver, M.A. (2007). Geostatistics for Environmental Scientists. John Wiley & Sons, Chichester.
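A minimal sketch of a directional experimental semivariogram of the kind described above (30° angular tolerance around a chosen azimuth), with made-up station coordinates and detrended δ18OP residuals:

```python
import numpy as np

# Hypothetical illustration: directional semivariogram for scattered point
# measurements. Coordinates and residual values are invented stand-ins.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 500, size=(70, 2))     # station coordinates, km
z = rng.normal(-8.0, 1.0, size=70)         # detrended d18O residuals, permil

def directional_semivariogram(xy, z, azimuth, tol=30.0, nbins=10, maxlag=300.0):
    """Mean half squared differences binned by lag, keeping only pairs whose
    separation direction lies within `tol` degrees of `azimuth`."""
    i, j = np.triu_indices(len(z), k=1)                     # all station pairs
    d = xy[i] - xy[j]
    lag = np.hypot(d[:, 0], d[:, 1])
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 180.0
    dang = np.abs((ang - azimuth + 90.0) % 180.0 - 90.0)    # angular distance
    gamma = 0.5 * (z[i] - z[j]) ** 2
    edges = np.linspace(0.0, maxlag, nbins + 1)
    out = np.full(nbins, np.nan)
    for b in range(nbins):
        sel = (dang <= tol) & (lag >= edges[b]) & (lag < edges[b + 1])
        if sel.any():
            out[b] = gamma[sel].mean()
    return out

print(np.round(directional_semivariogram(xy, z, azimuth=45.0), 3))
```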
Morelli, J.J.; Hercules, D.M.; Lyons, P.C.; Palmer, C.A.; Fletcher, J.D.
1988-01-01
The variation in relative elemental concentrations among a series of coal macerals belonging to the vitrinite maceral group was determined using laser micro mass spectrometry (LAMMS). Variations in Ba, Cr, Ga, Sr, Ti, and V concentrations among the coals were determined using the LAMMA-1000 instrument. LAMMS analysis is not limited to these elements; their selection illustrates the application of the technique. Ba, Cr, Ga, Sr, Ti, and V have minimal site-to-site variance in the vitrinite macerals of the studied coals as measured by LAMMS. The LAMMS data were compared with bulk elemental data obtained by instrumental neutron activation analysis (INAA) and DC arc optical emission spectroscopy (DCAS) in order to determine the reliability of the LAMMS data. The complex nature of the ionization phenomena in LAMMS and the lack of standards characterized on a microscale make obtaining quantitative elemental data within the ionization microvolume difficult; however, we demonstrate that the relative variation of an element among vitrinites from different coal beds in the eastern United States can be observed using LAMMS in a "bulk" mode by accumulating signal intensities over several microareas of each vitrinite. Our studies indicate gross changes (greater than a factor of 2 to 5, depending on the element) can be monitored when the elemental concentration is significantly above the detection limit. "Bulk" mode analysis was conducted to evaluate the accuracy of future elemental LAMMS microanalyses. The primary advantage of LAMMS is its inherent spatial resolution, ~20 μm for coal. Two different vitrite bands in the Lower Bakerstown coal bed (CLB-1) were analyzed. The analysis did not establish any certain concentration differences in Ba, Cr, Ga, Sr, Ti, and V between the two bands. © 1988 Springer-Verlag.
Teferi, Ermias; Bewket, Woldeamlak; Simane, Belay
2016-02-01
Understanding changes in soil quality resulting from land use and land management changes is important to design sustainable land management plans or interventions. This study evaluated the influence of land use and land cover (LULC) on key soil quality indicators (SQIs) within a small watershed (Jedeb) in the Blue Nile Basin of Ethiopia. Factor analysis based on principal component analysis (PCA) was used to determine different SQIs. Surface (0-15 cm) soil samples with four replications were collected from five main LULC types in the watershed (i.e., natural woody vegetation, plantation forest, grassland, cultivated land, and barren land) and at two elevation classes (upland and midland), and 13 soil properties were measured for each replicate. A factorial (2 × 5) multivariate analysis of variance (MANOVA) showed that LULC and altitude together significantly affected organic matter (OM) levels. However, LULC alone significantly affected bulk density and altitude alone significantly affected bulk density, soil acidity, and silt content. Afforestation of barren land with eucalypt trees can significantly increase the soil OM in the midland part but not in the upland part. Soils under grassland had a significantly higher bulk density than did soils under natural woody vegetation indicating that de-vegetation and conversion to grassland could lead to soil compaction. Thus, the historical LULC change in the Jedeb watershed has resulted in the loss of soil OM and increased soil compaction. The study shows that a land use and management system can be monitored if it degrades or maintains or improves the soil using key soil quality indicators.
Lee, Kyung-Min; Armstrong, Paul R; Thomasson, J Alex; Sui, Ruixiu; Casada, Mark; Herrman, Timothy J
2010-10-27
Tracing grain from the farm to its final processing destination as it moves through multiple grain-handling systems, storage bins, and bulk carriers presents numerous challenges to existing record-keeping systems. This study examines the suitability of coded caplets to trace grain, in particular to evaluate methodology for testing the tracers' ability to withstand the rigors of commercial grain handling and storage systems, as defined by physical properties, using measurement technology commonly applied to assess grain hardness and end-use properties. Three types of tracers to dispense into bulk grains for tracing the grain back to its field of origin were developed, using three food-grade substances [processed sugar, pregelatinized starch, and silicified microcrystalline cellulose (SMCC)] as the major components of the formulations. Owing to the different functionality of the formulations, the manufacturing process conditions varied for each tracer type, resulting in unique variations in surface roughness, weight, dimensions, and physical and spectroscopic properties before and after coating. The two types of coating applied [pregelatinized starch and hydroxypropylmethylcellulose (HPMC)], using an aqueous coating system containing appropriate plasticizers, showed uniform coverage and clear coating. Coating appeared to act as a barrier against moisture penetration, to protect against mechanical damage to the surface of the tracers, and to improve the mechanical strength of the tracers. The results of analysis of variance (ANOVA) tests showed that the type of tracer, coating material, conditioning time, and theoretical weight gain significantly influenced the morphological and physical properties of the tracers. Optimization of these factors needs to be pursued to produce desirable tracers with consistent quality and performance when they flow with bulk grains throughout the grain marketing channels.
NASA Astrophysics Data System (ADS)
Wang, Xiaoxia; Xu, Wenteng; Liu, Yang; Wang, Lei; Sun, Hejun; Wang, Lei; Chen, Songlin
2016-11-01
In recent years, Edwardsiella tarda has become one of the most deadly pathogens of Japanese flounder (Paralichthys olivaceus), causing serious annual losses in commercial production. In contrast to the rapid advances in the aquaculture of P. olivaceus, the study of E. tarda resistance-related markers has lagged behind, hindering the development of a disease-resistant strain. Thus, a marker-trait association analysis was initiated, combining bulked segregant analysis (BSA) and quantitative trait loci (QTL) mapping. Based on 180 microsatellite loci across all chromosomes, 106 individuals from the F1333 family (♀ F0768 × ♂ F0915; nomenclature rule: F + year + family number) were used to detect simple sequence repeats (SSRs) and QTLs associated with E. tarda resistance. After a genomic scan, three markers (Scaffold 404-21589, Scaffold 404-21594 and Scaffold 270-13812) from the same linkage group (LG1) exhibited a significant difference between DNA pooled/bulked from the resistant and susceptible groups (P < 0.001). Therefore, the 106 individuals were genotyped using all the SSR markers in LG1 by single-marker analysis. Two different analytical models were then employed to detect SSR markers with different levels of significance in LG1, identifying 17 and 18 SSR markers, respectively. Each model found three resistance-related QTLs by composite interval mapping (CIM). These six QTLs, designated qE-1 to qE-6, explained 16.0%-89.5% of the phenotypic variance. Two of the QTLs, qE-2 and qE-4, were located in the 66.7 cM region, which was considered a major candidate region for E. tarda resistance. This study will provide valuable data for further investigations of E. tarda resistance genes and facilitate the selective breeding of disease-resistant Japanese flounder in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaya Shankar Tumuluru
2014-03-01
A flat die pellet mill was used to understand the effect of high feedstock moisture content in the range of 28-38% (w.b.), die rotational speeds of 40-60 Hz, and preheating temperatures of 30-110 °C on the pelleting characteristics of corn stover ground through a 4.8 mm screen, using an 8 mm pellet die. The physical properties of the pelletised biomass studied were: (a) pellet moisture content; (b) unit, bulk and tapped density; and (c) durability. Pelletisation experiments were conducted based on a central composite design. Analysis of variance (ANOVA) indicated that feedstock moisture content influenced all of the physical properties at P < 0.001. Pellet moisture content decreased with increasing preheating temperature to about 110 °C and decreasing feedstock moisture content to about 28% (w.b.). Response surface models developed for the quality attributes with respect to the process variables adequately described the process, with coefficient of determination (R²) values of >0.88. The other pellet quality attributes, such as unit, bulk and tapped density, were maximised at a feedstock moisture content of 30-33% (w.b.), die speeds of >50 Hz and preheating temperatures of >90 °C. In the case of durability, a medium moisture content of 33-34% (w.b.), preheating temperatures of >70 °C and higher die speeds of >50 Hz resulted in highly durable pellets. It can be concluded from the present study that feedstock moisture content, followed by preheating temperature and die rotational speed, are the interacting process variables influencing pellet moisture content, unit, bulk and tapped density, and durability.
Metzger, Fabian; Mischek, Daniel; Stoffers, Frédéric
2017-01-01
Here we show that the hydrodynamic radii-dependent entry of blood proteins into cerebrospinal fluid (CSF) can best be modeled with a diffusional system of consecutive interdependent steady states between barrier-restricted molecular flux and bulk flow of CSF. The connected steady state model fits precisely to experimental results and provides the theoretical backbone to calculate the in-vivo hydrodynamic radii of blood-derived proteins as well as individual barrier characteristics. As the experimental reference set we used a previously published large-scale patient cohort of CSF to serum quotient ratios of immunoglobulins in relation to the respective albumin quotients. We related the inter-individual variances of these quotient relationships to the individual CSF flow time and barrier characteristics. We claim that this new concept allows the diagnosis of inflammatory processes with Reibergrams derived from population-based thresholds to be shifted to individualized judgment, thereby improving diagnostic sensitivity. We further use the source-dependent gradient patterns of proteins in CSF as intrinsic tracers for CSF flow characteristics. We assume that the rostrocaudal gradient of blood-derived proteins is a consequence of CSF bulk flow, whereas the slope of the gradient is a consequence of the unidirectional bulk flow and bidirectional pulsatile flow of CSF. Unlike blood-derived proteins, the influence of CSF flow characteristics on brain-derived proteins in CSF has been insufficiently discussed to date. By critically reviewing existing experimental data and by reassessing their conformity to CSF flow assumptions we conclude that the biomarker potential of brain-derived proteins in CSF can be improved by considering individual subproteomic dynamics of the CSF system.
Taipale, Sami J.; Peltomaa, Elina; Hiltunen, Minna; Jones, Roger I.; Hahn, Martin W.; Biasi, Christina; Brett, Michael T.
2015-01-01
Stable isotope mixing models in aquatic ecology require δ13C values for food web end members such as phytoplankton and bacteria, however it is rarely possible to measure these directly. Hence there is a critical need for improved methods for estimating the δ13C ratios of phytoplankton, bacteria and terrestrial detritus from within mixed seston. We determined the δ13C values of lipids, phospholipids and biomarker fatty acids and used these to calculate isotopic differences compared to the whole-cell δ13C values for eight phytoplankton classes, five bacterial taxa, and three types of terrestrial organic matter (two trees and one grass). The lipid content was higher amongst the phytoplankton (9.5±4.0%) than bacteria (7.3±0.8%) or terrestrial matter (3.9±1.7%). Our measurements revealed that the δ13C values of lipids followed phylogenetic classification among phytoplankton (78.2% of variance was explained by class), bacteria and terrestrial matter, and there was a strong correlation between the δ13C values of total lipids, phospholipids and individual fatty acids. Amongst the phytoplankton, the isotopic difference between biomarker fatty acids and bulk biomass averaged -10.7±1.1‰ for Chlorophyceae and Cyanophyceae, and -6.1±1.7‰ for Cryptophyceae, Chrysophyceae and Diatomophyceae. For heterotrophic bacteria and for type I and type II methane-oxidizing bacteria our results showed a -1.3±1.3‰, -8.0±4.4‰, and -3.4±1.4‰ δ13C difference, respectively, between biomarker fatty acids and bulk biomass. For terrestrial matter the isotopic difference averaged -6.6±1.2‰. Based on these results, the δ13C values of total lipids and biomarker fatty acids can be used to determine the δ13C values of bulk phytoplankton, bacteria or terrestrial matter with ± 1.4‰ uncertainty (i.e., the pooled SD of the isotopic difference for all samples). We conclude that when compound-specific stable isotope analyses become more widely available, the determination of δ13C values for selected biomarker fatty acids coupled with established isotopic differences, offers a promising way to determine taxa-specific bulk δ13C values for the phytoplankton, bacteria, and terrestrial detritus embedded within mixed seston. PMID:26208114
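A minimal sketch of applying such class-specific isotopic differences to estimate bulk δ13C from a biomarker fatty acid value; the offset values echo those reported above, the ±1.4‰ pooled SD is carried as the uncertainty, and the function and dictionary names are hypothetical.

```python
# Hypothetical illustration of the correction described above: bulk d13C is
# the biomarker fatty acid d13C minus the class-specific isotopic difference
# (biomarker minus bulk), with the reported +/-1.4 permil uncertainty.
OFFSETS = {                      # biomarker minus bulk, permil
    "chlorophyceae": -10.7,
    "cryptophyceae": -6.1,
    "heterotrophic_bacteria": -1.3,
    "terrestrial": -6.6,
}

def bulk_d13c(fa_d13c: float, source: str, sd: float = 1.4):
    """Return (estimated bulk d13C, uncertainty) for a biomarker value."""
    return fa_d13c - OFFSETS[source], sd

print(bulk_d13c(-42.0, "chlorophyceae"))   # -> (-31.3, 1.4)
```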
El Gezawi, M; Kaisarly, D; Al-Saleh, H; ArRejaie, A; Al-Harbi, F; Kunzelmann, K H
This study investigated the color stability and microhardness of five composites exposed to four beverages with different pH values. Composite discs were produced (n=10); Filtek Z250 (3M ESPE) and Filtek P90 (3M ESPE) were applied in two layers (2 mm, 20 seconds), and Tetric N-Ceram Bulk Fill (TetricBF, Ivoclar Vivadent) and SonicFill (Kerr) were applied in bulk (4 mm) and then light cured (40 seconds, Ortholux-LED, 1600 mW/cm²). The indirect composite Sinfony (3M ESPE) was applied in two layers (2 mm) and cured (Visio system, 3M ESPE). The specimens were polished and tested for color stability; ΔE was calculated using spectrophotometer readings. Vickers microhardness (50 g, dwell time 45 seconds) was assessed on the top and bottom surfaces at baseline, after 40 days of storage, after subsequent repolishing, and after 60 days of immersion in distilled water (pH 7.0), Coca-Cola (pH 2.3), orange juice (pH 3.75), or anise (pH 8.5), with scanning electron microscopy (SEM) examination. The materials had similar ΔE values (40 days, p>0.05), but TetricBF had a significantly greater ΔE than P90 or SonicFill (40 days). The ΔE was less for P90 and TetricBF than for Z250, SonicFill, and Sinfony (60 days). Repolishing and further immersion significantly affected the ΔE (p<0.05), except for P90. All composites had significantly different top vs bottom baseline microhardnesses. This difference was insignificant for the Z250/water and P90/orange juice groups (40 days) and the Sinfony groups (40 and 60 days). Immersion produced variable time-dependent deterioration of microhardness in all groups. Multivariate repeated-measures analysis of variance with post hoc Bonferroni tests was used to compare the results. ΔE and microhardness changes were significantly inversely correlated at 40 days, but this relationship was insignificant at 60 days (Pearson test). SEM showed degradation at 40 days that worsened by 60 days. Bulk-fill composites differ from other composites regarding color stability and top-to-bottom microhardness changes. P90 showed better resistance to surface degradation. In conclusion, bulk-fill composites are not promising alternatives to incremental and indirect composites regarding biodegradation.
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This results in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher than those from the BEAM simulations, which include scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.
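A minimal sketch of the z-score-based decomposition step described above, for a made-up two-material, three-bin setup (the attenuation coefficients, open-beam counts, and thicknesses are all invented): thicknesses are found by minimizing the summed squared z-scores between measured and expected bin counts, with Poisson variance approximated by the expected count.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration: two-material decomposition from binned counts.
MU = np.array([[0.35, 0.20, 0.12],    # material 1: mu per cm in 3 energy bins
               [0.60, 0.25, 0.10]])   # material 2
N0 = np.array([8e4, 1.2e5, 9e4])      # open-beam counts per bin

def expected(t):
    """Beer-Lambert expected counts per energy bin for thicknesses t (cm)."""
    return N0 * np.exp(-(MU.T @ t))

def zscore_sum(t, measured):
    exp = expected(t)
    return np.sum((measured - exp) ** 2 / exp)   # Poisson variance ~ mean

rng = np.random.default_rng(5)
true_t = np.array([1.2, 0.8])                    # cm
measured = rng.poisson(expected(true_t)).astype(float)
fit = minimize(zscore_sum, x0=[1.0, 1.0], args=(measured,), method="Nelder-Mead")
print(fit.x)   # recovered thicknesses, cm
```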
Ryan, S E; Blasi, D A; Anglin, C O; Bryant, A M; Rickard, B A; Anderson, M P; Fike, K E
2010-07-01
Use of electronic animal identification technologies by livestock managers is increasing, but performance of these technologies can be variable when used in livestock production environments. This study was conducted to determine whether 1) read distance of low-frequency radio frequency identification (RFID) transceivers is affected by the type of transponder being interrogated; 2) read distance variation of low-frequency RFID transceivers is affected by transceiver manufacturer; and 3) read distance of various transponder-transceiver manufacturer combinations meets the 2004 United States Animal Identification Plan (USAIP) bovine standards subcommittee minimum read distance recommendation of 60 cm. Twenty-four transceivers (n = 5 transceivers per manufacturer for Allflex, Boontech, Farnam, and Osborne; n = 4 transceivers for Destron Fearing) were tested with 60 transponders [n = 10 transponders per type for Allflex full duplex B (FDX-B), Allflex half duplex (HDX), Destron Fearing FDX-B, Farnam FDX-B, and Y-Tex FDX-B; n = 6 for Temple FDX-B (EM Microelectronic chip); and n = 4 for Temple FDX-B (HiTag chip)] presented in the parallel orientation. All transceivers and transponders met International Organization for Standardization 11784 and 11785 standards. Transponders represented both half-duplex and full-duplex low-frequency air interface technologies. Use of a mechanical trolley device enabled the transponders to be presented to the center of each transceiver at a constant rate, thereby reducing human error. Transponder and transceiver manufacturer interacted (P < 0.0001) to affect read distance, indicating that transceiver performance was greatly dependent upon the transponder type being interrogated. Twenty-eight of the 30 combinations of transceivers and transponders evaluated met the minimum recommended USAIP read distance. Mean read distances across the 30 combinations ranged from 45.1 to 129.4 cm. Transceiver manufacturer and transponder type interacted to affect read distance variance (P < 0.05). Maximum read distance performance of low-frequency RFID technologies with low variance can be achieved by selecting specific transponder-transceiver combinations.
Small-scale Pressure-balanced Structures Driven by Mirror-mode Waves in the Solar Wind
NASA Astrophysics Data System (ADS)
Yao, Shuo; He, J.-S.; Tu, C.-Y.; Wang, L.-H.; Marsch, E.
2013-10-01
Recently, small-scale pressure-balanced structures (PBSs) have been studied with regard to their dependence on the direction of the local mean magnetic field B_0. The present work continues these studies by investigating the compressive wave mode forming small PBSs, here for B_0 quasi-perpendicular to the x-axis of Geocentric Solar Ecliptic coordinates (GSE-x). All the data used were measured by WIND in the quiet solar wind. From the distribution of PBSs on the plane determined by the temporal scale and the angle θ_xB between the GSE-x axis and B_0, we notice that at θ_xB = 115° the PBSs appear at temporal scales ranging from 60 s to 700 s. In the corresponding temporal segment, the correlations between the plasma thermal pressure P_th and the magnetic pressure P_B, as well as that between the proton density N_p and the magnetic field strength B, are investigated. In addition, we use the proton velocity distribution functions to calculate the proton temperatures T_⊥ and T_∥. Minimum Variance Analysis is applied to find the magnetic field minimum variance vector B_N. We also study the time variation of the cross-helicity σ_c and the compressibility C_p and compare these with values from numerical predictions for the mirror mode. In this way, we finally identify a short segment that has T_⊥ > T_∥, proton β ≈ 1, both pairs P_th-P_B and N_p-B showing anti-correlation, and σ_c ≈ 0 with C_p > 0. Although the examination of σ_c and C_p is not conclusive, it provides helpful additional information for the wave mode identification. Additionally, B_N is found to be highly oblique to B_0. Thus, this work suggests that mirror-mode waves are a candidate mechanism for forming small-scale PBSs in the quiet solar wind.
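The Minimum Variance Analysis step used above is standard and easy to sketch: the eigenvector of the magnetic variance (covariance) matrix belonging to the smallest eigenvalue estimates the minimum variance direction B_N. A minimal illustration on synthetic data:

```python
# Minimal sketch of Minimum Variance Analysis (MVA) for a magnetic field
# time series B (N x 3): the eigenvector of the covariance matrix with the
# smallest eigenvalue estimates the minimum variance direction B_N.
import numpy as np

def minimum_variance_direction(B):
    M = np.cov(B, rowvar=False)          # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    return evecs[:, 0], evals            # min-variance eigenvector first

# Synthetic example: fluctuations mostly in x-y, little along z.
rng = np.random.default_rng(0)
B = rng.normal(size=(1000, 3)) * np.array([1.0, 0.7, 0.1])
b_n, evals = minimum_variance_direction(B)
print("B_N ~", np.round(b_n, 2), "eigenvalues:", np.round(evals, 2))
```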
Li, Jinyang; Gittleson, Forrest S.; Liu, Yanhui; ...
2017-06-30
To bypass the limitations of bulk metallic glass fabrication, we synthesized thin film metallic glasses to study the corrosion characteristics of a wide composition range (in atomic %), Mg(35.9-63%)Ca(4.1-21%)Zn(17.9-58.3%), in simulated body fluid. We highlight a clear relationship between Zn content and corrosion current, with intermediate-Zn metallic glasses exhibiting the minimum corrosion. In addition, we found that higher Zn content leads to poorer in vitro cell viability. Finally, these results showcase the benefit of evaluating a larger alloy compositional space to probe the limits of corrosion resistance and to prescreen for biocompatible applications.
Requirements for high-efficiency solar cells
NASA Technical Reports Server (NTRS)
Sah, C. T.
1986-01-01
Minimum recombination and low injection level are essential for high efficiency. Twenty percent AM1 efficiency requires a dark recombination current density of 2 × 10^-13 A/cm^2 and a recombination center density of less than 10^10 /cm^3. Recombination mechanisms at thirteen locations in a conventional single crystalline silicon cell design are reviewed. Three additional recombination locations are described at grain boundaries in polycrystalline cells. Material perfection and fabrication process optimization requirements for high efficiency are outlined. Innovative device designs to reduce recombination in the bulk and interfaces of single crystalline cells and in the grain boundaries of polycrystalline cells are reviewed.
NASA Technical Reports Server (NTRS)
Chembo, Yanne K.; Baumgartel, Lukas; Grudinin, Ivan; Strekalov, Dmitry; Thompson, Robert; Yu, Nan
2012-01-01
Whispering gallery mode resonators are attracting increasing interest as promising frequency reference cavities. Unlike commonly used Fabry-Perot cavities, however, they are filled with a bulk medium whose properties have a significant impact on the stability of their resonance frequencies; in this context, that sensitivity has to be reduced to a minimum. On the other hand, a small monolithic resonator provides an opportunity for better stability against vibration and acceleration. This feature is essential when the cavity operates in a non-laboratory environment. In this paper, we report a case study for a crystalline resonator and discuss a pathway towards the inhibition of vibration- and acceleration-induced frequency fluctuations.
STS studies of the surface of Bi2Se3
NASA Astrophysics Data System (ADS)
Romanowich, Megan; Lee, Mal-Soon; Mahanti, S. D.; Tessmer, Stuart; Chung, Duck Young; Song, Jung-Hwan; Kanatzidis, Mercouri
2012-02-01
We apply scanning tunneling spectroscopy to characterize the surface of the topological insulator Bi2Se3. Spectroscopy reveals that the minimum in the local density of states (LDOS) does not actually vanish in the region where Dirac cone states exist. We demonstrate with density functional theory calculations that this can be understood in terms of an asymmetric addition to the LDOS associated with a contribution from the bulk valence band that overlaps in energy with the Dirac point. We will discuss the origin of the fluctuations in the LDOS seen in the experiment near 0.2 eV above the Dirac point, which are associated with tunneling into the lowest conduction band states.
A Study of the Southern Ocean: Mean State, Eddy Genesis & Demise, and Energy Pathways
NASA Astrophysics Data System (ADS)
Zajaczkovski, Uriel
The Southern Ocean (SO), due to its deep penetrating jets and eddies, is well-suited for studies that combine surface and sub-surface data. This thesis explores the use of Argo profiles and sea surface height (SSH) altimeter data from a statistical point of view. A linear regression analysis of SSH and hydrographic data reveals that the altimeter can explain, on average, about 35% of the variance contained in the hydrographic fields and more than 95% if estimated locally. Correlation maxima are found at mid-depth, where dynamics are dominated by geostrophy. Near the surface, diabatic processes are significant, and the variance explained by the altimeter is lower. Since SSH variability is associated with eddies, the regression of SSH with temperature (T) and salinity (S) shows the relative importance of S vs. T in controlling density anomalies. The AAIW salinity minimum separates two distinct regions: above the minimum, density changes are dominated by T, while below the minimum S dominates over T. The regression analysis provides a method to remove eddy variability, effectively reducing the variance of the hydrographic fields. We use satellite altimetry and output from an assimilating numerical model to show that the SO has two distinct eddy motion regimes. North and south of the Antarctic Circumpolar Current (ACC), eddies propagate westward with a mean meridional drift directed poleward for cyclonic eddies (CEs) and equatorward for anticyclonic eddies (AEs). Eddies formed within the boundaries of the ACC have an effective eastward propagation with respect to the mean deep ACC flow, and the mean meridional drift is reversed, with warm-core AEs propagating poleward and cold-core CEs propagating equatorward. This circulation pattern drives downgradient eddy heat transport, which could potentially transport a significant fraction (24 to 60 × 10^13 W) of the net poleward ACC eddy heat flux. We show that the generation of relatively large amplitude eddies is not a ubiquitous feature of the SO but rather a phenomenon that is constrained to five isolated, well-defined "hotspots". These hotspots are located downstream of major topographic features, with their boundaries closely following f/H contours. Eddies generated in these locations show no evidence of a bias in polarity and decay within the boundaries of the generation area. Eddies tend to disperse along f/H contours rather than following lines of latitude. We found enhanced values of both buoyancy production (BP) and shear production (SP) inside the hotspots, with BP one order of magnitude larger than SP. This is consistent with baroclinic instability being the main mechanism of eddy generation. The mean potential density field estimated from Argo floats shows that inside the hotspots, isopycnal slopes are steep, indicating availability of potential energy. The hotspots identified in this thesis overlap with previously identified regions of standing meanders. We provide evidence that hotspot locations can be explained by the combined effect of topography, standing meanders that enhance baroclinic instability, and availability of potential energy to generate eddies via baroclinic instabilities.
Multimode Jahn-Teller effect in bulk systems: A case of the NV0 center in diamond
Zhang, Jianhua; Wang, Cai -Zhuang; Zhu, Zizhong; ...
2018-04-15
Here, the multimode Jahn-Teller (JT) effect in a bulk system of a neutral nitrogen-vacancy (NV0) center in diamond is investigated via first-principles density-functional-theory calculations and the intrinsic distortion path (IDP) method. The adiabatic potential energy surface of the electronic ground state of the NV0 center is calculated based on the local spin-density approximation. Our calculations confirm the presence of the dynamic Jahn-Teller effect in the ground 2E state of the NV0 center. Within the harmonic approximation, the IDP method provides the reactive path of JT distortion from unstable high-symmetry geometry to stable low-symmetry energy minimum geometry, and it describes the active normal modes participating in the distortion. We find that there is more than one vibrational mode contributing to the distortion, and their contributions change along the IDP. Several vibrational modes with large contributions to JT distortion, especially those modes close to 44 meV, are clearly observed as the phonon sideband in photoluminescence spectra in a series of experiments, indicating that the dynamic Jahn-Teller effect plays an important role in the optical transition of the NV0 center.
Multimode Jahn-Teller effect in bulk systems: A case of the N V0 center in diamond
NASA Astrophysics Data System (ADS)
Zhang, Jianhua; Wang, Cai-Zhuang; Zhu, Zizhong; Liu, Qing Huo; Ho, Kai-Ming
2018-04-01
The multimode Jahn-Teller (JT) effect in a bulk system of a neutral nitrogen-vacancy (N V0 ) center in diamond is investigated via first-principles density-functional-theory calculations and the intrinsic distortion path (IDP) method. The adiabatic potential energy surface of the electronic ground state of the N V0 center is calculated based on the local spin-density approximation. Our calculations confirm the presence of the dynamic Jahn-Teller effect in the ground 2E state of the N V0 center. Within the harmonic approximation, the IDP method provides the reactive path of JT distortion from unstable high-symmetry geometry to stable low-symmetry energy minimum geometry, and it describes the active normal modes participating in the distortion. We find that there is more than one vibrational mode contributing to the distortion, and their contributions change along the IDP. Several vibrational modes with large contributions to JT distortion, especially those modes close to 44 meV, are clearly observed as the phonon sideband in photoluminescence spectra in a series of experiments, indicating that the dynamic Jahn-Teller effect plays an important role in the optical transition of the N V0 center.
Production of Two-Dimensional Nanomaterials via Liquid-Based Direct Exfoliation.
Niu, Liyong; Coleman, Jonathan N; Zhang, Hua; Shin, Hyeonsuk; Chhowalla, Manish; Zheng, Zijian
2016-01-20
Tremendous efforts have been devoted to the synthesis and application of two-dimensional (2D) nanomaterials due to their extraordinary and unique properties in electronics, photonics, catalysis, etc., upon exfoliation from their bulk counterparts. One of the greatest challenges that scientists are confronted with is how to produce large quantities of 2D nanomaterials of high quality in a commercially viable way. This review summarizes the state-of-the-art of the production of 2D nanomaterials using liquid-based direct exfoliation (LBE), a very promising and highly scalable wet approach for synthesizing high quality 2D nanomaterials under mild conditions. LBE is a collection of methods that directly exfoliate bulk layered materials into thin flakes of 2D nanomaterials in liquid media without any, or with a minimum degree of, chemical reactions, so as to maintain the high crystallinity of the 2D nanomaterials. In the following, different synthetic methods are categorized, and material characteristics including dispersion concentration, flake thickness and flake size, as well as some applications, are discussed in detail. At the end, we provide an overview of the advantages and disadvantages of these LBE methods and propose future perspectives. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Basic and Morphological Properties of Bukit Goh Bauxite
NASA Astrophysics Data System (ADS)
Hasan, Muzamir; Nor Azmi, Ahmad Amirul Faez Ahmad; Tam, Weng Long; Phang, Biao Yu; Azizul Moqsud, M.
2018-03-01
An investigation conducted by the International Maritime Organization (IMO) concluded that evidence from the loss of the Bulk Jupiter, which was carrying bauxite from Kuantan, suggests that liquefaction led to the loss of stability. This research analysed Bukit Goh bauxite and compared it with the International Maritime Solid Bulk Cargoes (IMSBC Code) standard. To analyse these characteristics of the bauxite, four samples were selected at Bukit Goh, Kuantan: two of the samples from the Bukit Goh mine and two from the stockpiles. They were tested to identify the basic and morphological properties of the bauxite by referring to GEOSPEC 3: Model Specification for Soil Testing, covering particle size distribution, moisture content and specific gravity as well as morphological properties. Laboratory tests included the hydrometer test, small pycnometer test, dry sieve test and field emission scanning electron microscopy (FESEM). The results show that the average moisture content of raw Bukit Goh bauxite is 20.64%, which exceeds the recommended maximum of 10%. The average fines content of the raw bauxite is 37.75%, whereas it should not be greater than 30% per the IMSBC standard. Thus, the bauxite from the Bukit Goh mine does not meet the minimum requirements of the IMSBC standard and needs to undergo a beneficiation process for better quality and safety.
Gauging the Helium Abundance of the Galactic Bulge RR Lyrae Stars
NASA Astrophysics Data System (ADS)
Marconi, Marcella; Minniti, Dante
2018-02-01
We report the first estimate of the He abundance of the population of RR Lyrae stars in the Galactic bulge. This is done by comparing the recent observational data with the latest models. We use the large samples of ab-type RR Lyrae stars found by OGLE IV in the inner bulge and by the VVV survey in the outer bulge. We present the result from the new models computed by Marconi et al., showing that the minimum period for fundamental RR Lyrae pulsators depends on the He content. By comparing these models with the observations in a period versus effective temperature plane, we find that the bulk of the bulge ab-type RR Lyrae are consistent with primordial He abundance Y = 0.245, ruling out a significant He-enriched population. This work demonstrates that the He content of the bulge RR Lyrae is different from that of the bulk of the bulge population as traced by the red clump giants that appear to be significantly more He-rich. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 179.B-2002 and 298.D-5048.
High pressure die casting of Fe-based metallic glass.
Ramasamy, Parthiban; Szabo, Attila; Borzel, Stefan; Eckert, Jürgen; Stoica, Mihai; Bárdos, András
2016-10-11
Soft ferromagnetic Fe-based bulk metallic glass key-shaped specimens with a maximum and minimum width of 25.4 and 5 mm, respectively, were successfully produced using a high pressure die casting (HPDC) method. The influence of die material, alloy temperature and flow rate on the microstructure, thermal stability and soft ferromagnetic properties has been studied. The results suggest that a steel die in which the molten metal flows at low rate and high temperature can be used to produce completely glassy samples. This can be attributed to the laminar filling of the mold and to a lower heat transfer coefficient, which avoids the skin effect in the steel mold. In addition, magnetic measurements reveal that the amorphous structure of the material is maintained throughout the key-shaped samples. Although it is difficult to control the flow and cooling rate of the molten metal in the corners of the key due to different cross sections, this can be overcome by proper tool geometry. The present results confirm that HPDC is a suitable method for the casting of Fe-based bulk glassy alloys even with complex geometries for a broad range of applications.
High pressure die casting of Fe-based metallic glass
NASA Astrophysics Data System (ADS)
Ramasamy, Parthiban; Szabo, Attila; Borzel, Stefan; Eckert, Jürgen; Stoica, Mihai; Bárdos, András
2016-10-01
Soft ferromagnetic Fe-based bulk metallic glass key-shaped specimens with a maximum and minimum width of 25.4 and 5 mm, respectively, were successfully produced using a high pressure die casting (HPDC) method. The influence of die material, alloy temperature and flow rate on the microstructure, thermal stability and soft ferromagnetic properties has been studied. The results suggest that a steel die in which the molten metal flows at low rate and high temperature can be used to produce completely glassy samples. This can be attributed to the laminar filling of the mold and to a lower heat transfer coefficient, which avoids the skin effect in the steel mold. In addition, magnetic measurements reveal that the amorphous structure of the material is maintained throughout the key-shaped samples. Although it is difficult to control the flow and cooling rate of the molten metal in the corners of the key due to different cross sections, this can be overcome by proper tool geometry. The present results confirm that HPDC is a suitable method for the casting of Fe-based bulk glassy alloys even with complex geometries for a broad range of applications.
NASA Astrophysics Data System (ADS)
Pathak, Arup Kumar
2014-12-01
An explicit analytical expression has been obtained for the vertical detachment energy (VDE) that can be used to calculate it over a wide range of cluster sizes (both stable and unstable regions), including the bulk, from the knowledge of the VDE for a finite number of stable clusters (n = 16-23). The calculated VDE for the bulk is found to be in very good agreement (within 1%) with the available experimental result, and the domain of instability lies between n = 0 and n = 15 for the hydrated clusters PO4^3-·nH2O. The minimum number (n0) of water molecules needed to stabilise the phosphate anion is 16. We are able to explain the origin of the solvent-berg model and anomalous conductivity from the knowledge of the first stable cluster. We have also provided a scheme to calculate the radius of the solvent-berg for the phosphate anion. The calculated conductivity using the Stokes-Einstein relation and the radius of the solvent-berg is found to be in very good agreement (within 4%) with the available experimental results.
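The abstract does not give the explicit analytical expression, so the following is only a hedged sketch of this kind of extrapolation: a common assumption for solvated clusters is that the VDE varies linearly in n^(-1/3), and all numbers below are placeholders rather than the paper's data.

```python
# Hedged sketch of a cluster-size extrapolation of VDE to the bulk limit.
# Assumed form (not from the paper): VDE(n) ~ VDE(bulk) - a * n**(-1/3).
import numpy as np

n = np.arange(16, 24)                    # stable cluster sizes n = 16-23
vde = 7.0 - 12.0 * n ** (-1.0 / 3.0)     # synthetic VDE values (eV)

slope, intercept = np.polyfit(n ** (-1.0 / 3.0), vde, 1)
print("extrapolated bulk VDE (eV):", round(intercept, 3))  # n -> infinity
```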
Material Parameters for Creep Rupture of Austenitic Stainless Steel Foils
NASA Astrophysics Data System (ADS)
Osman, H.; Borhana, A.; Tamin, M. N.
2014-08-01
Creep rupture properties of austenitic stainless steel foil, 347SS, used in compact recuperators have been evaluated at 700 °C in the stress range of 54-221 MPa to establish the baseline behavior for its extended use. Creep curves of the foil show that the primary creep stage is brief and creep life is dominated by tertiary creep deformation, with rupture lives in the range of 10-2000 h. Results are compared with properties of bulk specimens tested at 98 and 162 MPa. Thin foil 347SS specimens were found to have higher creep rates and higher rupture ductility than their bulk specimen counterparts. A power-law relationship was obtained between the minimum creep rate and the applied stress, with a stress exponent value of n = 5.7. The value of the stress exponent is indicative of a rate-controlling deformation mechanism associated with dislocation creep. Nucleation of voids mainly occurred at second-phase particles (chromium-rich M23C6 carbides) present in the metal matrix, by decohesion of the particle-matrix interface. The improvement in strength is attributed to the precipitation of fine niobium carbides in the matrix that act as obstacles to the movement of dislocations.
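The stress exponent in such a Norton-type power law, eps_dot = A * sigma^n, is conventionally extracted by log-log regression; a minimal sketch with synthetic rates consistent with the reported n = 5.7 (the prefactor is an arbitrary assumption):

```python
# Sketch: estimating the power-law stress exponent n from minimum creep
# rates, eps_dot = A * sigma**n, via log-log linear regression.
import numpy as np

sigma = np.array([54.0, 98.0, 162.0, 221.0])   # applied stress (MPa)
eps_dot = 1e-14 * sigma ** 5.7                 # assumed minimum rates (1/h)

n_fit, logA = np.polyfit(np.log(sigma), np.log(eps_dot), 1)
print("stress exponent n ~", round(n_fit, 2))
```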
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate the same (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
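A minimal sketch of the two estimators for a Gaussian predictive distribution, using the closed-form CRPS of a normal distribution; the linear link functions and the synthetic data are illustrative assumptions, not the study's setup:

```python
# Sketch comparing maximum likelihood and minimum CRPS estimation for a
# Gaussian non-homogeneous regression: mu = a + b*x, sigma = exp(c + d*x).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
x = rng.normal(size=500)                       # e.g. an ensemble statistic
y = 1.0 + 0.8 * x + rng.normal(scale=np.exp(0.2 + 0.3 * x))

def unpack(p):
    return p[0] + p[1] * x, np.exp(p[2] + p[3] * x)

def neg_loglik(p):
    mu, sigma = unpack(p)
    return -np.sum(norm.logpdf(y, mu, sigma))

def mean_crps(p):
    # Closed-form CRPS of N(mu, sigma^2) evaluated at y.
    mu, sigma = unpack(p)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))
    return np.mean(crps)

p0 = np.zeros(4)
print("ML   coefficients:", np.round(minimize(neg_loglik, p0).x, 2))
print("CRPS coefficients:", np.round(minimize(mean_crps, p0).x, 2))
```

With a correct distributional assumption, as in this synthetic setup, the two sets of coefficients come out close, mirroring the study's first finding.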
Electron Heat Flux in Pressure Balance Structures at Ulysses
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes and to be related to network activity such as magnetic reconnection in the photosphere. We investigated the magnetic structures of the PBSs, applying a minimum variance analysis to Ulysses magnetometer data. At the 2001 AGU Spring Meeting, we reported that PBSs have structures like current sheets or plasmoids, and suggested that they are associated with network activity at the base of polar plumes. In this paper, we have analyzed high-energy electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists and to confirm the conclusions more precisely. As a result, although most events show a typical flux directed away from the Sun, we have obtained evidence that some PBSs show bi-directional electron flux and others show an isotropic distribution of electron pitch angles. The evidence shows that plasmoids are flowing away from the Sun, changing their flow direction dynamically in a way not caused by Alfven waves. From this, we have concluded that PBSs are generated by network activity at the base of polar plumes and that their magnetic structures are current sheets or plasmoids.
Nagarajappa, Ramesh; Batra, Mehak; Sharda, Archana J; Asawa, Kailash; Sanadhya, Sudhanshu; Daryani, Hemasha; Ramesh, Gayathri
2015-01-01
To assess and compare the antimicrobial potential and determine the minimum inhibitory concentration (MIC) of Jasminum grandiflorum and Hibiscus rosa-sinensis extracts as potential anti-pathogenic agents in dental caries. Aqueous and ethanol (cold and hot) extracts prepared from leaves of Jasminum grandiflorum and Hibiscus rosa-sinensis were screened for in vitro antimicrobial activity against Streptococcus mutans and Lactobacillus acidophilus using the agar well diffusion method. The lowest inhibitory concentration of each extract, taken as the minimum inhibitory concentration (MIC), was determined for both test organisms. Statistical analysis was performed with one-way analysis of variance (ANOVA). At lower concentrations, hot ethanol Jasminum grandiflorum (10 μg/ml) and Hibiscus rosa-sinensis (25 μg/ml) extracts were found to have statistically significant (P≤0.05) antimicrobial activity against S. mutans and L. acidophilus, with MIC values of 6.25 μg/ml and 25 μg/ml, respectively. A proportional increase in antimicrobial activity (zone of inhibition) was observed with increasing extract concentration. Both extracts were found to be antimicrobially active and to contain compounds with therapeutic potential. Nevertheless, clinical trials on the effect of these plants are essential before advocating large-scale therapy.
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy, using nonpolynomial expansions of the estimation error, as a new performance criterion to improve on a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher order statistical information, and its minimization provides improved convergence, performance and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
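The negentropy criterion itself is not specified in the abstract; as a hedged point of reference, the MMSE baseline that the NEGMIN equalizer is compared against can be sketched with a standard LMS adaptive linear equalizer (the channel, step size, and tap count below are assumptions):

```python
# Sketch of the MMSE baseline (LMS adaptive linear equalizer) that the
# NEGMIN method is benchmarked against; not the negentropy update itself.
import numpy as np

rng = np.random.default_rng(7)
symbols = rng.choice([-1.0, 1.0], size=5000)          # BPSK training symbols
channel = np.array([0.3, 1.0, 0.3])                   # assumed ISI channel
received = np.convolve(symbols, channel, mode="same")
received += rng.normal(scale=0.1, size=received.size) # additive noise

L, mu_step = 11, 0.01                                 # taps, step size
w = np.zeros(L)
delay = L // 2
for k in range(L, received.size):
    u = received[k - L:k][::-1]                       # tap input vector
    e = symbols[k - delay] - w @ u                    # training error
    w += mu_step * e * u                              # LMS (MMSE) update
print("final tap weights:", np.round(w, 2))
```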
NASA Astrophysics Data System (ADS)
Schaperow, J.; Cooper, M. G.; Cooley, S. W.; Alam, S.; Smith, L. C.; Lettenmaier, D. P.
2017-12-01
As climate regimes shift, streamflows and our ability to predict them will change as well. The elasticity of summer minimum streamflow is estimated for 138 unimpaired headwater river basins across the maritime western US mountains to better understand how climatologic variables and geologic characteristics interact to determine the response of summer low flows to winter precipitation (PPT), spring snow water equivalent (SWE), and summertime potential evapotranspiration (PET). Elasticities are calculated using log-log linear regression, and linear reservoir storage coefficients are used to represent basin geology. Storage coefficients are estimated using baseflow recession analysis. On average, SWE, PET, and PPT explain about one-third of the summertime low-flow variance. Snow-dominated basins with long timescales of baseflow recession are least sensitive to changes in SWE, PPT, and PET, while rainfall-dominated, faster-draining basins are most sensitive. There are also implications for the predictability of summer low flows: the R² between streamflow and SWE drops from 0.62 to 0.47 from snow-dominated to rain-dominated basins, while there is no corresponding increase in R² between streamflow and PPT.
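A minimal sketch of the elasticity estimate via log-log regression, with synthetic basin data standing in for the observed records (the assumed exponents and record length are illustrative):

```python
# Sketch: low-flow elasticities as coefficients of a log-log regression
# log(Qmin) ~ log(SWE) + log(PPT) + log(PET), on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 40                                            # years of record (assumed)
swe, ppt, pet = [np.exp(rng.normal(size=n)) for _ in range(3)]
qmin = swe**0.5 * ppt**0.3 * pet**-0.4 * np.exp(rng.normal(scale=0.2, size=n))

X = np.column_stack([np.ones(n), np.log(swe), np.log(ppt), np.log(pet)])
beta, *_ = np.linalg.lstsq(X, np.log(qmin), rcond=None)
print("elasticities (SWE, PPT, PET):", np.round(beta[1:], 2))
```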
Evaluation of an active humidification system for inspired gas.
Roux, Nicolás G; Plotnikow, Gustavo A; Villalba, Darío S; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto
2015-03-01
The effectiveness of active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized, spontaneously breathing patients. Measurements were quantified at three temperature (T°) levels of the AHS (level I, low; level II, middle; level III, high) and at different flow levels (20 to 60 L/min). Statistical analysis of repeated measurements was performed using analysis of variance, and significance was set at P<0.05. While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows used, the middle temperature setting (level II) only conditioned gas at flows of 20 and 30 L/min. Finally, at the highest temperature setting (level III), every flow reached the recommended minimum absolute humidity (AH) of 30 mg/L. According to our results, to obtain appropriate relative humidity, AH and T° of gas, one should have a device that maintains the water T° at a minimum of 53 °C for flows between 20 and 30 L/min, or at a T° of 61 °C at any flow rate.
Reexamining the minimum viable population concept for long-lived species.
Shoemaker, Kevin T; Breisch, Alvin R; Jaycox, Jesse W; Gibbs, James P
2013-06-01
For decades conservation biologists have proposed general rules of thumb for minimum viable population size (MVP); typically, they range from hundreds to thousands of individuals. These rules have shifted conservation resources away from small and fragmented populations. We examined whether iteroparous, long-lived species might constitute an exception to general MVP guidelines. On the basis of results from a 10-year capture-recapture study in eastern New York (U.S.A.), we developed a comprehensive demographic model for the globally threatened bog turtle (Glyptemys muhlenbergii), which was designated as endangered by the IUCN in 2011. We assessed population viability across a wide range of initial abundances and carrying capacities. Not accounting for inbreeding, our results suggest that bog turtle colonies with as few as 15 breeding females have a >90% probability of persisting for >100 years, provided vital rates and environmental variance remain at currently estimated levels. On the basis of our results, we suggest that MVP thresholds may be 1-2 orders of magnitude too high for many long-lived organisms. Consequently, protection of small and fragmented populations may constitute a viable conservation option for such species, especially in a regional or metapopulation context. © 2013 Society for Conservation Biology.
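A hedged sketch of the kind of stochastic projection underlying such viability estimates; the growth rate, environmental variance, carrying capacity, and quasi-extinction threshold below are placeholders, not the study's fitted bog turtle parameters:

```python
# Hedged sketch of a persistence-probability estimate for a small
# population under environmental stochasticity (placeholder vital rates).
import numpy as np

rng = np.random.default_rng(11)

def persists(n0=15, years=100, k=50, mean_growth=1.02, sd=0.10):
    n = n0
    for _ in range(years):
        n = min(k, n * mean_growth * rng.lognormal(0.0, sd))
        if n < 2:                          # quasi-extinction threshold
            return False
    return True

trials = 2000
p = np.mean([persists() for _ in range(trials)])
print("estimated 100-yr persistence probability:", round(p, 3))
```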
Density and lithospheric structure at Tyrrhena Patera, Mars, from gravity and topography data
NASA Astrophysics Data System (ADS)
Grott, M.; Wieczorek, M. A.
2012-09-01
The Tyrrhena Patera highland volcano, Mars, is associated with a relatively well localized gravity anomaly, and we have carried out a localized admittance analysis in the region to constrain the density of the volcanic load, the load thickness, and the elastic thickness at the time of load emplacement. The employed admittance model considers loading of an initially spherical surface, and surface as well as subsurface loading is taken into account. Our results indicate that the gravity and topography data available at Tyrrhena Patera are consistent with the absence of subsurface loading, but the presence of a small subsurface load cannot be ruled out. We obtain minimum load densities of 2960 kg m^-3, minimum load thicknesses of 5 km, and minimum load volumes of 0.6 × 10^6 km^3. Photogeological evidence suggests that pyroclastic deposits make up at most 30% of this volume, such that the bulk of Tyrrhena Patera is likely composed of competent basalt. Best-fitting model parameters are a load density of 3343 kg m^-3, a load thickness of 10.8 km, and a load volume of 1.7 × 10^6 km^3. These relatively large load densities indicate that lava compositions are comparable to those at other martian volcanoes, and densities are comparable to those of the martian meteorites. The elastic thickness in the region is constrained to be smaller than 27.5 km at the time of loading, indicating surface heat flows in excess of 24 mW m^-2.
Electrical resistivity of mechanically stabilized earth wall backfill
NASA Astrophysics Data System (ADS)
Snapp, Michael; Tucker-Kulesza, Stacey; Koehn, Weston
2017-06-01
Mechanically stabilized earth (MSE) retaining walls utilized in transportation projects are typically backfilled with coarse aggregate. One of the current testing procedures to select backfill material for construction of MSE walls is the American Association of State Highway and Transportation Officials standard T 288: "Standard Method of Test for Determining Minimum Laboratory Soil Resistivity." T 288 is designed to test a soil sample's electrical resistivity, which correlates to its corrosive potential. The test is run on soil material passing the No. 10 sieve and is believed to be inappropriate for coarse aggregate. Therefore, researchers have proposed new methods to measure the electrical resistivity of coarse aggregate samples in the laboratory. There is a need to verify that the proposed methods yield results representative of in situ conditions; however, no in situ measurement of the electrical resistivity of MSE wall backfill is established. Electrical resistivity tomography (ERT) provides a two-dimensional (2D) profile of the bulk resistivity of backfill material in situ. The objective of this study was to characterize the bulk resistivity of in-place MSE wall backfill aggregate using ERT. Five MSE walls were tested via ERT to determine the bulk resistivity of the backfill. Three of the walls were reinforced with polymeric geogrid, one wall was reinforced with metallic strips, and one wall was a gravity retaining wall with no reinforcement. Variability of the measured resistivity distribution within the backfill may be a result of non-uniform particle sizes, thoroughness of compaction, and the presence of water. A quantitative post-processing algorithm was developed to calculate the mean bulk resistivity of in situ backfill. The study recommends that the ERT data be used to verify proposed testing methods for coarse aggregate that are designed to yield data representative of in situ conditions. A preliminary analysis suggests that ERT may be utilized for construction quality assurance of compaction thoroughness in MSE construction; however, more data are needed at this time.
NASA Astrophysics Data System (ADS)
Kern, H.; Ivankina, T. I.; Nikitin, A. N.; Lokajíček, T.; Pros, Z.
2008-10-01
Elastic anisotropy is an important property of crustal and mantle rocks. This study investigates the contribution of oriented microcracks and crystallographic (LPO) and shape preferred orientation (SPO) to the bulk elastic anisotropy of a strongly foliated biotite gneiss, using different methodologies. The rock is felsic in composition (about 70 vol.% SiO2) and made up of about 40 vol.% quartz, 37 vol.% plagioclase and 23 vol.% biotite. Measurements of compressional (Vp) and shear wave (Vs) velocities on a sample cube in the three foliation-related structural directions (up to 600 MPa) and of the 3D P-wave velocity distribution on a sample sphere (up to 200 MPa) revealed a strong pressure sensitivity of Vp, Vs and P-wave anisotropy in the low pressure range. A major contribution to bulk anisotropy is from biotite. Importantly, intercrystalline and intracrystalline cracks are closely linked to the morphologic sheet plane (001) of the biotite minerals, leading to very high anisotropy at low pressure. Above about 150 MPa the effect of cracks is almost eliminated, due to progressive closure of microcracks. The residual (pressure-independent) part of the velocity anisotropy is mainly caused by the strong alignment of the platy biotite minerals, displaying a strong SPO and LPO. Calculation of the 3D velocity distribution based on neutron diffraction texture measurements of biotite, quartz, and plagioclase and their single-crystal properties gives evidence for an important contribution of the biotite LPO to the intrinsic velocity anisotropy, confirming the experimental findings that maximum and minimum velocities and shear wave splitting are closely related to foliation. Comparison of the LPO-based calculated anisotropy (about 8%) with the measured intrinsic anisotropy (about 15% at 600 MPa) gives hints of a major contribution of SPO to the bulk anisotropy of the rock.
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
Theory of Financial Risk and Derivative Pricing
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2009-01-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
Theory of Financial Risk and Derivative Pricing - 2nd Edition
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2003-12-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented which favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.
MRI brain tumor segmentation based on improved fuzzy c-means method
NASA Astrophysics Data System (ADS)
Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo
2009-10-01
This paper focuses on image segmentation, which is one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. Firstly, we classify the image into the region of interest and background using the fuzzy c-means algorithm. Then we use the information of the tissues' gradient and the intensity inhomogeneities of regions to improve the quality of segmentation. The sum of the mean variance in the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function. The minimum of this sum is the optimum result. The results show that the clustering segmentation algorithm is effective.
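A minimal sketch of the underlying fuzzy c-means step on pixel intensities (the paper's gradient and spatial-information terms are omitted, and the data are synthetic):

```python
# Minimal fuzzy c-means on 1-D pixel intensities: alternate membership
# and center updates until (approximate) convergence.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                  # fuzzy memberships
    for _ in range(iters):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)                   # cluster centers
        d = np.abs(x[None, :] - v[:, None]) + 1e-12     # distances
        u = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=0)
    return u, v

pixels = np.concatenate([np.random.default_rng(1).normal(50, 5, 500),
                         np.random.default_rng(2).normal(150, 10, 500)])
u, v = fcm(pixels)
print("cluster centers:", np.round(v, 1))   # ~[50, 150] for this data
```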
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually upsets nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
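A minimal sketch of a Hammerstein-Wiener structure, i.e. a static input nonlinearity feeding a linear dynamic block followed by a static output nonlinearity; the specific functions and coefficients are illustrative assumptions, not the paper's engine model or its control law:

```python
# Sketch of a Hammerstein-Wiener model: static nonlinearity -> linear
# first-order dynamics -> static output nonlinearity (all assumed forms).
import numpy as np

def hammerstein_wiener(u, a=0.8, b=0.2):
    f = np.tanh(u)                        # input nonlinearity (assumed)
    x, y = 0.0, np.empty(u.size)
    for k in range(u.size):
        x = a * x + b * f[k]              # linear dynamic block
        y[k] = x + 0.1 * x**3             # output nonlinearity (assumed)
    return y

u = np.ones(50)                           # step in a fuel-flow-like input
print("steady-state response ~", round(hammerstein_wiener(u)[-1], 3))
```

The appeal of this structure for control design is that the nonlinearities are static and separated from the dynamics, which is what lets a generalized-minimum-variance-type law avoid inverting the full nonlinear model.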
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan based on model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
Stochastic investigation of wind process for climatic variability identification
NASA Astrophysics Data System (ADS)
Deligiannis, Ilias; Tyrogiannis, Vassilis; Daskalou, Olympia; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris
2016-04-01
The wind process is considered one of the hydrometeorological processes that generates and drives the climate dynamics. We use a dataset comprising hourly wind records to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
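The climacogram named above is straightforward to estimate: aggregate the record at increasing scales and track the variance of the aggregated means. A minimal sketch on synthetic hourly data (the gamma marginal is an assumption, not the study's fit):

```python
# Sketch of a climacogram estimate: variance of the time-averaged process
# versus averaging scale k, from an hourly record.
import numpy as np

def climacogram(x, scales):
    out = []
    for k in scales:
        m = x[: x.size // k * k].reshape(-1, k).mean(axis=1)  # k-hour means
        out.append(m.var(ddof=1))
    return np.array(out)

rng = np.random.default_rng(5)
wind = rng.gamma(shape=2.0, scale=3.0, size=8760)   # synthetic hourly speeds
scales = np.array([1, 2, 4, 8, 16, 32, 64])
print(np.round(climacogram(wind, scales), 3))       # decay rate reveals persistence
```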
Stochastic investigation of precipitation process for climatic variability identification
NASA Astrophysics Data System (ADS)
Sotiriadou, Alexia; Petsiou, Amalia; Feloni, Elisavet; Kastis, Paris; Iliopoulou, Theano; Markonis, Yannis; Tyralis, Hristos; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris
2016-04-01
The precipitation process is important not only to hydrometeorology but also to renewable energy resources management. We use a dataset consisting of daily and hourly records around the globe to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, J. J., III; Prinzel, Lance J., III; Norman, Robert M.
2011-01-01
This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variances across performance and pilot behavior were reviewed for acceptability when using the HUD or HDD with SVS under reduced minimums to acquire the necessary visual components to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation of the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition to guide the algorithm to a local minimum in normalized chi^2, rather than in a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
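A hedged sketch of the uncertainty-weighted NMF idea, using standard multiplicative updates for a weighted Euclidean objective (the update form, dimensions, and data below are illustrative assumptions, not the paper's exact algorithm):

```python
# Sketch: uncertainty-weighted NMF minimizing the normalized chi^2
#   sum_ij (V - W H)_ij^2 / S_ij^2,  S = per-entry mRNA uncertainties,
# via multiplicative updates for the weighted Euclidean objective.
import numpy as np

rng = np.random.default_rng(8)
V = rng.random((30, 20))                 # expression matrix (genes x conditions)
S = 0.1 + 0.1 * rng.random(V.shape)      # per-entry uncertainties (assumed)
U = 1.0 / S**2                           # weights
k = 3
W, H = rng.random((30, k)), rng.random((k, 20))

for _ in range(500):
    WH = W @ H
    H *= (W.T @ (U * V)) / (W.T @ (U * WH) + 1e-12)
    WH = W @ H
    W *= ((U * V) @ H.T) / ((U * WH) @ H.T + 1e-12)

chi2 = np.sum(U * (V - W @ H) ** 2)
print("normalized chi^2 per entry:", round(chi2 / V.size, 4))
```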
NASA Astrophysics Data System (ADS)
Rahman, Mohamed Abd; Yeakub Ali, Mohammad; Saddam Khairuddin, Amir
2017-03-01
This paper presents a study on the vibration and surface roughness of an Inconel 718 workpiece produced by micro end-milling using a Mikrotools Integrated Multi-Process machine tool DT-110 with the control parameters spindle speed (15000 rpm and 30000 rpm), feed rate (2 mm/min and 4 mm/min) and depth of cut (0.10 mm and 0.15 mm). The vibration was measured using a DYTRAN accelerometer and the average surface roughness Ra was measured using a Wyko NT1100. The analysis of variance (ANOVA) using Design Expert software revealed that feed rate and depth of cut are the most significant factors for vibration, while for average surface roughness Ra, spindle speed is the most significant factor.
Estimation of the simple correlation coefficient.
Shieh, Gwowen
2010-11-01
This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues in estimating the squared simple correlation coefficient are also considered.
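One well-known correction in this literature is the approximate Olkin-Pratt estimator r(1 + (1 - r^2)/(2(n - 3))); whether the article examines this exact formula is an assumption, but a minimal simulation sketch of the kind of MSE comparison described is:

```python
# Sketch: comparing the sample correlation r against the approximate
# Olkin-Pratt nearly unbiased estimator by simulated mean squared error.
import numpy as np

rng = np.random.default_rng(9)
rho, n, reps = 0.5, 20, 20000
cov = np.array([[1.0, rho], [rho, 1.0]])

r_vals = np.empty(reps)
for i in range(reps):
    xy = rng.multivariate_normal([0, 0], cov, size=n)
    r_vals[i] = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]

op_vals = r_vals * (1 + (1 - r_vals**2) / (2 * (n - 3)))
print("MSE(r)          :", round(np.mean((r_vals - rho) ** 2), 5))
print("MSE(Olkin-Pratt):", round(np.mean((op_vals - rho) ** 2), 5))
```

The correction reduces bias, but as the article argues, bias reduction does not guarantee a smaller mean squared error in every configuration of rho and n.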
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Goldstein, Melvyn L.; Roberts, D. Aaron
1990-01-01
Assuming that the slab and isotropic models of solar wind turbulence need modification (largely due to the observed anisotropy of the interplanetary fluctuations and the results of laboratory plasma experiments), this paper proposes a model of the solar wind. The solar wind is seen as a fluid which contains both classical transverse Alfvenic fluctuations and a population of quasi-transverse fluctuations. In quasi-two-dimensional turbulence, the pitch angle scattering by resonant wave-particle interactions is suppressed, and the direction of minimum variance of interplanetary fluctuations is parallel to the mean magnetic field. The assumed incompressibility is consistent with the fact that the density fluctuations are small and anticorrelated, and that the total pressure at small scales is nearly constant.
Assumption-free estimation of the genetic contribution to refractive error across childhood.
Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy
2015-01-01
Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.
NASA Astrophysics Data System (ADS)
Ream, J. B.; Walker, R. J.; Ashour-Abdalla, M.; El-Alaoui, M.
2011-12-01
We performed a global MHD simulation of a substorm event on 14 September 2004 in order to investigate the link between Pi2 generation and dipolarization fronts. Pi2 pulsations (T = 40-150 s) measured by ground-based instruments are typically used as an indicator of substorm onset; therefore, understanding how and where they are generated is vital to understanding the series of events leading up to onset. Kepko et al. [1999] suggested that the compression regions and velocity variations associated with earthward propagating dipolarization fronts directly drive Pi2 pulsations. Similarly, Panov et al. [2011] suggested that Pi2 pulsations are generated by the overshoot and rebound of bursty bulk flows. Dipolarization fronts are step-wise enhancements in Bz which are associated with fast (>100 km/s) earthward flows and are followed by tailward expansion due to pile-up at the high pressure region where the magnetic field lines transition from a stretched to a dipolar configuration. Cao et al. [2009] have presented observations from Double Star (TC1), Cluster 4 and Polar of a substorm with onset at 18:22 UT. During this event a dipolarization front was observed by Double Star at ~18:25 UT, and dipolarization-associated expansion was observed by Cluster 4 at ~18:50 and Polar at ~18:55 UT. The spacecraft were positioned at (-10.2, -1.6, 1.2), (-16.4, 1.6, 2.2) and (-7.5, -1.8, -4.9) RE in GSM coordinates, respectively. The simulation was carried out with the UCLA global MHD code [El-Alaoui (2001), Raeder (1998)], using Geotail, located near the bow shock at ~24 RE, as the solar wind monitor. The solar wind magnetic field data were rotated into a minimum variance frame to be used as input for the simulation. The results from the simulation have been compared to observations and reproduce well the structures observed by all three satellites. Around the time of onset, we have identified a dipolarization front near midnight which originates at ~12 RE. We show that as the dipolarization front begins to travel earthward, Pi2 fluctuations are generated in the pressure and velocity components which propagate along the plasma sheet into the inner magnetosphere. Inside ~-7 RE the frequency seen in the velocity perturbations is matched by perturbations in pressure and magnetic field components. References: Ashour-Abdalla, M., et al. (2011), Observations and simulations of non-local acceleration of electrons in magnetotail magnetic reconnection events, Nature Physics, vol. 7. Cao, X., et al. (2008), Multispacecraft and ground-based observations of substorm timing and activations: Two case studies, J. Geophys. Res., 113, A07S25. El-Alaoui, M. (2001), Current disruption during November 24, 1996 substorm, J. Geophys. Res., 106, 6229-6245. Kepko, L., and M. Kivelson (1999), Generation of Pi2 pulsations by bursty bulk flows, J. Geophys. Res., 104(A11), 25,021-25,034. Panov, E. V., et al. (2010), Multiple overshoot and rebound of a bursty bulk flow, Geophys. Res. Lett., 37, L08103. Raeder, J., et al. (1998), The Geospace Environment Modeling Grand Challenge: Results from a global geospace circulation model, J. Geophys. Res., 103, 14,787.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaoyu, E-mail: xiaoyu.yang@wdc.com; Chen, Lifan; Han, Hongmei
The impact of the fluorine-based reactive ion etch (RIE) process on the structural, electrical, and magnetic properties of NiFe- and CoNiFe-plated materials was investigated. Several techniques, including X-ray fluorescence, 4-point probe, BH looper, transmission electron microscopy (TEM), and electron energy loss spectroscopy (EELS), were utilized to characterize both bulk film properties, such as thickness, average composition, Rs, ρ, Bs, and Ms, and the properties of surface magnetic "dead" layers, such as thickness and element concentration. Experimental data showed that the majority of the Rs and Bs changes of these bulk films were due to thickness reduction during exposure to the RIE process. Changes in ρ and Ms were negligible once thickness reduction was taken into account. The composition of the bulk films, which is not sensitive to nanometer-scale surface magnetic dead layers, showed minimal change as well. TEM and EELS analysis found that although magnetic dead layers were present on the top surface of these materials both before and after RIE, their thickness and element concentration were quite different. Prior to RIE, the dead layer was a native oxide layer (about 2 nm thick), while after RIE the dead layer consisted of two sub-layers totaling about 6 nm: the top sub-layer was a native oxide layer, while the bottom sub-layer was an RIE "damaged" layer with very high fluorine concentration. Two in-situ RIE approaches were also proposed and tested to remove such damaged sub-layers.
Surface conduction of topological Dirac electrons in bulk insulating Bi2Se3
NASA Astrophysics Data System (ADS)
Fuhrer, Michael
2013-03-01
The three-dimensional strong topological insulator (STI) is a new phase of electronic matter, distinct from ordinary insulators in that it supports on its surface a conducting two-dimensional surface state whose existence is guaranteed by topology. I will discuss experiments on the STI material Bi2Se3, which has a bulk bandgap of 300 meV, much greater than room temperature, and a single topological surface state with a massless Dirac dispersion. Field effect transistors consisting of thin (3-20 nm) Bi2Se3 flakes mechanically exfoliated from single crystals are fabricated, and electrochemical and/or chemical gating methods are used to move the Fermi energy into the bulk bandgap, revealing the ambipolar gapless nature of transport in the Bi2Se3 surface states. The minimum conductivity of the topological surface state is understood within the self-consistent theory of Dirac electrons in the presence of charged impurities. The intrinsic finite-temperature resistivity of the topological surface state due to electron-acoustic phonon scattering is measured to be ~60 times larger than that of graphene, largely due to the smaller Fermi and sound velocities in Bi2Se3, which will have implications for topological electronic devices operating at room temperature. As samples are made thinner, coherent coupling of the top and bottom topological surfaces is observed through the magnitude of the weak anti-localization correction to the conductivity, and, in the thinnest Bi2Se3 samples (~3 nm), in thermally activated conductivity reflecting the opening of a bandgap.
Spray-dried chitosan as a direct compression tableting excipient.
Chinta, Dakshinamurthy Devanga; Graves, Richard A; Pamujula, Sarala; Praetorius, Natalie; Bostanian, Levon A; Mandal, Tarun K
2009-01-01
The objective of this study was to prepare and evaluate a novel spray-dried tableting excipient using a mixture of chitosan and lactose. Three different grades of chitosan (low-, medium-, and high-molecular-weight) were used for this study. Propranolol hydrochloride was used as a model drug. A specific amount of chitosan (1, 1.9, and 2.5 g, respectively) was dissolved in 50 mL of an aqueous solution of citric acid (1%) and later mixed with 50 mL of an aqueous solution containing lactose (20, 19.1, and 18.5 g, respectively) and propranolol (2.2 g). The resultant solution was sprayed through a laboratory spray dryer at 1.4 mL/min. The granules were evaluated for bulk density, tap density, Carr index, particle size distribution, surface morphology, thermal properties, and tableting properties. Bulk density of the granules decreased from 0.16 to 0.13 g/mL when the granules were prepared using medium- or high-molecular-weight chitosan compared with the low-molecular-weight chitosan. The relative proportion of chitosan also showed a significant effect on the bulk density. The granules prepared with 1 g of low-molecular-weight chitosan showed the minimum Carr index (11.1%), indicating the best flow properties among all five formulations. All three granule formulations prepared with 1 g chitosan, irrespective of molecular weight, showed excellent flow properties. Floating tablets prepared by direct compression of these granules with sodium bicarbonate showed 50% drug release between 30 and 35 min. In conclusion, the spray-dried granules prepared with chitosan and lactose showed excellent flow properties and were suitable for tableting.
NASA Astrophysics Data System (ADS)
Hilpert, Markus; Rasmuson, Anna; Johnson, William P.
2017-07-01
Colloid transport in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Near-surface fluid domain colloids experience relatively low fluid drag and relatively strong colloidal forces that slow their downgradient translation relative to colloids in bulk fluid. Near-surface fluid domain colloids may reenter into the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via primary minimum interactions, or they may move along a grain-to-grain contact to the near-surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization. Colloid movement is described by a Markov chain, i.e., a sequence of trials in a 1-D network of unit cells, which contain a pore and a grain. Using combinatorial analysis, which utilizes the binomial coefficient, we derive the residence time distribution, i.e., an inventory of the discrete colloid travel times through the network and of their probabilities to occur. To parameterize the network model, we performed mechanistic pore-scale simulations in a single unit cell that determined the likelihoods and timescales associated with the above colloid mass transfer processes. We found that intergrain transport of colloids in the near-surface fluid domain can cause extended tailing, which has traditionally been attributed to hydrodynamic dispersion emanating from flow tortuosity of solute trajectories.
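The residence time distribution described above follows from binomial combinatorics. As a toy illustration, assume a colloid crossing n unit cells enters the slow near-surface domain in each cell independently with probability p_ns, spending t_ns there instead of the bulk transit time t_bulk; all parameter values below are hypothetical, not the paper's calibrated ones.

```python
from math import comb

def residence_time_distribution(n_cells, p_ns, t_bulk, t_ns):
    """Travel-time inventory for a 1-D network of unit cells.

    In each cell the colloid is either advected in bulk fluid (time t_bulk)
    or retained in the near-surface domain (time t_ns), the latter with
    probability p_ns. Returns (travel_time, probability) pairs built from
    the binomial coefficient.
    """
    dist = []
    for k in range(n_cells + 1):          # k = cells with a near-surface excursion
        t = (n_cells - k) * t_bulk + k * t_ns
        prob = comb(n_cells, k) * p_ns**k * (1 - p_ns)**(n_cells - k)
        dist.append((t, prob))
    return dist

for t, prob in residence_time_distribution(n_cells=10, p_ns=0.2, t_bulk=1.0, t_ns=8.0):
    print(f"T = {t:5.1f}  P = {prob:.4f}")
```

The long, low-probability tail at large k is the combinatorial analogue of the extended tailing attributed here to intergrain near-surface transport.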
Adding Spatially Correlated Noise to a Median Ionosphere
NASA Astrophysics Data System (ADS)
Holmes, J. M.; Egert, A. R.; Dao, E. V.; Colman, J. J.; Parris, R. T.
2017-12-01
We describe a process for adding spatially correlated noise to a background ionospheric model, in this case the International Reference Ionosphere (IRI). Monthly median models describe the bulk features of the ionosphere well in a median sense; it is well known, however, that the ionosphere almost never actually looks like its median. For the purposes of constructing an Operational System Simulation Experiment, it may be desirable to construct an ionosphere more similar to a particular instant, hour, or day than to the monthly median. We will examine selected data from the Global Ionosphere Radio Observatory (GIRO) database and estimate the amount of variance captured by the IRI model. We will then examine spatial and temporal correlations within the residuals. This analysis will be used to construct a temporally and spatially gridded ionosphere that represents a particular instantiation of those statistics.
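One common way to realize spatially correlated noise of this kind is to draw from a multivariate Gaussian with an exponential covariance and add it to the median model. A minimal sketch on a one-dimensional latitude grid, with a stand-in for the IRI output; the correlation length and noise level are hypothetical, not values fit to GIRO residuals.

```python
import numpy as np

def correlated_noise(points, sigma, corr_length, rng):
    """Zero-mean Gaussian noise with exponential spatial correlation.

    points : (N, d) coordinates; sigma : noise standard deviation;
    corr_length : e-folding correlation length (same units as points).
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    cov = sigma**2 * np.exp(-d / corr_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(points)))  # jitter for stability
    return L @ rng.standard_normal(len(points))

# Perturb a hypothetical median foF2 profile along latitude
rng = np.random.default_rng(2)
lats = np.linspace(-60.0, 60.0, 49)[:, None]
foF2_median = 8.0 + 2.0 * np.cos(np.radians(lats[:, 0]))   # stand-in for IRI output
foF2_sample = foF2_median + correlated_noise(lats, sigma=0.5, corr_length=15.0, rng=rng)
```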
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma-0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma-0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples in a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
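A sketch of the stepwise (greedy) expansion step: starting from one grid cell, repeatedly add the neighboring cell whose inclusion minimizes the pooled variance until the minimum sample count is met. The data layout (a dict of per-cell sigma-0 samples and a neighbor function) is an assumption for illustration, not the operational LUT format.

```python
import numpy as np

def expand_region(samples, start, neighbors, n_min):
    """Grow a region greedily until it holds at least n_min samples.

    samples   : dict mapping cell id -> 1-D array of sigma-0 measurements
    neighbors : function mapping a cell id to its adjacent cell ids
    At each step, the frontier cell whose inclusion gives the smallest
    pooled variance is added, mirroring the stepwise LUT construction.
    """
    region, pooled = {start}, samples[start]
    while pooled.size < n_min:
        frontier = {c for cell in region for c in neighbors(cell)} - region
        if not frontier:
            break                                    # nothing left to add
        best = min(frontier,
                   key=lambda c: np.var(np.concatenate([pooled, samples[c]])))
        region.add(best)
        pooled = np.concatenate([pooled, samples[best]])
    return region, pooled

# Toy usage on a 1-D strip of 10 cells with sparse samples per cell
rng = np.random.default_rng(3)
samples = {i: rng.normal(0.0, 1.0, size=rng.integers(2, 8)) for i in range(10)}
region, pooled = expand_region(
    samples, start=5,
    neighbors=lambda c: [i for i in (c - 1, c + 1) if 0 <= i < 10],
    n_min=25)
print(sorted(region), pooled.mean(), pooled.std())
```

The paper's second, exhaustive grid corresponds to replacing the greedy choice with a search over all spatial configurations, which is what makes the comparison of the two informative.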
2011-01-01
Background: Distal radius fracture is a common injury and may result in substantial dysfunction and pain. The purpose of this study was to investigate the relationship between distal radius fracture malunion and arm-related disability. Methods: The prospective population-based cohort study included 143 consecutive patients over 18 years of age with an acute distal radius fracture treated with closed reduction and either a cast (55 patients) or external and/or percutaneous pin fixation (88 patients). The patients were evaluated with the disabilities of the arm, shoulder and hand (DASH) questionnaire at baseline (concerning disabilities before fracture) and one year after fracture. The 1-year follow-up included the SF-12 health status questionnaire and clinical and radiographic examinations. Patients were classified into three hypothesized severity categories based on fracture malunion: no malunion; malunion involving either dorsal tilt (>10 degrees) or ulnar variance (≥1 mm); and combined malunion involving both dorsal tilt and ulnar variance. Multivariate regression analyses were performed to determine the relationship between the 1-year DASH score and malunion, and the relative risk (RR) of a DASH score ≥15 and the number needed to harm (NNH) were calculated. Results: The mean DASH score at one year after fracture was significantly higher by a minimum of 10 points with each increase in malunion severity category. The RR for persistent disability was 2.5 if the fracture healed with malunion involving either dorsal tilt or ulnar variance and 3.7 if the fracture healed with combined malunion. The NNH was 2.5 (95% CI 1.8-5.4). Malunion had a statistically significant relationship with worse SF-12 (physical health) score and grip strength. Conclusion: Malunion after distal radius fracture was associated with higher arm-related disability regardless of age. PMID:21232088
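For reference, the RR and NNH arithmetic is simple; the sketch below computes both from a 2x2 table. The counts are hypothetical, chosen only to give values of the same order as those reported, not the study's data.

```python
def risk_metrics(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Relative risk and number needed to harm from a 2x2 table."""
    r1 = events_exposed / n_exposed        # risk of DASH >= 15 with malunion
    r0 = events_unexposed / n_unexposed    # risk without malunion
    rr = r1 / r0
    nnh = 1.0 / (r1 - r0)                  # reciprocal of the absolute risk increase
    return rr, nnh

# Hypothetical counts for illustration only
rr, nnh = risk_metrics(30, 50, 15, 93)
print(f"RR = {rr:.1f}, NNH = {nnh:.1f}")
```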
Climate Drivers of Blue Intensity from Two Eastern North American Conifers
NASA Astrophysics Data System (ADS)
Rayback, S. A.; Kilbride, J.; Pontius, J.; Tait, E.; Little, J.
2016-12-01
Gaining a comprehensive understanding of the climatic factors that drive tree radial growth over time is important in the context of global climate change. Herein, we explore minimum blue intensity (BI), a measure of lignin content in the latewood of tree rings, with the objective of developing BI chronologies for two eastern North American conifers to identify and explore climatic drivers and to compare BI-climate relationships to those of tree-ring widths (TRW). Using dendrochronological techniques, Tsuga canadensis and Picea rubens TRW and BI chronologies were developed at Abbey Pond (ABP) and The Cape National Research Area (CAPE), Vermont, USA, respectively. Climate drivers (1901-2010) were investigated using correlation and response function analyses and generalized linear mixed models. The ABP T. canadensis BI model explained the highest amount of variance (R2 = 0.350, adj. R2 = 0.324), with September Tmin and June total percent cloudiness as predictors. The ABP T. canadensis TRW model explained 34% of the variance (R2 = 0.340, adj. R2 = 0.328), with summer total precipitation and June PDSI as predictors. The CAPE P. rubens TRW and BI models explained 31% of the variance (R2 = 0.33, adj. R2 = 0.310), based on prior-year July Tmax, prior-year August Tmean, and fall Tmin as predictors, and 7% (R2 = 0.068, adj. R2 = 0.060), based on spring Tmin as the predictor, respectively. Moving-window analyses confirm the moisture sensitivity of T. canadensis TRW, now also evident in BI, and suggest an extension of the growing season. Similarly, P. rubens TRW responded consistently negatively to high growing-season temperatures, but both TRW and BI benefited from a longer growing season. This study introduces two new BI chronologies, the first from northeastern North America, and highlights shifts underway in tree response to changing climate.
Isotope scattering and phonon thermal conductivity in light atom compounds: LiH and LiF
Lindsay, Lucas R.
2016-11-08
Engineered isotope variation is a pathway toward modulating the lattice thermal conductivity (κ) of a material through changes in phonon-isotope scattering. The effects of isotope variation on intrinsic thermal resistance are little explored, as varying isotopes typically have relatively small differences in mass and thus do not affect bulk phonon dispersions. However, for light elements the isotope mass variation can be relatively large (e.g., hydrogen and deuterium). Using a first-principles Peierls-Boltzmann transport equation approach, the effects of isotope variance on lattice thermal transport in the ultra-low-mass compound materials LiH and LiF are characterized. The isotope mass variance modifies the intrinsic thermal resistance via modulation of acoustic and optic phonon frequencies, while phonon-isotope scattering from mass disorder plays only a minor role. This leads to some unusual cases where the κ values of isotopically pure systems (6LiH, 7Li2H, and 6LiF) are lower than the values from their counterparts with naturally occurring isotopes and phonon-isotope scattering. However, these differences are relatively small. The effects of temperature-driven lattice expansion on phonon dispersions and calculated κ are also discussed. This work provides insight into lattice thermal conductivity modulation with mass variation and the interplay of intrinsic phonon-phonon and phonon-isotope scattering in interesting light-atom systems.
Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D
2017-12-01
In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity-based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was validated by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was applied to remove bulk motion noise, in which a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals from flowing particles were improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels was increased by 21%.
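A minimal sketch of one plausible reading of the DSDLI contrast, assuming a stack of repeated B-scans at the same position; the array shapes, depth range, and noise floor eps are assumptions, not the paper's acquisition parameters.

```python
import numpy as np

def dsdli_en_face(frames, z0, z1, eps=1e-9):
    """DSDLI flow contrast from repeated B-scans (one plausible reading).

    frames : (T, Z, X) stack of linear-intensity B-scans at one position.
    Differences of consecutive log-scale structural images highlight moving
    scatterers; their standard deviation over the depth range [z0, z1)
    yields an en face microvascular projection along X.
    """
    log_i = np.log(np.maximum(frames, eps))   # log-scale structural images
    diff = np.diff(log_i, axis=0)             # consecutive-frame differences
    flow = diff.std(axis=0)                   # per-voxel flow contrast, (Z, X)
    return flow[z0:z1].std(axis=0)

# Synthetic stack: static background plus one decorrelating "vessel" row
rng = np.random.default_rng(4)
frames = 100.0 + rng.normal(0.0, 1.0, size=(8, 64, 64))
frames[:, 32, :] += rng.normal(0.0, 30.0, size=(8, 64))
profile = dsdli_en_face(frames, z0=20, z1=45)   # peaks where flow decorrelates
```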
How many days of accelerometer monitoring predict weekly physical activity behaviour in obese youth?
Vanhelst, Jérémy; Fardy, Paul S; Duhamel, Alain; Béghin, Laurent
2014-09-01
The aim of this study was to determine the type and number of accelerometer monitoring days needed to predict weekly sedentary behaviour and physical activity in obese youth. Fifty-three obese youth wore a triaxial accelerometer for 7 days to measure physical activity in free-living conditions. Analyses of variance for repeated measures, intraclass correlation coefficients (ICC), and linear regression analyses were used. Obese youth spent significantly less time in physical activity on weekends or free days compared with school days. ICC analyses indicated that a minimum of 2 days is needed to estimate physical activity behaviour. The ICC was 0.80 between weekly physical activity and weekdays and 0.92 between weekly physical activity and weekend days. The model has to include a weekday and a weekend day. Using any combination of one weekday and one weekend day, the percentage of variance explained is >90%. Results indicate that 2 days of monitoring are needed to estimate weekly physical activity behaviour in obese youth with an accelerometer. Our results also showed the importance of taking into consideration school day versus free day and weekday versus weekend day in assessing physical activity in obese youth.
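The day-to-day reliability question here is an intraclass correlation computation. A sketch of the one-way random-effects ICC from a subjects-by-days matrix, on simulated activity data (the means and variances below are hypothetical):

```python
import numpy as np

def icc_oneway(x):
    """ICC(1) from an (n_subjects, k_days) matrix of daily activity."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Simulated daily activity minutes: subject effect plus day-to-day noise
rng = np.random.default_rng(5)
subject = rng.normal(60.0, 15.0, size=(53, 1))
days = subject + rng.normal(0.0, 10.0, size=(53, 7))
print(f"ICC across 7 days: {icc_oneway(days):.2f}")
```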
MANUSCRIPT IN PRESS: DEMENTIA & GERIATRIC COGNITIVE DISORDERS
O’Bryant, Sid E.; Xiao, Guanghua; Barber, Robert; Cullum, C. Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon
2015-01-01
Background Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. Methods A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium. Results The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Conclusions Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.” PMID:24107792
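The two-stage design can be sketched with standard scikit-learn pieces: a random forest fit on a training split produces a per-test biomarker risk score, which then enters a linear model together with demographics. The simulated data, single covariate, and 50/50 split are illustrative assumptions, not the consortium's protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(395, 108))                  # serum multiplex proteins
age = rng.normal(70.0, 8.0, size=395)            # demographic covariate
y = X[:, :5].sum(axis=1) + 0.05 * age + rng.normal(0.0, 1.0, size=395)

X_tr, X_te, age_tr, age_te, y_tr, y_te = train_test_split(
    X, age, y, test_size=0.5, random_state=0)

# Stage 1: random forest on the training set; its prediction on the test
# set serves as the test-specific biomarker risk score
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
risk_te = rf.predict(X_te)

# Stage 2: biomarker risk score + demographics in a linear model
Z = np.column_stack([risk_te, age_te])
lm = LinearRegression().fit(Z, y_te)
print(f"variance explained (R^2): {lm.score(Z, y_te):.2f}")
```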
CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan
Minimum-variance estimators for the parameter f_nl that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_nl from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound, which gives a lower limit to the variance of an estimator, by the bispectrum estimator implies that no further information on f_nl can be obtained from the trispectrum. Here, we elaborate the nature of the correlation between the bispectrum and trispectrum estimators for f_nl. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_nl beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
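The practical consequence of the estimators' asymptotic independence is that they can be pooled by ordinary inverse-variance weighting, which is the minimum-variance combination of independent unbiased estimators. A sketch with hypothetical numbers, not values from this work:

```python
import numpy as np

def combine_independent(estimates, variances):
    """Minimum-variance combination of independent unbiased estimators."""
    w = 1.0 / np.asarray(variances)
    combined = (w * np.asarray(estimates)).sum() / w.sum()
    return combined, 1.0 / w.sum()       # combined estimate and its variance

# Hypothetical bispectrum and trispectrum f_nl estimates with their errors
fnl, var = combine_independent([40.0, 10.0], [30.0**2, 75.0**2])
print(f"combined f_nl = {fnl:.1f} +/- {var**0.5:.1f}")
```

The combined variance 1/Σ(1/σ_i²) is strictly smaller than either input variance, which is the sense in which the trispectrum adds information.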
Data mining on long-term barometric data within the ARISE2 project
NASA Astrophysics Data System (ADS)
Hupe, Patrick; Ceranna, Lars; Pilger, Christoph
2016-04-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) led to the implementation of an international infrasound array network. The International Monitoring System (IMS) network includes 48 certified stations, each providing data for up to 15 years. As part of work package 3 of the ARISE2 project (Atmospheric dynamics Research InfraStructure in Europe, phase 2), the data sets will be statistically evaluated with regard to atmospheric dynamics. The current study focuses on fluctuations of absolute air pressure. Time series have been analysed for 17 monitoring stations located around the world, from Greenland to Antarctica, spanning latitudes that represent different climate zones and characteristic atmospheric conditions; this enables quantitative comparisons between those regions. Analyses include wavelet power spectra, multi-annual time series of average variances at long-wave scales, and spectral densities used to derive characteristic features and special events. The evaluations reveal periodicities in average variances on the 2 to 20 day scale, with a maximum in the winter months and a minimum in summer of the respective hemisphere. This applies primarily to time series of IMS stations outside the tropics, where the dominance of cyclones and anticyclones changes with the seasons. Furthermore, spectral density analyses illustrate striking signals for several dynamic activities within one day, e.g., the semidiurnal tide.
Noh, Wonjung; Seomun, Gyeongae
2015-06-01
This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.
Examining the Prey Mass of Terrestrial and Aquatic Carnivorous Mammals: Minimum, Maximum and Range
Tucker, Marlee A.; Rogers, Tracey L.
2014-01-01
Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems. PMID:25162695
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Wagenaar, Alexander C; Maldonado-Molina, Mildred M; Erickson, Darin J; Ma, Linan; Tobler, Amy L; Komro, Kelli A
2007-09-01
We examined effects of state statutory changes in DUI fine or jail penalties for first-time offenders from 1976 to 2002. A quasi-experimental time-series design was used (n = 324 monthly observations). The four outcome measures of drivers involved in alcohol-related fatal crashes are: single-vehicle nighttime, low BAC (0.01-0.07 g/dl), medium BAC (0.08-0.14 g/dl), and high BAC (≥0.15 g/dl). All analyses of BAC outcomes included multiple imputation procedures for cases with missing data. Comparison series of non-alcohol-related crashes were included to efficiently control for effects of other factors. Statistical models include state-specific Box-Jenkins ARIMA models and pooled general linear mixed models. Twenty-six states implemented mandatory minimum fine policies and 18 states implemented mandatory minimum jail penalties. Estimated effects varied widely from state to state. Using variance-weighted meta-analysis methods to aggregate results across states, mandatory fine policies are associated with an average reduction in fatal crash involvement by drivers with BAC ≥0.08 g/dl of 8% (averaging 13 per state per year). Mandatory minimum jail policies are associated with a decline in single-vehicle nighttime fatal crash involvement of 6% (averaging 5 per state per year) and a decline in low-BAC cases of 9% (averaging 3 per state per year). No significant effects were observed for the other outcome measures. The overall pattern of results suggests a possible effect of mandatory fine policies in some states, but little effect of mandatory jail policies.
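Variance-weighted (fixed-effect) meta-analysis pools the state estimates with weights proportional to inverse variances. A sketch with hypothetical state-level effects and standard errors, not the study's estimates:

```python
import numpy as np

def fixed_effect_meta(effects, ses):
    """Variance-weighted pooling of per-state effect estimates."""
    w = 1.0 / np.asarray(ses) ** 2
    pooled = (w * np.asarray(effects)).sum() / w.sum()
    return pooled, np.sqrt(1.0 / w.sum())   # pooled effect and its SE

# Hypothetical per-state percent changes in fatal crash involvement
effects = np.array([-12.0, -5.0, -9.0, 2.0, -14.0])
ses = np.array([4.0, 6.0, 3.0, 8.0, 5.0])
pooled, se = fixed_effect_meta(effects, ses)
print(f"pooled effect: {pooled:.1f}% (SE {se:.1f})")
```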
2011-01-01
Background Evidence is mounting regarding the clinically significant effect of temperature on blood pressure. Methods In this cross-sectional study the authors obtained minimum and maximum temperatures and their respective previous-week variances at the geographic locations of the self-reported residences of 26,018 participants from a national cohort of blacks and whites, aged 45+. Linear regression of data from 20,623 participants was used in final multivariable models to determine whether these temperature measures were associated with levels of systolic or diastolic blood pressure, and whether these relations were modified by stroke-risk region, race, education, income, sex, hypertensive medication status, or age. Results After adjustment for confounders, same-day maximum temperatures 20°F lower were significantly associated with 1.4 mmHg (95% CI: 1.0, 1.9) higher systolic and 0.5 mmHg (95% CI: 0.3, 0.8) higher diastolic blood pressures. Same-day minimum temperatures 20°F lower were significantly associated with 0.7 mmHg (95% CI: 0.3, 1.0) higher systolic blood pressures but showed no significant association with diastolic blood pressure differences. Maximum and minimum previous-week temperature variabilities showed significant but weak relationships with blood pressures. Parameter estimates showed effect modification of negligible magnitude. Conclusions This study found significant associations between outdoor temperature and blood pressure levels, which remained after adjustment for various confounders including season. This relationship showed negligible effect modification. PMID:21247466
NASA Astrophysics Data System (ADS)
De Linage, C.; Famiglietti, J. S.; Randerson, J. T.
2013-12-01
Floods and droughts frequently affect the Amazon River basin, impacting the transportation, river navigation, agriculture, economy and the carbon balance and biodiversity of several South American countries. The present study aims to find the main variables controlling the natural interannual variability of terrestrial water storage in the Amazon region and to propose a modeling framework for flood and drought forecasting. We propose three simple empirical models using a linear combination of lagged spatial averages of central Pacific (Niño 4 index) and tropical North Atlantic (TNAI index) sea surface temperatures (SST) to predict a decade-long record of monthly terrestrial water storage anomalies (TWSA) on a 3° grid observed by the Gravity Recovery And Climate Experiment (GRACE) mission. In addition to a SST forcing term, the models included a relaxation term to simulate the memory of water storage anomalies in response to external variability in forcing. Model parameters were spatially variable and individually optimized for each 3° grid cell. We also investigated the evolution of the predictive capability of our models with increasing minimum lead times for TWSA forecasts. TNAI was the primary external forcing for the central and western regions of the southern Amazon (35% of variance explained with a 3-month forecast), whereas Niño 4 was dominant in the northeastern part of the basin (61% of variance explained with a 3-month forecast). Forcing the model with a combination of the two indices improved the fit significantly (p<0.05) for at least 64% of the grid cells, compared to models forced solely with Niño 4 or TNAI. The combined model was able to explain 43% of the variance in the Amazon basin as a whole with a 3-month lead time. While 66% of the observed variance was explained in the northeastern Amazon, only 39% of the variance was captured by the combined model in the central and western regions, suggesting that other, more local, forcing sources were important in these regions. The predictive capability of the combined model degraded monotonically with increasing lead times. Degradation was smaller in the northeastern Amazon (where 49% of the variance was explained using an 8-month lead time versus 69% for a 1-month lead time) compared to the western and central regions of the southern Amazon (where 22% of the variance was explained at 8 months versus 43% at 1 month). Our model may provide early warning information about flooding in the northeastern region of the Amazon basin, where floodplain areas are extensive and the sensitivity of floods to external SST forcing was shown to be high. This work also strengthens our understanding of the mechanisms regulating interannual variability in Amazon fires, as TWSA deficits may subsequently lead to atmospheric water vapor deficits and reduced cloudiness via water-limited evapotranspiration. Finally, this work helps to bridge the gap between the current GRACE mission and the follow-on gravity mission.
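The model structure described, a linear combination of lagged SST indices plus a relaxation (memory) term, can be fit by ordinary least squares. A sketch on synthetic monthly series, with an autoregressive TWSA(t-1) term standing in for the relaxation term; the lag choice and coefficients are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def fit_twsa_model(twsa, nino4, tnai, lag):
    """Fit TWSA(t) = a*TWSA(t-1) + b*Nino4(t-lag) + c*TNAI(t-lag) + d.

    The a*TWSA(t-1) term plays the role of the relaxation (memory) term;
    b and c weight the lagged SST forcings. All inputs are 1-D monthly arrays.
    """
    t = np.arange(max(lag, 1), len(twsa))
    A = np.column_stack([twsa[t - 1], nino4[t - lag], tnai[t - lag], np.ones(t.size)])
    coef, *_ = np.linalg.lstsq(A, twsa[t], rcond=None)
    resid = twsa[t] - A @ coef
    r2 = 1.0 - resid.var() / twsa[t].var()
    return coef, r2

# Synthetic decade of monthly data (stand-ins for GRACE TWSA and SST indices)
rng = np.random.default_rng(7)
n = 120
nino4, tnai = rng.normal(size=n), rng.normal(size=n)
twsa = np.zeros(n)
for i in range(3, n):
    twsa[i] = 0.7 * twsa[i - 1] - 0.5 * nino4[i - 3] + 0.3 * tnai[i - 3] + rng.normal(0, 0.2)

coef, r2 = fit_twsa_model(twsa, nino4, tnai, lag=3)
print("a, b, c, d =", np.round(coef, 2), " R^2 =", round(r2, 2))
```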
Simulating future uncertainty to guide the selection of survey designs for long-term monitoring
Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change considered meaningful (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
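The power calculation implicit in such a sampling objective can be approximated by Monte Carlo: simulate the indicator with the minimum detectable trend plus noise, fit a trend each time, and count rejections. A sketch assuming annual surveys and a one-sided test; all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def trend_power(slope, sigma, n_years, alpha=0.05, n_sims=2000, seed=0):
    """Monte Carlo power to detect a linear trend with annual surveys.

    slope : minimum detectable trend (units/yr); sigma : residual SD of
    the indicator. Power = fraction of simulations rejecting no-trend.
    """
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    hits = 0
    for _ in range(n_sims):
        y = slope * years + rng.normal(0.0, sigma, n_years)
        res = stats.linregress(years, y)
        # one-sided test: halve the two-sided p-value, require matching sign
        if res.pvalue / 2 < alpha and res.slope * slope > 0:
            hits += 1
    return hits / n_sims

# e.g., power to detect a -2 units/yr decline over 10 years, residual SD 5
print(f"power = {trend_power(slope=-2.0, sigma=5.0, n_years=10):.2f}")
```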
Changing climate and endangered high mountain ecosystems in Colombia.
Ruiz, Daniel; Moreno, Hernán Alonso; Gutiérrez, María Elena; Zapata, Paula Andrea
2008-07-15
High mountain ecosystems are among the environments most sensitive to changes in climatic conditions occurring on global, regional and local scales. The article describes the changing conditions observed over recent years in the high mountain basin of the Claro River, on the west flank of the Colombian Andean Central mountain range. Local ground-truth data gathered at 4150 m, regional data available at nearby weather stations, and satellite information were used to analyze changes in the mean and the variance, and significant trends, in climatic time series. Records included minimum, mean and maximum temperatures, relative humidity, rainfall, sunshine, and cloud characteristics. At high elevations, minimum and maximum temperatures during the coldest days increased at a rate of about 0.6 degrees C/decade, whereas maximum temperatures during the warmest days increased at a rate of about 1.3 degrees C/decade. Rates of increase in the maximum, mean and minimum diurnal temperature range reached 0.6, 0.7, and 0.5 degrees C/decade. Maximum, mean and minimum relative humidity records showed reductions of about 1.8, 3.9 and 6.6%/decade. The total number of sunny days per month increased by almost 2.1 days. The headwaters exhibited no changes in rainfall totals, but evidenced an increased occurrence of unusually heavy rainfall events. Reductions in the amount of all cloud types over the area reached 1.9%/decade. At low elevations, changes in mean monthly temperatures and monthly rainfall totals exceeded +0.2 degrees C and -4% per decade, respectively. These striking changes might have contributed to the retreat of glacier icecaps and to the disappearance of high-altitude water bodies, as well as to the occurrence and rapid spread of natural and man-induced forest fires. Significant reductions in water supply, important disruptions of the integrity of high mountain ecosystems, and dramatic losses of biodiversity now follow steadily from the severe climatic conditions experienced by these fragile tropical environments.
Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma
2016-11-15
Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH₂O EC-10, ECH₂O EC-20, ECH₂O EC-5, and ECH₂O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH₂O EC-5 and ECH₂O TE, which also performed surprisingly well in saline conditions.
Apical extrusion of debris by supplementary files used for retreatment: An ex vivo comparative study
Pawar, Ajinkya M.; Pawar, Mansing; Metzger, Zvi; Thakur, Bhagyashree
2016-01-01
Aim: This study evaluated whether using supplementary files to remove root canal filling residues after ProTaper Universal Retreatment files (RFs) increased apical debris extrusion. Materials and Methods: Eighty mandibular premolars with a single root and canal were instrumented with the ProTaper Universal rotary system (SX-F3) and obturated. The samples were divided randomly into four groups (n = 20). Group 1 served as a control; only ProTaper Universal RFs D1-D3 were used, and the extruded debris was weighed. Groups 2, 3, and 4 were the experimental groups, receiving a twofold retreatment protocol: removal of the bulk, followed by the use of supplementary files. The bulk was removed by RFs, followed by the use of ProTaper NEXT (PTN), WaveOne (WO), and Self-Adjusting File (SAF) instruments for removal of the remaining root filling residues. Debris extruded apically was weighed and compared to the control group. Statistical analysis was performed using one-way analysis of variance (ANOVA) and the post hoc Tukey test. Results: All three experimental groups presented significant differences (P < 0.01). The post hoc Tukey test confirmed that Group 4 (SAF) exhibited significantly less (P < 0.01) debris extrusion among the three groups tested. Conclusion: SAF results in less extrusion of debris when used as a supplementary file to remove root-filling residues, compared to WO and PTN. PMID:27099416
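For reference, the ANOVA-plus-Tukey workflow used here is standard in scipy/statsmodels; the sketch below applies it to simulated debris weights. The group means and SDs are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical apically extruded debris weights (mg) for the three test groups
rng = np.random.default_rng(8)
ptn = rng.normal(0.60, 0.10, 20)     # ProTaper NEXT
wo = rng.normal(0.55, 0.10, 20)      # WaveOne
saf = rng.normal(0.35, 0.08, 20)     # Self-Adjusting File

print(f_oneway(ptn, wo, saf))        # one-way ANOVA across the three groups

weights = np.concatenate([ptn, wo, saf])
groups = ["PTN"] * 20 + ["WO"] * 20 + ["SAF"] * 20
print(pairwise_tukeyhsd(weights, groups))   # post hoc Tukey HSD pairwise tests
```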
On the kinetic and equilibrium shapes of icosahedral Al71Pd19Mn10 quasicrystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senabulya, Nancy; Xiao, Xianghui; Han, Insung
The dynamics of growth and relaxation of icosahedral single quasicrystals in a liquid phase were investigated using in situ synchrotron-based X-ray tomography. Here, our 4D studies (i.e., space- and time-resolved) provide direct evidence that the growth process of an Al71Pd19Mn10 quasicrystal is governed predominantly by bulk transport rather than attachment kinetics. This work is in agreement with theoretical predictions, which show that the pentagonal dodecahedron is not the minimum energy structure in Al-Pd-Mn icosahedral quasicrystals, but merely a growth shape characterized by non-zero anisotropic velocity. This transient shape transforms into a truncated dodecahedral Archimedean polyhedron once equilibrium has been attained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jie, E-mail: jie.yang@yale.edu; Cui, Sharon; Ma, T. P.
2013-11-25
We investigate the energy levels of electron traps in AlGaN/GaN high electron mobility transistors by the use of electron tunneling spectroscopy. Detailed analysis of a typical spectrum, obtained in a wide gate bias range and with both bias polarities, suggests the existence of electron traps both in the bulk of AlGaN and at the AlGaN/GaN interface. The energy levels of the electron traps have been determined to lie within a 0.5 eV band below the conduction band minimum of AlGaN, and there is strong evidence suggesting that these traps contribute to Frenkel-Poole conduction through the AlGaN barrier.
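For reference, Frenkel-Poole conduction is conventionally written in the standard textbook form below (not taken from this work), where $\phi_B$ is the trap barrier height, $E$ the field across the AlGaN barrier, and $\varepsilon_r$ the dynamic dielectric constant:

```latex
J \propto E \exp\!\left[-\frac{q\left(\phi_B - \sqrt{qE/(\pi \varepsilon_r \varepsilon_0)}\right)}{k_B T}\right]
```

A linear plot of $\ln(J/E)$ against $\sqrt{E}$ is the usual experimental signature of this mechanism.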
Atomistic Modeling of Surface and Bulk Properties of Cu, Pd and the Cu-Pd System
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Abel, Phillip; Mosca, Hugo O.; Gray, Hugh R. (Technical Monitor)
2002-01-01
The BFS (Bozzolo-Ferrante-Smith) method for alloys is applied to the study of the Cu-Pd system. A variety of issues are analyzed and discussed, including the properties of pure Cu and Pd crystals (surface energies, surface relaxations), Pd/Cu and Cu/Pd surface alloys, segregation of Pd (or Cu) in Cu (or Pd), the concentration dependence of the lattice parameter of the high-temperature fcc CuPd solid solution, the formation and properties of low-temperature ordered phases, and order-disorder transition temperatures. Emphasis is placed on the ability of the method to describe these properties on the basis of a minimum set of BFS universal parameters that uniquely characterize the Cu-Pd system.
Olds, Daniel; Wang, Hsiu -Wen; Page, Katharine L.
2015-09-04
In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction-based pair distribution function (PDF) data from systems whose morphological feature information content includes distances on the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of a finite experimental minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small-angle scattering (SAS) data. In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape function for any atomistic model.
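The finite-Qmin effect being simulated enters through the sine Fourier transform that defines the reduced PDF. A sketch of that transform by direct quadrature, with a toy S(Q); the Q range, grid, and model peak are arbitrary illustrations, not the paper's data.

```python
import numpy as np

def pdf_from_sq(q, sq, r):
    """Reduced PDF: G(r) = (2/pi) * Int_{Qmin}^{Qmax} Q [S(Q)-1] sin(Qr) dQ.

    Evaluated on a uniform Q grid by a simple Riemann sum. A finite
    experimental Qmin truncates the low-Q (small-angle) region and distorts
    G(r) at nanometer distances, which is the effect examined here.
    """
    dq = q[1] - q[0]
    integrand = q * (sq - 1.0) * np.sin(np.outer(r, q))   # shape (len(r), len(q))
    return (2.0 / np.pi) * integrand.sum(axis=1) * dq

# Toy S(Q) with a single correlation peak, cut off at a finite Qmin
q = np.linspace(0.5, 25.0, 2000)     # Qmin = 0.5 inverse angstroms (finite!)
sq = 1.0 + 0.5 * np.exp(-((q - 2.0) ** 2))
r = np.linspace(0.5, 20.0, 400)      # angstroms
g_r = pdf_from_sq(q, sq, r)
```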
NASA Astrophysics Data System (ADS)
Roehl, Jason L.
Diffusion of point defects on crystalline surfaces and in their bulk is an important and ubiquitous phenomenon affecting film quality, electronic properties and device functionality. A complete understanding of these diffusion processes enables one to predict and then control them. Such understanding includes knowledge of the structural, energetic and electronic properties of these native and non-native point defect diffusion processes. Direct experimental observation of the phenomenon is difficult, and microscopic theories of diffusion mechanisms and pathways abound. Thus, knowing the nature of the diffusion processes of specific point defects in given materials has been a challenging task for analytical theory as well as experiment. The recent advances in computing technology have been a catalyst for the rise of a third mode of investigation. The advent of tremendous computing power and breakthroughs in algorithmic development in computational applications of electronic density functional theory now enable direct computation of the diffusion process. This thesis demonstrates such a method applied to several different examples of point defect diffusion on the (001) surface of gallium arsenide (GaAs) and in the bulk of cadmium telluride (CdTe) and cadmium sulfide (CdS). All results presented in this work are ab initio, total-energy pseudopotential calculations within the local density approximation to density-functional theory. Single-particle wavefunctions were expanded in a plane-wave basis, and reciprocal-space k-point sampling was achieved with Monkhorst-Pack-generated k-point grids. Both surface and bulk computations employed a supercell approach using periodic boundary conditions. Ga adatom adsorption and diffusion processes were studied on two reconstructions of the GaAs(001) surface: the c(4x4) and c(4x4)-heterodimer reconstructions. On the GaAs(001)-c(4x4) surface reconstruction, two distinct sets of minima and transition sites were discovered for a Ga adatom relaxing from heights of 3 and 0.5 Å above the surface. These two sets show significant differences in the interaction of the Ga adatom with surface As dimers, and an electronic signature of the differences in this interaction was identified. The energetic barriers to diffusion were computed between various adsorption sites. Diffusion profiles for native Cd and S, adatom and vacancy, and non-native interstitial adatoms of Te, Cu and Cl were investigated in bulk wurtzite CdS. The interstitial diffusion paths considered in this work were chosen parallel to the c-axis, as this represents the path encountered by defects diffusing from the CdTe layer. Because of the lattice mismatch between zinc-blende CdTe and hexagonal wurtzite CdS, the c-axis in CdS is normal to the CdTe interface. The global minimum and maximum energy positions in the bulk unit cell vary for different diffusing species. This results in a significant variation in the bonding configurations and associated strain energies of the different extrema positions along the diffusion paths for various defects. The diffusion barriers range from a low of 0.42 eV for an S interstitial to a high of 2.18 eV for an S vacancy. The computed 0.66 eV barrier for a Cu interstitial is in good agreement with experimental values in the range of 0.58-0.96 eV reported in the literature. There exists an electronic signature in the local density of states for the s- and d-states of the Cu interstitial at the global maximum and global minimum energy positions.
The work presented in this thesis is an investigation into diffusion processes in semiconductor bulk and on semiconductor surfaces. The work provides information about these processes at a level of control unavailable experimentally, giving a detailed description of the physical and electronic properties associated with diffusion at its most basic level. Not only does this work provide information about GaAs, CdTe and CdS, it is intended to contribute to a foundation of knowledge that can be extended to other systems to expand our overall understanding of the diffusion process. (Abstract shortened by UMI.)
Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0-3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when the percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m in height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variable with fixed upper and lower bounds.
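A sketch of the jitter/logit/average procedure with statsmodels' linear quantile regression, for counts bounded on [0, 3]; the simulated covariate and the number of jitter repetitions are illustrative assumptions, not the study's settings.

```python
import numpy as np
import statsmodels.api as sm

def logistic_quantile_fit(y, X, tau, lower=0.0, upper=3.0, m=20, seed=0):
    """Bounded-count quantile regression via jittering.

    Counts in [lower, upper] are jittered to a continuous variable,
    logit-transformed onto the real line using bounds (lower, upper + 1),
    fit by linear quantile regression, and the m jittered fits averaged.
    """
    rng = np.random.default_rng(seed)
    Xc = sm.add_constant(X)
    betas = []
    for _ in range(m):
        z = y + rng.uniform(0.0, 1.0, size=len(y))          # jitter the counts
        z = np.clip(z, lower + 1e-6, upper + 1 - 1e-6)      # keep logit finite
        zt = np.log((z - lower) / (upper + 1 - z))          # logit transform
        betas.append(sm.QuantReg(zt, Xc).fit(q=tau).params)
    return np.mean(betas, axis=0)

def predict_count_quantile(beta, X, lower=0.0, upper=3.0):
    """Back-transform; quantiles are equivariant to monotone transformations."""
    eta = sm.add_constant(X) @ beta
    z = (lower + (upper + 1) * np.exp(eta)) / (1.0 + np.exp(eta))
    return np.floor(z)                                      # discrete scale

# Simulated fledgling counts (0-3) declining with one climate covariate
rng = np.random.default_rng(9)
x = rng.normal(size=300)[:, None]
y = np.clip(np.round(1.5 - 0.6 * x[:, 0] + rng.normal(0.0, 0.7, 300)), 0, 3)
beta = logistic_quantile_fit(y, x, tau=0.9)
q90 = predict_count_quantile(beta, x)   # conditional 90th-percentile counts
```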
Design and grayscale fabrication of beamfanners in a silicon substrate
NASA Astrophysics Data System (ADS)
Ellis, Arthur Cecil
2001-11-01
This dissertation addresses important first steps in the development of a grayscale fabrication process for multiple-phase diffractive optical elements (DOEs) in silicon. Specifically, this process was developed through the design, fabrication, and testing of 1-2 and 1-4 beamfanner arrays for 5-micron illumination. The 1-2 beamfanner arrays serve as a test of concept and a basic developmental step toward the construction of the 1-4 beamfanners. The beamfanners are 50 microns wide and have features with dimensions of between 2 and 10 microns. The Iterative Annular Spectrum Approach (IASA) method, developed by Steve Mellin of UAH, and the Boundary Element Method (BEM) are the design and testing tools used to create the beamfanner profiles and predict their performance. Fabrication of the beamfanners required the techniques of grayscale photolithography and reactive ion etching (RIE). A 1-4 silicon beamfanner array with 2-3 micron features was fabricated, but the small features and the contact photolithographic techniques available prevented its construction to specifications. A second and more successful attempt was made in which both 1-4 and 1-2 beamfanner arrays were fabricated with a 5-micron minimum feature size. Photolithography for the UAH array was contracted to MEMS-Optical of Huntsville, Alabama. A statistical repeatability study was performed on 14 photoresist arrays and the subsequent RIE processes used to etch the arrays into silicon. The variance in selectivity between the 14 processes was far greater than the variance between the individual etched features within each process. Specifically, the ratio of the variance of the selectivities averaged over each of the 14 etch processes to the variance of individual feature selectivities within the processes yielded a significance level below 0.1% by F-test, indicating that good etch-to-etch process repeatability was not attained. One of the 14 arrays had feature etch depths close enough to design specifications for optical testing, but 5-micron IR illumination of the 1-4 and 1-2 beamfanners yielded no convincing results of beam splitting in the detector plane 340 microns from the surface of the beamfanner array.
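The repeatability comparison described above is essentially a one-way analysis of variance: between-etch variance of selectivity is tested against within-etch variance of individual features. The sketch below illustrates that F-test with simulated data for 14 processes; the group sizes, means, and noise levels are assumptions, not the dissertation's measurements.

```python
# Minimal sketch: one-way ANOVA F-test of between-process vs.
# within-process variance in etch selectivity, on simulated data.
# The group count matches the 14 etch processes; all values are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_processes, features_per_array = 14, 20

# Simulate selectivities: large etch-to-etch spread, small within-etch spread
process_means = rng.normal(2.0, 0.5, n_processes)    # between-process variation
groups = [rng.normal(m, 0.05, features_per_array) for m in process_means]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
# A p-value far below 0.001 means the between-process variance dominates
# the within-process variance, i.e., poor etch-to-etch repeatability.
```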
Spectroscopic and Mechanical Properties of a New Generation of Bulk Fill Composites
Monterubbianesi, Riccardo; Orsini, Giovanna; Tosi, Giorgio; Conti, Carla; Librando, Vito; Procaccini, Maurizio; Putignano, Angelo
2016-01-01
Objectives: The aims of this study were to evaluate in vitro the degree of conversion and the microhardness properties of five bulk fill resin composites; in addition, the performance of two curing lamps used for composite polymerization was also analyzed. Materials and Methods: The following five resin-based bulk fill composites were tested: SureFil SDR®, Fill Up!™, Filtek™, SonicFill™, and SonicFill2™. Samples of 4 mm in thickness were prepared using Teflon molds filled in one increment and light-polymerized using two LED power units. Ten samples of each composite were cured using the Elipar S10 and 10 using the Demi Ultra. Additional samples of SonicFill2 (3 and 5 mm thick) were also tested. The degree of conversion (DC) was determined by Raman spectroscopy, while the Vickers microhardness (VMH) was evaluated using a microhardness tester. The experimental evaluation was carried out on the top and bottom sides immediately after curing (t0) and, on the bottom, after 24 h (t24). Two-way analysis of variance was applied to evaluate DC and VMH values. In all analyses, the level of significance was set at p < 0.05. Results: All bulk fill resin composites recorded satisfactory DC values on the top and bottom sides. At t0, the tops of SDR and SonicFill2 showed the highest DC values (85.56 ± 9.52 and 85.47 ± 1.90, respectively) when cured using the Elipar S10; using the Demi Ultra, SonicFill2 showed the highest DC value (90.53 ± 2.18). At t0, the highest DC values on the bottom sides were recorded by SDR (84.64 ± 11.68) when cured using the Elipar S10, and by Filtek (81.52 ± 4.14) using the Demi Ultra. On the top sides, the Demi Ultra lamp produced significantly higher DC values than the Elipar S10 (p < 0.05). SonicFill2 also reached suitable DC values on the bottom of 5 mm-thick samples. At t0, VMH values ranged between 24.4 and 69.18 for the Elipar S10, and between 26.5 and 67.3 for the Demi Ultra. Using both lamps, the lowest VMH values were shown by SDR and the highest by SonicFill2. At t24, all DC and VMH values had significantly increased. Conclusions: Differences in DC and VMH among materials appear to be both material- and curing-lamp-dependent. Even at t0, the three high-viscosity bulk composites showed higher VMH than the flowable or dual-curing composites. PMID:28082918
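For readers unfamiliar with the two-way ANOVA named above, the factor structure of this design (composite × curing lamp) can be set up as in the hedged sketch below. The data are simulated and the effect sizes invented; only the factor layout mirrors the study's design.

```python
# Minimal sketch: two-way ANOVA of Vickers microhardness (VMH) with
# composite and curing lamp as crossed factors. All values are simulated;
# this is not the study's data or analysis code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
composites = ['SDR', 'FillUp', 'Filtek', 'SonicFill', 'SonicFill2']
lamps = ['EliparS10', 'DemiUltra']

rows = []
for c_i, comp in enumerate(composites):
    for lamp in lamps:
        for _ in range(10):                      # 10 samples per cell
            vmh = 30 + 8 * c_i + (2 if lamp == 'DemiUltra' else 0)
            rows.append({'composite': comp, 'lamp': lamp,
                         'vmh': vmh + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# Fit the full factorial model and report Type II F-tests at p < 0.05
model = smf.ols('vmh ~ C(composite) * C(lamp)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```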
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating them with linear cues retrieved from single imagery. Cue integration between the two datasets is facilitated by creating new topological features connecting the initial model and the image lines, from which new model hypotheses (variants of the initial model) are produced. We adopt the Minimum Description Length (MDL) principle for comparing the model candidates and selecting the optimal model, considering the balanced trade-off between model closeness and model complexity. Our preliminary results on the Vaihingen data provided by ISPRS WG III/4 demonstrate that the image-driven modelling cues can compensate for the limitations posed by LiDAR data in rooftop modelling.
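The MDL trade-off mentioned above can be made concrete with a generic two-part score: a data-fit term plus a complexity penalty, minimized over candidate models. The sketch below is a schematic BIC-like approximation of that idea, not the paper's actual encoding; the candidate models, residuals, and parameter counts are invented for illustration.

```python
# Minimal sketch of two-part MDL model selection: description length =
# data-fit cost + model-complexity cost. This is a generic BIC-like
# approximation, not the paper's exact formulation.
import numpy as np

def mdl_score(residuals: np.ndarray, n_params: int) -> float:
    """Two-part description length: closeness of fit plus complexity."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    fit_cost = 0.5 * n * np.log(rss / n + 1e-12)   # cost of encoding residuals
    complexity_cost = 0.5 * n_params * np.log(n)   # cost of encoding the model
    return fit_cost + complexity_cost

# Hypothetical candidates: residuals of LiDAR points against each rooftop
# hypothesis, and the number of parameters (planes, edges) each one uses.
rng = np.random.default_rng(3)
candidates = {
    'initial_model':      (rng.normal(0, 0.30, 1000), 8),
    'refined_with_lines': (rng.normal(0, 0.12, 1000), 11),
    'over_segmented':     (rng.normal(0, 0.10, 1000), 40),
}
best = min(candidates, key=lambda k: mdl_score(*candidates[k]))
print(best)  # a refinement wins only if its extra complexity pays for itself
```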