ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
Peyre, Hugo; Leplège, Alain; Coste, Joël
2011-03-01
Missing items are common in quality of life (QoL) questionnaires and present a challenge for research in this field. It remains unclear which of the various methods proposed to deal with missing data performs best in this context. We compared personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques using various realistic simulation scenarios of item missingness in QoL questionnaires constructed within the framework of classical test theory. Samples of 300 and 1,000 subjects were randomly drawn from the 2003 INSEE Decennial Health Survey (of 23,018 subjects representative of the French population and having completed the SF-36) and various patterns of missing data were generated according to three different item non-response rates (3, 6, and 9%) and three types of missing data (Little and Rubin's "missing completely at random," "missing at random," and "missing not at random"). The missing data methods were evaluated in terms of accuracy and precision for the analysis of one descriptive and one association parameter for three different scales of the SF-36. For all item non-response rates and types of missing data, multiple imputation and full information maximum likelihood appeared superior to the personal mean score and especially to hot deck in terms of accuracy and precision; however, the use of personal mean score was associated with insignificant bias (relative bias <2%) in all studied situations. Whereas multiple imputation and full information maximum likelihood are confirmed as reference methods, the personal mean score appears nonetheless appropriate for dealing with items missing from completed SF-36 questionnaires in most situations of routine use. These results can reasonably be extended to other questionnaires constructed according to classical test theory.
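As a concrete illustration of the personal mean score technique evaluated above (a minimal sketch with made-up item responses, not SF-36 data): each respondent's missing items are replaced by the mean of that respondent's own completed items on the same scale. Imputation is commonly restricted to respondents who answered at least half the items; that threshold is an assumption here.

```python
import numpy as np

def personal_mean_score(items, min_answered=0.5):
    """Impute missing items with the respondent's own mean of the
    completed items on the same scale (the 'personal mean score').
    Rows with too few answered items are left unimputed (NaN)."""
    items = np.asarray(items, dtype=float)
    out = items.copy()
    for i, row in enumerate(items):
        answered = ~np.isnan(row)
        if answered.mean() >= min_answered and not answered.all():
            out[i, ~answered] = row[answered].mean()
    return out

# A respondent who skipped one of four items on a 1-5 scale:
resp = np.array([[4.0, np.nan, 5.0, 3.0]])
print(personal_mean_score(resp))  # missing item -> mean of 4, 5, 3 = 4.0
```

Because the imputed value is computed per respondent rather than per item, the method preserves between-person differences, which is consistent with the small bias the study reports for it.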
NASA Technical Reports Server (NTRS)
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.
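A minimal sketch of the kind of extreme-value statistics involved, not the paper's model: assuming the received power in an overmoded enclosure is exponentially distributed about the power-balance mean (a common assumption in reverberant EMC statistics, and an assumption here), the expected maximum over N independent samples has the closed form mu times the N-th harmonic number, which a quick Monte Carlo check reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_power = 1.0   # statistical mean immission from the power balance (assumed)
n_samples = 200    # e.g. independent frequency/stir states (assumed)

# Assumed exponential statistics for received power in a reverberant enclosure:
draws = rng.exponential(mean_power, size=(20000, n_samples))
sim_max = draws.max(axis=1).mean()

# Closed form for the expected maximum of n iid exponentials: mu * H_n
harmonic = np.sum(1.0 / np.arange(1, n_samples + 1))
print(sim_max, mean_power * harmonic)  # the two agree closely
```

The slow (logarithmic) growth of the expected maximum with sample count is what makes a mean-plus-variance power balance model usable for design margins.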
76 FR 56141 - Notice of Intent To Request New Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
Universally-Usable Interactive Electronic Physics Instructional And Educational Materials
NASA Astrophysics Data System (ADS)
Gardner, John
2000-03-01
Recent developments of technologies that promote full accessibility of electronic information by future generations of people with print disabilities will be described. ("Print disabilities" include low vision, blindness, and dyslexia.) The guiding philosophy of these developments is that information should be created and transmitted in a form that is as display-independent as possible, and that the user should have maximum freedom over how that information is to be displayed. This philosophy leads to maximum usability by everybody and is, in the long run, the only way to assure truly equal access. Research efforts to be described include access to mathematics and scientific notation and to graphs, tables, charts, diagrams, and general object-oriented graphics.
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2012-01-01
Fraction of missing information [lambda][subscript j] is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…
ERIC Educational Resources Information Center
Acock, Alan C.
2005-01-01
Less than optimum strategies for missing values can produce biased estimates, distorted statistical power, and invalid conclusions. After reviewing traditional approaches (listwise, pairwise, and mean substitution), selected alternatives are covered including single imputation, multiple imputation, and full information maximum likelihood…
Best practices for missing data management in counseling psychology.
Schlomer, Gabriel L; Bauman, Sheri; Card, Noel A
2010-01-01
This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common strategies for dealing with them are described. The authors provide an illustration in which data were simulated and evaluate 3 methods of handling missing data: mean substitution, multiple imputation, and full information maximum likelihood. Results suggest that mean substitution is a poor method for handling missing data, whereas both multiple imputation and full information maximum likelihood are recommended alternatives to this approach. The authors suggest that researchers fully consider and report the amount and pattern of missing data and the strategy for handling those data in counseling psychology research and that editors advise researchers of this expectation.
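The mean substitution pitfall the authors identify can be seen in a small simulation (a sketch with simulated data, not the article's): substituting the observed mean for values missing completely at random shrinks the variance of the imputed variable and attenuates its correlation with other variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)   # true corr(x, y) = 0.6

# Delete 30% of y completely at random, then mean-substitute:
miss = rng.random(n) < 0.3
y_obs = y.copy()
y_obs[miss] = np.nan
y_mean_sub = np.where(miss, np.nanmean(y_obs), y)

r_full = np.corrcoef(x, y)[0, 1]
r_sub = np.corrcoef(x, y_mean_sub)[0, 1]
print(r_full, r_sub)                 # mean substitution attenuates the correlation
print(y.var(), y_mean_sub.var())     # and shrinks the variance
```

Multiple imputation and FIML avoid this distortion because they model the uncertainty in the missing values instead of replacing them with a constant.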
ERIC Educational Resources Information Center
Ferguson, Anthony W.
2000-01-01
Discusses new ways of selecting information for digital libraries. Topics include increasing the quantity of information acquired versus item by item selection that is more costly than the value it adds; library-publisher relationships; netLibrary; electronic journals; and the SPARC (Scholarly Publishing and Academic Resources Coalition)…
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that its density is discontinuous and flat over each fractile interval. We present a heuristic approximation for a distribution that, in addition to satisfying its fractile constraints, is known to be continuous, and work through full examples to illustrate the approach.
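The "flat over each fractile interval" property has a simple constructive form: with no information beyond cumulative probabilities at a few points, the maximum entropy density spreads each interval's probability mass uniformly over that interval, so the CDF is piecewise linear and the density jumps at each fractile. A sketch, with assumed fractile values:

```python
import numpy as np

# Assessed fractiles: P(X <= x_k) = p_k (values assumed for illustration)
xs = np.array([0.0, 2.0, 5.0, 10.0])   # interval endpoints
ps = np.array([0.0, 0.25, 0.75, 1.0])  # cumulative probabilities at those points

# The maximum entropy density matching these constraints is uniform on each
# interval, with mass equal to the probability increment over that interval:
widths = np.diff(xs)
mass = np.diff(ps)
density = mass / widths   # flat on each interval, discontinuous at fractiles
print(density)            # integrates to 1 over the support
```

The discontinuities visible in `density` are exactly the deficiency the paper's heuristic approximation for continuous distributions is meant to repair.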
Missing Data Imputation versus Full Information Maximum Likelihood with Second-Level Dependencies
ERIC Educational Resources Information Center
Larsen, Ross
2011-01-01
Missing data in the presence of upper level dependencies in multilevel models have never been thoroughly examined. Whereas first-level subjects are independent over time, the second-level subjects might exhibit nonzero covariances over time. This study compares 2 missing data techniques in the presence of a second-level dependency: multiple…
Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…
Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models
ERIC Educational Resources Information Center
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent
2015-01-01
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…
Stimulus-dependent Maximum Entropy Models of Neural Population Codes
Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
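A pairwise (static) maximum entropy model of the kind the SDME extends can be written down exactly for a small population by enumerating all codewords; the fields and couplings below are arbitrary toy values, not parameters fitted to retinal data.

```python
import numpy as np
from itertools import product

n = 5  # small enough to enumerate all 2**n spiking/silence codewords exactly
rng = np.random.default_rng(2)
h = rng.normal(scale=0.5, size=n)        # single-cell biases (toy values)
J = np.triu(rng.normal(scale=0.3, size=(n, n)), 1)  # pairwise couplings (toy)

states = np.array(list(product([0, 1], repeat=n)))
# Pairwise maximum entropy form: P(x) proportional to exp(h.x + x'Jx)
log_weight = states @ h + np.einsum('si,ij,sj->s', states, J, states)
p = np.exp(log_weight)
p /= p.sum()   # normalized distribution over all 32 codewords

entropy_bits = -np.sum(p * np.log2(p))
print(entropy_bits)  # entropy of the model vocabulary, in bits
```

For real populations the state space is too large to enumerate, which is why fitting relies on sampling or mean-field methods; the SDME additionally makes `h` a function of the stimulus.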
ERIC Educational Resources Information Center
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…
ERIC Educational Resources Information Center
Enders, Craig K.
2008-01-01
Recent missing data studies have argued in favor of an "inclusive analytic strategy" that incorporates auxiliary variables into the estimation routine, and Graham (2003) outlined methods for incorporating auxiliary variables into structural equation analyses. In practice, the auxiliary variables often have missing values, so it is reasonable to…
Generalized Full-Information Item Bifactor Analysis
Cai, Li; Yang, Ji Seung; Hansen, Mark
2011-01-01
Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of multidimensional item response theory models for an arbitrary mixing of dichotomous, ordinal, and nominal items. The extended item bifactor model also enables the estimation of latent variable means and variances when data from more than one group are present. Generalized user-defined parameter restrictions are permitted within or across groups. We derive an efficient full-information maximum marginal likelihood estimator. Our estimation method achieves substantial computational savings by extending Gibbons and Hedeker’s (1992) bifactor dimension reduction method so that the optimization of the marginal log-likelihood only requires two-dimensional integration regardless of the dimensionality of the latent variables. We use simulation studies to demonstrate the flexibility and accuracy of the proposed methods. We apply the model to study cross-country differences, including differential item functioning, using data from a large international education survey on mathematics literacy. PMID:21534682
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei
The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. To date, the actual yield cannot be predicted, but it can be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and cofactor regeneration involved. Here we found that in many cases, dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximal theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or a local maximum before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept with a toy model to demonstrate the principle of kinetic limitations on yield. We then used a full-scale E. coli model (193 reactions, 153 metabolites), and this approach was successful in E. coli for predicting production of isobutanol: the calculated KAY values are consistent with experimental data for three genotypes previously published.
The effect of prenatal care on birthweight: a full-information maximum likelihood approach.
Rous, Jeffrey J; Jewell, R Todd; Brown, Robert W
2004-03-01
This paper uses a full-information maximum likelihood estimation procedure, the Discrete Factor Method, to estimate the relationship between birthweight and prenatal care. This technique controls for the potential biases surrounding both the sample selection of the pregnancy-resolution decision and the endogeneity of prenatal care. In addition, we use the actual number of prenatal care visits; other studies have normally measured prenatal care as the month care is initiated. We estimate a birthweight production function using 1993 data from the US state of Texas. The results underscore the importance of correcting for estimation problems. Specifically, a model that does not control for sample selection and endogeneity overestimates the benefit of an additional visit for women who have relatively few visits. This overestimation may indicate 'positive fetal selection,' i.e., women who did not abort may have healthier babies. Also, a model that does not control for self-selection and endogeneity predicts that past 17 visits, an additional visit leads to lower birthweight, while a model that corrects for these estimation problems predicts a positive effect for additional visits. This result shows the effect of mothers with less healthy fetuses making more prenatal care visits, known as 'adverse selection' in prenatal care. Copyright 2003 John Wiley & Sons, Ltd.
Statistical mechanics of letters in words
Stephens, Greg J.; Bialek, William
2013-01-01
We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ~92% of the multi-information in four-letter words and even “discovering” words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ~68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon. PMID:20866490
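The multi-information the authors report is the sum of single-letter entropies minus the joint entropy of whole words, i.e. the total correlation among letter positions. A sketch on a toy two-letter corpus (invented, equal-frequency words):

```python
import numpy as np
from collections import Counter

# Multi-information: sum of single-position entropies minus the joint entropy.
# Toy corpus of equal-frequency two-letter "words" (invented, for illustration):
words = ["at", "an", "it", "in", "on", "of"]

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

joint = entropy(Counter(words))
marginals = sum(entropy(Counter(w[i] for w in words)) for i in range(2))
multi_info = marginals - joint
print(multi_info)  # bits of correlation between the letter positions
```

A pairwise maximum entropy model of letters is judged by how much of this quantity it captures; the paper reports about 92% for four-letter words.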
Mapping hurricane rita inland storm tide
Berenbrock, C.; Mason, R.R.; Blanchard, S.F.
2009-01-01
Flood-inundation data are most useful for decision makers when presented in the context of maps of affected communities and (or) areas. But because the data are scarce and rarely cover the full extent of the flooding, interpolation and extrapolation of the information are needed. Many geographic information systems provide various interpolation tools, but these tools often ignore the effects of the topographic and hydraulic features that influence flooding. A barrier mapping method was developed to improve maps of storm tide produced by Hurricane Rita. Maps were developed for the maximum storm tide and at 3-h intervals from midnight (00:00 hours) through noon (12:00 hours) on 24 September 2005. The improved maps depict storm-tide elevations and the extent of flooding. The extent of storm-tide inundation from the improved maximum storm-tide map was compared with the extent of flood inundation from a map prepared by the Federal Emergency Management Agency (FEMA). The boundaries from these two maps generally compared quite well especially along the Calcasieu River. Also a cross-section profile that parallels the Louisiana coast was developed from the maximum storm-tide map and included FEMA high-water marks. © 2009 Blackwell Publishing Ltd.
Mapping Hurricane Rita inland storm tide
Berenbrock, Charles; Mason, Jr., Robert R.; Blanchard, Stephen F.; Simonovic, Slobodan P.
2009-01-01
Flood-inundation data are most useful for decision makers when presented in the context of maps of affected communities and (or) areas. But because the data are scarce and rarely cover the full extent of the flooding, interpolation and extrapolation of the information are needed. Many geographic information systems (GIS) provide various interpolation tools, but these tools often ignore the effects of the topographic and hydraulic features that influence flooding. A barrier mapping method was developed to improve maps of storm tide produced by Hurricane Rita. Maps were developed for the maximum storm tide and at 3-hour intervals from midnight (0000 hour) through noon (1200 hour) on September 24, 2005. The improved maps depict storm-tide elevations and the extent of flooding. The extent of storm-tide inundation from the improved maximum storm-tide map was compared to the extent of flood-inundation from a map prepared by the Federal Emergency Management Agency (FEMA). The boundaries from these two maps generally compared quite well especially along the Calcasieu River. Also a cross-section profile that parallels the Louisiana coast was developed from the maximum storm-tide map and included FEMA high-water marks.
Evolution of cooperation and skew under imperfect information
Akçay, Erol; Meirowitz, Adam; Ramsay, Kristopher W.; Levin, Simon A.
2012-01-01
The evolution of cooperation in nature and human societies depends crucially on how the benefits from cooperation are divided and whether individuals have complete information about their payoffs. We tackle these questions by adopting a methodology from economics called mechanism design. Focusing on reproductive skew as a case study, we show that full cooperation may not be achievable due to private information over individuals’ outside options, regardless of the details of the specific biological or social interaction. Further, we consider how the structure of the interaction can evolve to promote the maximum amount of cooperation in the face of the informational constraints. Our results point to a distinct avenue for investigating how cooperation can evolve when the division of benefits is flexible and individuals have private information. PMID:22908269
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically-scaled model, the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
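A sketch of the selection criterion, assuming a precomputed sensitivity matrix (the random numbers below stand in for model-derived sensitivities): because the sum of squared sensitivities is additive over wells, the unconstrained optimum reduces to ranking candidates by squared row norm. It is the design constraints and the cost of computing the sensitivities that motivate the GA and the POD reduction described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_candidates, n_params = 50, 4
# Sensitivities of observed head at each candidate well location to each
# unknown conductivity parameter (random stand-ins for model output):
S = rng.normal(size=(n_candidates, n_params))

def select_wells(S, n_wells):
    """Maximal-information criterion: the sum of squared sensitivities is
    additive over wells, so without extra constraints the optimum is just
    the n_wells candidates with the largest squared row norms."""
    scores = np.sum(S**2, axis=1)
    return np.sort(np.argsort(-scores)[:n_wells])

chosen = select_wells(S, 5)
print(chosen)
```

Once spacing or budget constraints couple the choices, candidate subsets can no longer be scored independently, and a combinatorial search such as a GA becomes necessary.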
Kinetically accessible yield (KAY) for redirection of metabolism to produce exo-metabolites
Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei; ...
2017-04-05
The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. To date, the actual yield cannot be predicted, but it can be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and cofactor regeneration involved. Here we found that in many cases, dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximal theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or a local maximum before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept with a toy model to demonstrate the principle of kinetic limitations on yield. We then used a full-scale E. coli model (193 reactions, 153 metabolites), and this approach was successful in E. coli for predicting production of isobutanol: the calculated KAY values are consistent with experimental data for three genotypes previously published.
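The idea that kinetics rather than stoichiometry can bound the yield can be illustrated with a one-variable toy (invented kinetics, not the paper's E. coli model): if an intermediate drains through a substrate-inhibited enzyme, steady states exist only up to the enzyme's maximum achievable flux, and redirecting more than that leaves no steady state at all.

```python
import numpy as np

# Toy single-intermediate pathway: influx v_in drains through a substrate-
# inhibited enzyme, v(x) = Vmax * x / (K + x + x**2 / Ki). A steady state
# requires v_in = v(x); because v(x) has an interior maximum, fluxes above
# that maximum admit no steady state, no matter what stoichiometry allows.
Vmax, K, Ki = 10.0, 1.0, 4.0   # toy kinetic constants (assumed)

x = np.linspace(1e-3, 50.0, 200_000)
v = Vmax * x / (K + x + x**2 / Ki)
kay = v.max()   # largest flux the pathway can carry at steady state
print(kay)      # strictly below Vmax: kinetics, not stoichiometry, set it
```

For this rate law the maximum sits at x = sqrt(K * Ki), giving a kinetically accessible flux of 5.0 here versus a Vmax of 10.0, a simple analogue of a KAY below the theoretical yield.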
Tatari, K; Smets, B F; Albrechtsen, H-J
2013-10-15
A bench-scale assay was developed to obtain site-specific nitrification biokinetic information from biological rapid sand filters employed in groundwater treatment. The experimental set-up uses granular material subsampled from a full-scale filter, packed in a column, and operated with controlled and continuous hydraulic and ammonium loading. Flowrates and flow recirculation around the column are chosen to mimic full-scale hydrodynamic conditions and minimize axial gradients. A reference ammonium loading rate is calculated based on the average loading experienced in the active zone of the full-scale filter. Effluent concentrations of ammonium are analyzed when the bench-scale column is subject to reference loading, from which removal rates are calculated. Subsequently, removal rates above the reference loading are measured by imposing short-term loading variations. A critical loading rate corresponding to the maximum removal rate can be inferred. The assay was successfully applied to characterize biokinetic behavior from a test rapid sand filter; removal rates at reference loading matched full-scale observations, and a maximum removal capacity of 6.9 g NH4(+)-N/m(3) packed sand/h was readily determined at a loading of 7.5 g NH4(+)-N/m(3) packed sand/h. This assay, with conditions reflecting full-scale observations, and where the biological activity is subject to minimal physical disturbance, provides a simple and fast, yet powerful tool to gain insight into nitrification kinetics in rapid sand filters. Copyright © 2013 Elsevier Ltd. All rights reserved.
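The removal rates in such an assay follow from a simple mass balance over the column; a sketch with illustrative numbers, not the paper's data:

```python
# Mass-balance estimate of the volumetric ammonium removal rate in a packed
# column (illustrative values; units chosen so mg/L equals g/m3):
flow = 2.0                 # L/h through the column
c_in, c_out = 0.50, 0.05   # mg NH4+-N per L, influent and effluent
sand_volume = 0.2          # L of packed sand in the column

removal_rate = flow * (c_in - c_out) / sand_volume
print(removal_rate)        # g NH4+-N per m3 packed sand per h
```

Repeating this calculation while stepping up the influent loading, as the assay does, traces out the removal-versus-loading curve whose plateau is the maximum removal capacity.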
Kirby, James B.; Bollen, Kenneth A.
2009-01-01
Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054
Kaufmann, Anton
2010-07-30
Elemental compositions (ECs) can be elucidated by evaluating the high-resolution mass spectra of unknown or suspected unfragmented analyte ions. Classical approaches utilize the exact mass of the monoisotopic peak (M + 0) and the relative abundance of isotope peaks (M + 1 and M + 2). The availability of high-resolution instruments like the Orbitrap currently permits mass resolutions up to 100,000 full width at half maximum. This not only allows the determination of relative isotopic abundances (RIAs), but also the extraction of other diagnostic information from the spectra, such as fully resolved signals originating from (34)S isotopes and fully or partially resolved signals related to (15)N isotopes (isotopic fine structure). Fully and partially resolved peaks can be evaluated by visual inspection of the measured peak profiles. This approach is shown to be capable of correctly discarding many of the EC candidates which were proposed by commercial EC calculating algorithms. Using this intuitive strategy significantly extends the upper mass range for the successful elucidation of ECs. Copyright 2010 John Wiley & Sons, Ltd.
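The first screening step, matching candidate ECs against the accurate monoisotopic mass, can be sketched as follows (monoisotopic masses from standard isotope tables; the measured value and candidate formulas are hypothetical):

```python
# Screen candidate elemental compositions against a measured accurate mass.
# Monoisotopic masses in Da (standard values):
MONO = {'C': 12.0, 'H': 1.0078250319, 'N': 14.0030740052,
        'O': 15.9949146221, 'S': 31.97207069}

def mono_mass(formula):
    """formula given as element counts, e.g. {'C': 6, 'H': 12, 'O': 6}."""
    return sum(MONO[el] * n for el, n in formula.items())

measured = 180.0634   # hypothetical measured neutral monoisotopic mass, Da
candidates = {'C6H12O6': {'C': 6, 'H': 12, 'O': 6},
              'C9H12N2O2': {'C': 9, 'H': 12, 'N': 2, 'O': 2}}
for name, f in candidates.items():
    ppm = (mono_mass(f) - measured) / measured * 1e6
    print(name, round(mono_mass(f), 4), round(ppm, 1))
```

Only the first candidate falls within a few ppm of the measured mass; the isotopic fine structure evaluation the paper describes then discards further candidates that exact mass alone cannot separate.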
Acconcia, G; Labanca, I; Rech, I; Gulinatti, A; Ghioni, M
2017-02-01
The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
Tire-to-Surface Friction-Coefficient Measurements with a C-123B Airplane on Various Runway Surfaces
NASA Technical Reports Server (NTRS)
Sawyer, Richard H.; Kolnick, Joseph J.
1959-01-01
An investigation was conducted to obtain information on the tire-to-surface friction coefficients available in aircraft braking during the landing run. The tests were made with a C-123B airplane on both wet and dry concrete and bituminous pavements and on snow-covered and ice surfaces at speeds from 12 to 115 knots. Measurements were made of the maximum (incipient skidding) friction coefficient, the full-skidding (locked wheel) friction coefficient, and the wheel slip ratio during braking.
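The slip ratio measured during braking relates ground speed to wheel rotation; a minimal sketch of the standard definition, with invented values:

```python
# Wheel slip ratio during braking: 0 = free rolling, 1 = locked wheel.
def slip_ratio(ground_speed, wheel_speed, tire_radius):
    """ground_speed in m/s, wheel_speed (angular) in rad/s, radius in m."""
    return 1.0 - wheel_speed * tire_radius / ground_speed

# A wheel turning 20% slower than free rolling:
v = 30.0   # m/s ground speed
r = 0.5    # m tire radius
omega = 0.8 * v / r
print(slip_ratio(v, omega, r))  # 0.2; a locked wheel would give 1.0
```

The maximum (incipient skidding) friction coefficient occurs at an intermediate slip ratio, while the full-skidding coefficient corresponds to slip ratio 1.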
Full-Envelope Launch Abort System Performance Analysis Methodology
NASA Technical Reports Server (NTRS)
Aubuchon, Vanessa V.
2014-01-01
The implementation of a new dispersion methodology is described, which disperses abort initiation altitude or time along with all other Launch Abort System (LAS) parameters during Monte Carlo simulations. In contrast, the standard methodology assumes that an abort initiation condition is held constant (e.g., aborts initiated at the altitude for Mach 1, the altitude for maximum dynamic pressure, etc.) while dispersing other LAS parameters. The standard method results in large gaps in performance information due to the discrete nature of initiation conditions, while the full-envelope dispersion method provides a significantly more comprehensive assessment of LAS abort performance over the full launch vehicle ascent flight envelope and identifies performance "pinch-points" that may occur at flight conditions outside of those contained in the discrete set. The new method has significantly increased the fidelity of LAS abort simulations and confidence in the results.
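The difference between the two dispersion strategies can be shown with a toy Monte Carlo sketch; the margin curve, altitude range, and discrete initiation conditions below are hypothetical stand-ins, not actual LAS data.

```python
import math
import random

def margin(h):
    # Hypothetical performance-margin curve versus abort-initiation
    # altitude (km), with a dip (a "pinch-point") between the classic
    # discrete initiation conditions.
    return 1.0 - 0.6 * math.exp(-((h - 9.0) / 1.5) ** 2)

# standard method: abort initiation held at discrete conditions
discrete = [5.0, 12.0]  # e.g., Mach 1 and max-q altitudes (assumed values)

# full-envelope method: initiation altitude dispersed like any other input
random.seed(0)
dispersed = [random.uniform(3.0, 15.0) for _ in range(2000)]

worst_discrete = min(margin(h) for h in discrete)
worst_dispersed = min(margin(h) for h in dispersed)
# the dispersed sweep exposes the dip near 9 km that the discrete set misses
print(worst_discrete > worst_dispersed)  # -> True
```

The dispersed sweep finds the worst-case flight condition between the discrete points, which is exactly the gap the abstract describes.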
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... policy for information reported on fuel ethanol production capacity, (both nameplate and maximum... fuel ethanol production capacity, (both nameplate and maximum sustainable capacity) on Form EIA-819 as... treat all information reported on fuel ethanol production capacity, (both nameplate and maximum...
Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)
Meyer, Karin
2008-01-01
Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112
Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor; Essex, M
2015-05-01
To improve the methodology of HIV cluster analysis, we addressed how analysis of HIV clustering is associated with parameters that can affect the outcome of viral clustering. The extent of HIV clustering and tree certainty was compared between 401 HIV-1C near full-length genome sequences and subgenomic regions retrieved from the LANL HIV Database. Sliding window analysis was based on 99 windows of 1,000 bp and 45 windows of 2,000 bp. Potential associations between the extent of HIV clustering and sequence length and the number of variable and informative sites were evaluated. The near full-length genome HIV sequences showed the highest extent of HIV clustering and the highest tree certainty. At the bootstrap threshold of 0.80 in maximum likelihood (ML) analysis, 58.9% of near full-length HIV-1C sequences but only 15.5% of partial pol sequences (ViroSeq) were found in clusters. Among HIV-1 structural genes, pol showed the highest extent of clustering (38.9% at a bootstrap threshold of 0.80), although it was significantly lower than in the near full-length genome sequences. The extent of HIV clustering was significantly higher for sliding windows of 2,000 bp than 1,000 bp. We found a strong association between the sequence length and proportion of HIV sequences in clusters, and a moderate association between the number of variable and informative sites and the proportion of HIV sequences in clusters. In HIV cluster analysis, the extent of detectable HIV clustering is directly associated with the length of viral sequences used, as well as the number of variable and informative sites. Near full-length genome sequences could provide the most informative HIV cluster analysis. Selected subgenomic regions with a high extent of HIV clustering and high tree certainty could also be considered as a second choice.
Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor
2015-01-01
To improve the methodology of HIV cluster analysis, we addressed how analysis of HIV clustering is associated with parameters that can affect the outcome of viral clustering. The extent of HIV clustering and tree certainty was compared between 401 HIV-1C near full-length genome sequences and subgenomic regions retrieved from the LANL HIV Database. Sliding window analysis was based on 99 windows of 1,000 bp and 45 windows of 2,000 bp. Potential associations between the extent of HIV clustering and sequence length and the number of variable and informative sites were evaluated. The near full-length genome HIV sequences showed the highest extent of HIV clustering and the highest tree certainty. At the bootstrap threshold of 0.80 in maximum likelihood (ML) analysis, 58.9% of near full-length HIV-1C sequences but only 15.5% of partial pol sequences (ViroSeq) were found in clusters. Among HIV-1 structural genes, pol showed the highest extent of clustering (38.9% at a bootstrap threshold of 0.80), although it was significantly lower than in the near full-length genome sequences. The extent of HIV clustering was significantly higher for sliding windows of 2,000 bp than 1,000 bp. We found a strong association between the sequence length and proportion of HIV sequences in clusters, and a moderate association between the number of variable and informative sites and the proportion of HIV sequences in clusters. In HIV cluster analysis, the extent of detectable HIV clustering is directly associated with the length of viral sequences used, as well as the number of variable and informative sites. Near full-length genome sequences could provide the most informative HIV cluster analysis. Selected subgenomic regions with a high extent of HIV clustering and high tree certainty could also be considered as a second choice. PMID:25560745
Selecting band combinations with thematic mapper data
NASA Technical Reports Server (NTRS)
Sheffield, C. A.
1983-01-01
A problem arises in making color composite images because there are 210 different possible color presentations of TM three-band images. A method is given for reducing that 210 to a single choice, decided by the statistics of a scene or subscene, and taking into full account any correlations that exist between different bands. Instead of using total variance as the measure for information content of the band triplets, the ellipsoid of maximum volume is selected which discourages selection of bands with high correlation. The band triplet is obtained by computing and ranking in order the determinants of each 3 x 3 principal submatrix of the original matrix M. After selection of the best triplet, the assignment of colors is made by using the actual variances (the diagonal elements of M): green (maximum variance), red (second largest variance), blue (smallest variance).
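The selection procedure described above (rank the determinants of the 3 x 3 principal submatrices, then assign colors by variance) can be sketched in a few lines. The covariance matrix below is a made-up 6-band example for brevity; the actual TM case, with seven bands, gives C(7,3) = 35 triplets times 3! = 6 color orderings, hence the 210 presentations mentioned in the abstract.

```python
from itertools import combinations

# Made-up 6-band covariance matrix (symmetric); bands 0 and 1 are
# strongly correlated, bands 3-5 have large, weakly correlated variances.
M = [[30, 29, 1,  2,  1,  1],
     [29, 30, 1,  2,  1,  1],
     [ 1,  1, 4,  1,  1,  1],
     [ 2,  2, 1, 16,  3,  2],
     [ 1,  1, 1,  3, 25,  4],
     [ 1,  1, 1,  2,  4, 36]]

def det3(a):
    # determinant of a 3x3 matrix (proportional to squared ellipsoid volume)
    return (a[0][0] * (a[1][1]*a[2][2] - a[1][2]*a[2][1])
          - a[0][1] * (a[1][0]*a[2][2] - a[1][2]*a[2][0])
          + a[0][2] * (a[1][0]*a[2][1] - a[1][1]*a[2][0]))

def best_triplet(M):
    # rank band triplets by the determinant of the 3x3 principal submatrix
    n = len(M)
    best = max(combinations(range(n), 3),
               key=lambda t: det3([[M[i][j] for j in t] for i in t]))
    # assign colors by variance: green = largest, red = second, blue = smallest
    order = sorted(best, key=lambda i: M[i][i], reverse=True)
    return best, dict(zip(("green", "red", "blue"), order))

print(best_triplet(M))  # -> ((0, 4, 5), {'green': 5, 'red': 0, 'blue': 4})
```

A pure total-variance criterion would favor bands 0, 1 and 5 (variances 30, 30, 36), but the determinant penalizes the strongly correlated 0-1 pair, so band 4 is chosen instead, which is exactly the discouragement of correlated bands the abstract describes.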
Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.
Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H
2018-01-01
To construct CFA, MCFA, and maximum MCFA models with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Factor Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling the multilevel structure of complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA, and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses, and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of the different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthén's limited-information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of the different estimation methods.
Comparative Flight and Full-Scale Wind-Tunnel Measurements of the Maximum Lift of an Airplane
NASA Technical Reports Server (NTRS)
Silverstein, Abe; Katzoff, S; Hootman, James A
1938-01-01
Determinations of the power-off maximum lift of a Fairchild 22 airplane were made in the NACA full-scale wind tunnel and in flight. The results from the two types of test were in satisfactory agreement. It was found that, when the airplane was rotated positively in pitch through the angle of stall at rates of the order of 0.1 degree per second, the maximum lift coefficient was considerably higher than that obtained in the standard tests, in which the forces are measured with the angles of attack fixed. Scale effect on the maximum lift coefficient was also investigated.
34 CFR 682.204 - Maximum loan amounts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Maximum loan amounts. 682.204 Section 682.204 Education..., DEPARTMENT OF EDUCATION FEDERAL FAMILY EDUCATION LOAN (FFEL) PROGRAM General Provisions § 682.204 Maximum... a full academic year, the maximum annual amount that the student may receive may not exceed the...
Effect of density feedback on the two-route traffic scenario with bottleneck
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua
2016-12-01
In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities, αc1 and αc2. When the vehicle entry probability α ≤ αc1, four different states, i.e. the free-flow state, transition state, maximum-current state and congestion state, are identified in the system, corresponding to three critical reference densities. In the interval αc1 < α < αc2, however, the free-flow and transition states disappear, and when α ≥ αc2 only the congestion state remains. According to these results, a traffic control center can adjust the reference density so that the system stays in the maximum-current state. In this case, the capacity of the traffic system reaches its maximum, so drivers can make full use of the roads. We hope that these results can provide good advice for alleviating traffic jams and be useful to traffic control centers designing advanced traveller information systems.
Learning monopolies with delayed feedback on price expectations
NASA Astrophysics Data System (ADS)
Matsumoto, Akio; Szidarovszky, Ferenc
2015-11-01
We call the intercept of the price function with the vertical axis the maximum price and the slope of the price function the marginal price. In this paper it is assumed that a monopolistic firm has full information about the marginal price and its own cost function but is uncertain about the maximum price. However, through repeated interaction with the market, the obtained price observations give a basis for an adaptive learning process of the maximum price. It is also assumed that the price observations have fixed delays, so the learning process can be described by a delayed differential equation. In the cases of one and two delays, the asymptotic behavior of the resulting dynamic process is examined and stability conditions are derived. Three main results are demonstrated for the two-delay learning process. First, it is possible to stabilize the equilibrium that is unstable in the one-delay model. Second, complex dynamics involving chaos, which are impossible in the one-delay model, can emerge. Third, alternations of stability and instability (i.e., stability switches) occur repeatedly.
Papi, Ahmad; Ghazavi, Roghayeh; Moradi, Salimeh
2015-01-01
Understanding the medical community's use of different types of information resources for quick and easy access to information is an imperative task in medical research and in the management of treatment. The present study aimed to determine the level of awareness among physicians in using various electronic information resources and the factors affecting it. This study was a descriptive survey. The data collection tool was a researcher-made questionnaire. The study population included all 350 physicians and specialty physicians of the teaching hospitals affiliated with Isfahan University of Medical Sciences. The sample size, based on Morgan's formula, was set at 180. The content validity of the tool was confirmed by library and information professionals, and its reliability was 95%. Descriptive statistics were computed using SPSS software version 19. On reviewing physicians' need for information on several occasions, the need for information in conducting research was reported by the largest share of physicians (91.9%), and the use of information resources, especially electronic resources, accounted for 65.4%, the highest rate with regard to meeting physicians' information needs. Among the electronic information databases, awareness was highest for Medline (86.5%). Among the various electronic information resources, the highest awareness (43.3%) was related to e-journals, which also saw the highest usage (36%). The studied physicians considered the most significant deterrents to the use of electronic information resources to be being too busy and lack of time. Despite the importance of electronic information resources to the physician community, there was no comprehensive knowledge of these resources, which can lead to their underuse. Therefore, careful planning is necessary in hospital libraries to introduce the facilities and full capabilities of these resources and methods of information retrieval.
MAP Fault Localization Based on Wide Area Synchronous Phasor Measurement Information
NASA Astrophysics Data System (ADS)
Zhang, Yagang; Wang, Zengping
2015-02-01
In the research of complicated electrical engineering, the emergence of phasor measurement units (PMUs) is a landmark event. The establishment and application of wide area measurement systems (WAMS) in power systems has had a widespread and profound influence on the safe and stable operation of complicated power systems. In this paper, taking full advantage of the wide area synchronous phasor measurement information provided by PMUs, we carry out precise fault localization based on the principle of maximum a posteriori (MAP) probability. Large numbers of simulation experiments have confirmed that the results of MAP fault localization are accurate and reliable. Even under interference from white Gaussian stochastic noise, the results of MAP classification remain identical to the actual situation.
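The MAP principle referenced above reduces to maximizing the prior times the likelihood over candidate fault locations. The sketch below is a toy illustration under assumed Gaussian measurement noise; the location names, per-location phasor signatures, and noise level are all invented, not taken from the paper.

```python
import math

# Hypothetical expected measurement signatures for each candidate fault
# location (e.g., voltage-sag magnitudes observed at three PMUs).
signatures = {
    "line_A": [0.2, 0.8, 0.9],
    "line_B": [0.9, 0.3, 0.8],
    "line_C": [0.8, 0.9, 0.2],
}
priors = {"line_A": 1 / 3, "line_B": 1 / 3, "line_C": 1 / 3}
sigma = 0.1  # assumed white-Gaussian noise standard deviation

def map_locate(measurement):
    # pick the location maximizing log prior + Gaussian log-likelihood
    def log_post(loc):
        ll = sum(-(m - s) ** 2 / (2 * sigma ** 2)
                 for m, s in zip(measurement, signatures[loc]))
        return math.log(priors[loc]) + ll
    return max(signatures, key=log_post)

# a noisy measurement close to line_B's signature
print(map_locate([0.85, 0.35, 0.75]))  # -> line_B
```

With equal priors this is simply nearest-signature classification in the noise-weighted metric; unequal priors would shift the decision boundary, which is the distinction between MAP and plain maximum likelihood.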
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
Modelling information flow along the human connectome using maximum flow.
Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung
2018-01-01
The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
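A minimal sketch of the maximum-flow idea described above, using Edmonds-Karp augmenting paths over a toy weighted "connectome" in which connection strengths act as capacities. The region names and strengths below are invented for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow: repeatedly push flow along the shortest
    augmenting path in the residual network until none remains."""
    res = {u: dict(vs) for u, vs in cap.items()}  # residual capacities
    for u, vs in cap.items():                     # ensure reverse edges exist
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                        # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                           # trace path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:                         # update residual network
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# toy connectome: connection strengths as capacities (invented values)
brain = {
    "V1": {"V2": 3, "MT": 2},
    "V2": {"MT": 1, "PFC": 2},
    "MT": {"PFC": 3},
    "PFC": {},
}
print(max_flow(brain, "V1", "PFC"))  # -> 5
```

Restricting flow to the two shortest (two-hop) paths yields only 4 units here; the longer V1 -> V2 -> MT -> PFC path contributes the fifth, illustrating why accounting for non-shortest paths changes the measure.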
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
NASA Astrophysics Data System (ADS)
Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina
2016-05-01
Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulations to full nonequilibrium flow calculations.
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
NASA Technical Reports Server (NTRS)
Hunton, Lynn W.; Dew, Joseph K.
1948-01-01
Wind-tunnel tests of a full-scale model of the Republic XP-91 airplane were conducted to determine the longitudinal and lateral characteristics of the wing alone and the wing-fuselage combination, the characteristics of the aileron, and the damping in roll of the wing alone. Various high-lift devices were investigated, including trailing-edge split flaps and partial- and full-span leading-edge slats and Krueger-type nose flaps. Results of this investigation showed that a very significant gain in maximum lift could be achieved through use of the proper leading-edge device. The maximum lift coefficient of the model with split flaps and the original partial-span straight slats was only 1.2, whereas a value of approximately 1.8 was obtained by drooping the slat and extending it full span. An improvement in maximum lift of approximately the same amount resulted when a full-span nose flap was substituted for the original partial-span slat.
Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Weaver, Jesse R.
In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which strike an effective balance between representing the full complexity of uncertainty claims in their interaction and satisfying the needs of both computational tractability and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
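The focusing step can be sketched as collapsing a Dempster-Shafer mass function to a single event's ternary (belief, disbelief, uncertainty) triple, in the spirit of subjective logic; the frame and mass values below are invented, and this is only a schematic reading of the FBM idea, not the paper's implementation.

```python
# Hypothetical Dempster-Shafer mass function over the frame {x, y, z};
# focal sets are frozensets and the masses sum to 1.
mass = {
    frozenset({"x"}): 0.4,
    frozenset({"x", "y"}): 0.2,
    frozenset({"y", "z"}): 0.1,
    frozenset({"x", "y", "z"}): 0.3,
}

def focus(mass, event):
    """Collapse a full DS structure to a ternary opinion about a single
    event: belief (mass committed to the event), disbelief (mass that
    excludes it), and uncertainty (the remainder)."""
    b = sum(m for A, m in mass.items() if A <= event)       # subsets of event
    d = sum(m for A, m in mass.items() if not (A & event))  # disjoint from event
    u = 1.0 - b - d
    return b, d, u

b, d, u = focus(mass, frozenset({"x"}))
print(b, d, u)  # belief, disbelief, uncertainty for event {x}
```

Only three numbers per event survive the focusing, which is the computational saving over carrying the full 2^n focal-set structure.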
An ERTS-1 investigation for Lake Ontario and its basin
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.
1975-01-01
The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.
NASA Astrophysics Data System (ADS)
Moon, Geon Dae; Joo, Ji Bong; Yin, Yadong
2013-11-01
A simple layer-by-layer approach has been developed for constructing 2D planar supercapacitors of multi-stacked reduced graphene oxide and carbon nanotubes. This sandwiched 2D architecture enables the full utilization of the maximum active surface area of rGO nanosheets by using a CNT layer as a porous physical spacer to enhance the permeation of a gel electrolyte inside the structure and reduce the agglomeration of rGO nanosheets along the vertical direction. As a result, the stacked multilayers of rGO and CNTs are capable of offering higher output voltage and current production. Electronic supplementary information (ESI) available: Experimental details, SEM and TEM images and additional electrochemical data. See DOI: 10.1039/c3nr04339h
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.
Savalei, Victoria; Rhemtulla, Mijke
2017-08-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
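As a minimal illustration of the full information maximum likelihood idea this record compares against (not the TSML method itself), the sketch below evaluates a bivariate-normal log-likelihood in which each case contributes the density of only its observed variables, so incomplete rows still inform the fit; the data and parameter values are made up.

```python
import math

def fiml_loglik(data, mu, cov):
    """FIML-style log-likelihood for bivariate-normal data with missing
    entries (None): complete rows use the bivariate density, incomplete
    rows use the univariate marginal of their observed variable."""
    ll = 0.0
    for x, y in data:
        if x is not None and y is not None:
            det = cov[0][0] * cov[1][1] - cov[0][1] ** 2
            dx, dy = x - mu[0], y - mu[1]
            q = (cov[1][1]*dx*dx - 2*cov[0][1]*dx*dy + cov[0][0]*dy*dy) / det
            ll += -0.5 * (q + math.log(det)) - math.log(2 * math.pi)
        elif x is not None:  # y missing: marginal density of x
            ll += -0.5 * ((x - mu[0])**2 / cov[0][0]
                          + math.log(2 * math.pi * cov[0][0]))
        elif y is not None:  # x missing: marginal density of y
            ll += -0.5 * ((y - mu[1])**2 / cov[1][1]
                          + math.log(2 * math.pi * cov[1][1]))
    return ll

data = [(1.0, 1.2), (0.5, None), (None, -0.3), (-1.1, -0.9)]
print(fiml_loglik(data, mu=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]]))
```

In a real analysis this objective would be maximized over mu and cov; the point of the sketch is only that no row is discarded or imputed, in contrast to complete-case analysis.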
NASA Astrophysics Data System (ADS)
Ma, L.; Zhou, M.; Li, C.
2017-09-01
In this study, a Random Forest (RF) based land cover classification method is presented to predict the types of land cover in the Miyun area. The returned full waveforms, acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition and feature extraction. The commonly used features, namely distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis, were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model. The RF prediction model was applied to predict the types of land cover in the Miyun area as trees, buildings, farmland and ground. The classification results for these four types of land cover were evaluated against ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance. The RF classification accuracy reached 89.73% and the classification Kappa was 0.8631.
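The waveform features listed above can be computed directly from a decomposed return pulse. The sketch below is a simplified pure-Python illustration; the synthetic Gaussian pulse and sampling grid are assumptions for demonstration, not LiteMapper data.

```python
import math

def waveform_features(t, a):
    """Features of a return waveform sampled at times t with amplitudes a:
    peak intensity, FWHM, and amplitude-weighted skewness and kurtosis."""
    peak = max(a)
    half = peak / 2.0
    above = [i for i, v in enumerate(a) if v >= half]
    def cross(i, j):  # linear interpolation of the half-maximum crossing
        return t[i] + (half - a[i]) * (t[j] - t[i]) / (a[j] - a[i])
    left = cross(above[0] - 1, above[0]) if above[0] > 0 else t[0]
    right = cross(above[-1] + 1, above[-1]) if above[-1] < len(a) - 1 else t[-1]
    fwhm = right - left
    # amplitude-weighted central moments of the time axis
    w = sum(a)
    mean = sum(ti * ai for ti, ai in zip(t, a)) / w
    var = sum(ai * (ti - mean) ** 2 for ti, ai in zip(t, a)) / w
    skew = sum(ai * (ti - mean) ** 3 for ti, ai in zip(t, a)) / w / var ** 1.5
    kurt = sum(ai * (ti - mean) ** 4 for ti, ai in zip(t, a)) / w / var ** 2
    return peak, fwhm, skew, kurt

# synthetic Gaussian return: FWHM should be ~2*sqrt(2*ln 2)*sigma
sigma = 2.0
t = [i * 0.1 for i in range(200)]
a = [math.exp(-0.5 * ((ti - 10.0) / sigma) ** 2) for ti in t]
peak, fwhm, skew, kurt = waveform_features(t, a)
```

A symmetric Gaussian pulse gives skewness near 0 and kurtosis near 3; real returns from vegetation or sloped surfaces deviate from these values, which is what makes the moments useful as classification features.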
Measurement of laser spot quality
NASA Technical Reports Server (NTRS)
Milster, T. D.; Treptau, J. P.
1991-01-01
Several ways of measuring spot quality are compared. We examine in detail various figures of merit such as full width at half maximum (FWHM), full width at 1/e^2 maximum, Strehl ratio, and encircled energy. Our application is optical data storage, but the results can be applied to other areas like space communications and high-energy lasers. We found that the optimum figure of merit in many cases is the Strehl ratio.
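Two of the figures of merit named above have simple closed forms that a sketch can check numerically. The Maréchal approximation for the Strehl ratio and the 2-D Gaussian beam profile used here are standard textbook assumptions, not formulas taken from this paper.

```python
import math

def strehl_marechal(rms_wavefront_error, wavelength):
    """Maréchal approximation: Strehl ratio ~ exp(-(2*pi*sigma/lambda)^2)
    for small RMS wavefront error sigma."""
    return math.exp(-(2 * math.pi * rms_wavefront_error / wavelength) ** 2)

def encircled_energy_gaussian(r, w0):
    """Fraction of total energy inside radius r for a 2-D Gaussian beam
    with intensity I(r) = exp(-2 r^2 / w0^2) (w0 is the 1/e^2 radius)."""
    return 1.0 - math.exp(-2.0 * r * r / (w0 * w0))

# lambda/14 RMS error is the classical diffraction-limited criterion:
# Strehl ratio of roughly 0.8
print(round(strehl_marechal(1 / 14, 1.0), 2))  # -> 0.82
# ~86.5% of a Gaussian beam's energy lies within the 1/e^2 radius
print(round(encircled_energy_gaussian(1.0, 1.0), 3))  # -> 0.865
```

These relations also show why the metrics can disagree: a small-FWHM spot with energy pushed into side structure can have low Strehl ratio and poor encircled energy, which motivates the paper's preference for the Strehl ratio.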
Correlation Tests of the Ditching Behavior of an Army B-24D Airplane and a 1/16-size Model
NASA Technical Reports Server (NTRS)
Jarvis, George A.; Fisher, Lloyd J.
1946-01-01
Behaviors of both model and full-scale airplanes were ascertained by making visual observations, by recording time histories of decelerations, and by taking motion picture records of ditchings. Results are presented in the form of sequence photographs and time-history curves for attitudes, vertical and horizontal displacements, and longitudinal decelerations. Time-history curves for attitudes and horizontal and vertical displacements for the model and full-scale tests were in agreement; the maximum longitudinal decelerations for the two ditchings did not occur at the same part of the run, and the full-scale maximum deceleration was 50 percent greater.
A fresh look at the Last Glacial Maximum using Paleoclimate Data Assimilation
NASA Astrophysics Data System (ADS)
Malevich, S. B.; Tierney, J. E.; Hakim, G. J.; Tardif, R.
2017-12-01
Quantifying climate conditions during the Last Glacial Maximum (~21 ka) can help us to understand climate responses to forcing and climate states that are poorly represented in the instrumental record. Paleoclimate proxies may be used to estimate these climate conditions, but proxies are sparsely distributed and carry uncertainties from environmental and biogeochemical processes. Alternatively, climate model simulations provide a full-field view but may predict unrealistic climate states or states not faithful to proxy records. Here, we use data assimilation, combining climate proxy records with the theoretical understanding embodied in climate models, to produce field reconstructions of the LGM that leverage information from both data and models. To date, data assimilation has mainly been used to reconstruct climate fields through the last millennium. We extend this approach to produce climate fields for the Last Glacial Maximum using an ensemble Kalman filter assimilation. Ensemble samples were formed from the output of multiple models, including CCSM3, CESM2.1, and HadCM3. These model simulations are combined with marine sediment proxies for upper-ocean temperature (TEX86, UK'37, Mg/Ca, and δ18O of foraminifera), using forward models from a newly developed suite of Bayesian proxy system models. We also incorporate age-model and radiocarbon reservoir uncertainty into our reconstructions using Bayesian age-modeling software. The resulting fields show familiar patterns in comparison with previous proxy-based reconstructions, but additionally reveal novel patterns of large-scale shifts in ocean-atmosphere dynamics, as the surface temperature data inform atmospheric circulation and precipitation patterns.
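The core update of an ensemble Kalman filter assimilation of this kind can be sketched in a few lines. The ensemble below is random stand-in data, the proxy "observation" and its error variance are invented, and the sketch omits the observation perturbations (or square-root form) that a production EnKF would use to maintain correct posterior spread.

```python
import numpy as np

rng = np.random.default_rng(42)
n_grid, n_ens = 50, 100
# Ensemble of (synthetic) temperature fields: one column per ensemble member
prior = rng.normal(loc=15.0, scale=2.0, size=(n_grid, n_ens))

H = np.zeros(n_grid); H[10] = 1.0   # observation operator: proxy samples grid cell 10
obs, obs_var = 12.0, 0.5**2         # proxy temperature estimate and its error variance

Hx = H @ prior                                  # ensemble mapped into observation space
cov_xy = (prior - prior.mean(1, keepdims=True)) @ (Hx - Hx.mean()) / (n_ens - 1)
var_y = Hx.var(ddof=1) + obs_var
K = cov_xy / var_y                              # Kalman gain, one value per grid cell
posterior = prior + np.outer(K, obs - Hx)       # update every ensemble member

print("prior mean at obs cell:", prior[10].mean())
print("posterior mean at obs cell:", posterior[10].mean())
```

The ensemble covariance `cov_xy` is what spreads the proxy's information to unobserved grid cells, which is how sparse proxies can constrain a full field.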
How much a quantum measurement is informative?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia
2014-12-04
The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum, over quantum ensembles, of the accessible information associated with the measurement. We present some examples where the symmetry of the measurement allows us to derive its informational power analytically.
Pourcain, Beate St.; Smith, George Davey; York, Timothy P.; Evans, David M.
2014-01-01
Genome-wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype ("maternal effects", M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ∼4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered. PMID:25060210
Algorithms and Complexity Results for Genome Mapping Problems.
Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric
2017-01-01
Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.
The Southern Glacial Maximum 65,000 years ago and its Unfinished Termination
NASA Astrophysics Data System (ADS)
Schaefer, Joerg M.; Putnam, Aaron E.; Denton, George H.; Kaplan, Michael R.; Birkel, Sean; Doughty, Alice M.; Kelley, Sam; Barrell, David J. A.; Finkel, Robert C.; Winckler, Gisela; Anderson, Robert F.; Ninneman, Ulysses S.; Barker, Stephen; Schwartz, Roseanne; Andersen, Bjorn G.; Schluechter, Christian
2015-04-01
Glacial maxima and their terminations provide key insights into inter-hemispheric climate dynamics and the coupling of atmosphere, surface and deep ocean, hydrology, and cryosphere, which is fundamental for evaluating the robustness of earth's climate in view of ongoing climate change. The Last Glacial Maximum (LGM, ∼26-19 ka ago) is widely seen as the global cold peak during the last glacial cycle, and its transition to the Holocene interglacial, dubbed 'Termination 1 (T1)', as the most dramatic climate reorganization during this interval. Climate records show that over the last 800 ka, ice ages peaked and terminated on average every 100 ka ('100 ka world'). However, the mechanisms pacing glacial-interglacial transitions remain controversial and in particular the hemispheric manifestations and underlying orbital to regional driving forces of glacial maxima and subsequent terminations remain poorly understood. Here we show evidence for a full glacial maximum in the Southern Hemisphere 65.1 ± 2.7 ka ago and its 'Unfinished Termination'. Our 10Be chronology combined with a model simulation demonstrates that New Zealand's glaciers reached their maximum position of the last glacial cycle during Marine Isotope Stage-4 (MIS-4). Southern ocean and greenhouse gas records indicate coeval peak glacial conditions, making the case for the Southern Glacial Maximum about halfway through the last glacial cycle and only 15 ka after the last warm period (MIS-5a). We present the hypothesis that subsequently, driven by boreal summer insolation forcing, a termination began but remained unfinished, possibly because the northern ice sheets were only moderately large and could not supply enough meltwater to the North Atlantic through Heinrich Stadial 6 to drive a full termination. 
Yet the Unfinished Termination left behind substantial ice on the northern continents (about 50% of the full LGM ice volume) and after another 45 ka of cooling and ice sheet growth the earth was at inter-hemispheric Last Glacial Maximum configuration, when similar orbital forcing hit maximum-size northern ice sheets and ushered in T1 and thus the ongoing interglacial. This argument highlights the critical role of full glacial conditions in both hemispheres for terminations and implies that the Southern Hemisphere climate could transition from interglacial to full glacial conditions in about 15,000 years, while the Northern Hemisphere and its continental ice-sheets required half a glacial cycle.
Optimal tuning of a confined Brownian information engine.
Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong
2016-03-01
A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As for engines, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has been rarely studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)☆
Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther
2013-01-01
The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335
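A toy illustration of the weighting idea above: candidate haplogroups are scored against a query haplotype, with each haplogroup-defining mutation weighted by an empirically derived stability (the inverse sense of the "fluctuation rate"), so that unstable positions contribute less evidence either way. The haplogroups, variants, and weights here are invented for the sketch; EMMA itself scores against Phylotree and a quality-controlled full-mtGenome database.

```python
# Hypothetical haplogroups with their defining variants (invented for illustration)
haplogroups = {
    "H":  {"263G", "8860G", "15326G"},
    "U5": {"263G", "3197C", "9477A", "13617C"},
    "J":  {"263G", "295T", "489C", "10398G"},
}
# Hypothetical stability weight per variant (1.0 = very stable, small = fluctuating)
stability = {"263G": 0.2, "8860G": 1.0, "15326G": 1.0, "3197C": 0.9,
             "9477A": 0.9, "13617C": 0.8, "295T": 0.9, "489C": 0.9,
             "10398G": 0.3}

def score(query, defining):
    # Add the stability weight for each matched defining variant,
    # subtract it for each expected-but-absent one
    return sum(stability[v] if v in query else -stability[v] for v in defining)

query = {"263G", "8860G", "15326G", "16093C"}   # 16093C plays a "private" mutation here
best = max(haplogroups, key=lambda hg: score(query, haplogroups[hg]))
print("best haplogroup:", best)
```

The private mutation 16093C simply never enters any defining set, so it neither helps nor hurts a candidate; a fuller treatment would also weight private mutations by their fluctuation rates, as the abstract describes.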
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulakhmetov, Marat, E-mail: mkulakhm@purdue.edu; Alexeenko, Alina, E-mail: alexeenk@purdue.edu; Gallis, Michael, E-mail: magalli@sandia.gov
Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energies between 0.1 and 20 eV are calculated with a double many-body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV, although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions, while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22,000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20,000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15,000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulations to full nonequilibrium flow calculations.
2017-08-21
distributions, and we discuss some applications for engineered and biological information transmission systems. Keywords: information theory; minimum… of its interpretation as a measure of the amount of information communicable by a neural system to groups of downstream neurons. Previous authors… of the maximum entropy approach. Our results also have relevance for engineered information transmission systems. We show that empirically measured…
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
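A minimal Metropolis-Hastings chain in the spirit of the Bayesian scheme above, applied to a scalar toy problem (estimating a Gaussian mean) rather than to quantum-state tomography. The chain retains the full posterior information, from which marginals and uncertainties follow directly, with no a priori assumption about the posterior's form.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=200)   # synthetic measurements

def log_post(mu):
    # Flat prior; Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - mu) ** 2)

chain = [0.0]
for _ in range(5000):
    prop = chain[-1] + rng.normal(scale=0.3)      # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(chain[-1]):
        chain.append(prop)                        # accept the proposal
    else:
        chain.append(chain[-1])                   # reject: repeat current state

samples = np.array(chain[1000:])                  # discard burn-in
print("posterior mean:", samples.mean())
print("posterior std:", samples.std())
```

In the tomography setting, `mu` becomes the vector of reconstruction parameters and `log_post` the likelihood of the measured quadrature data; derived quantities such as purity are then computed per chain sample to obtain their full distributions.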
Target-matched insertion gain derived from three different hearing aid selection procedures.
Punch, J L; Shovels, A H; Dickinson, W W; Calder, J H; Snead, C
1995-11-01
Three hearing aid selection procedures were compared to determine if any one was superior in producing prescribed real-ear insertion gain. For each of three subject groups, 12 in-the-ear style hearing aids with Class D circuitry and similar dispenser controls were ordered from one of three manufacturers. Subject groups were classified based on the type of information included on the hearing aid order form: (1) the subject's audiogram, (2) a three-part matrix specifying the desired maximum output, full-on gain, and frequency response slope of the hearing aid, or (3) the desired 2-cc coupler full-on gain of the hearing aid, based on real-ear coupler difference (RECD) measurements. Following electroacoustic adjustments aimed at approximating a commonly used target insertion gain formula, results revealed no significant differences among any of the three selection procedures with respect to obtaining acceptable insertion gain values.
14 CFR 23.1527 - Maximum operating altitude.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established for...
14 CFR 23.1524 - Maximum passenger seating configuration.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23.1524 Section 23.1524 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum...
Maximum Mass-Particle Velocities in Kantor's Information Mechanics
NASA Astrophysics Data System (ADS)
Sverdlik, Daniel I.
1989-02-01
Kantor's information mechanics links phenomena previously regarded as not treatable by a single theory. It is used here to calculate the maximum velocities ν_m of single particles. For the electron, ν_m/c ≈ 1 − 1.253814 × 10^-77. The maximum ν_m corresponds to ν_m/c ≈ 1 − 1.097864 × 10^-122 for a single mass particle with a rest mass of 3.078496 × 10^-5 g. This is the fastest that matter can move. Either information mechanics or classical mechanics can be used to show that ν_m is less for heavier particles. That ν_m is less for lighter particles can be deduced from an information mechanics argument alone.
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
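A small-scale sketch of the design criterion above: choose k observation wells from a set of candidate locations to maximize the sum of squared sensitivities over the selected rows of the sensitivity matrix. The sensitivity matrix here is random stand-in data; a real study would compute it from the groundwater model (or its POD-reduced surrogate). The exhaustive search mirrors the paper's small test case, where it agreed with the GA and integer programming.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_candidates, n_params, k = 10, 3, 4
# Sensitivity of head at each candidate well to each unknown pumping parameter
J = rng.normal(size=(n_candidates, n_params))

def criterion(rows):
    # Maximal information criterion: sum of squared sensitivities of chosen wells
    return np.sum(J[list(rows)] ** 2)

best = max(combinations(range(n_candidates), k), key=criterion)
print("best design:", best, "score:", criterion(best))
```

Note that for this particular separable criterion the optimum is simply the k rows with the largest norms; the combinatorial search (and hence the GA with POD reduction) earns its keep once design constraints or non-separable criteria enter, as in the full problem described above.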
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-01
... 20-63; Cover Letter Giving Information About the Cost To Elect Less Than the Maximum Survivor Annuity, RI 20-116; Cover Letter Giving Information About the Cost To Elect the Maximum Survivor Annuity, RI... other Federal agencies the opportunity to comment on a revised information collection request (ICR 3206...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-18
.... This letter may be used to ask for more information. Analysis Agency: Retirement Operations, Retirement... 20-63; Cover Letter Giving Information About The Cost To Elect Less Than the Maximum Survivor Annuity, RI 20-116; Cover Letter Giving Information About The Cost To Elect the Maximum Survivor Annuity, RI...
Code of Federal Regulations, 2010 CFR
2010-07-01
... current and include enough qualified sources to ensure maximum open and free competition. Recipients must... transactions in a manner providing maximum full and open competition. (a) Restrictions on competition... bonding requirements; (3) Noncompetitive pricing practices between firms or between affiliated companies...
Full polarimetric millimetre wave radar for stand-off security screening
NASA Astrophysics Data System (ADS)
Blackhurst, Eddie; Salmon, Neil; Southgate, Matthew
2017-10-01
The development and measurements of a frequency-modulated continuous-wave (FMCW) monostatic millimetre-wave full-polarimetric radar, operating at K-band (18 to 26 GHz), are described. The system has been designed to explore the feasibility of using full polarimetry for the detection of concealed weapons and person-borne improvised explosive devices (PBIEDs). The aim of this scheme is to extract the maximum information content from a target that normally occupies a single spatial pixel (sometimes sub-pixel) in stand-off (tens of metres) and crowd surveillance scenarios. The radar comprises a vector network analyser (VNA), an orthomode transducer, and a conical horn antenna. A calibration strategy is discussed and demonstrated using a variety of calibration targets with known reflective properties, including a flat metal plate, a dihedral reflector, a metal sphere, a helix, and a dipole. The orthomode transducer is based on a high-performance linear polarizer of the turnstile type with isolation better than −35 dB between orthogonal polarisations. The calibration enables the polarimetric Sinclair scattering matrix to be measured at each frequency for coherent polarimetry, and this can be extended using multiple measurements via the Kennaugh matrix to investigate incoherent full polarimetry.
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. 
Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
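The stabilization described above, cancelling the exponential elements, can be reproduced with scaled special functions: writing log(2 sinh κ) = κ + log(1 − e^(−2κ)) and log I0(x) = x + log i0e(x) leaves only well-behaved terms in the marginal Fisher log-likelihood. The sketch below evaluates the log-likelihood this way and grid-searches for its maximum; the data are synthetic, the grid resolution is illustrative, and the authors' actual method uses partial derivatives rather than a grid.

```python
import numpy as np
from scipy.special import i0e   # exponentially scaled modified Bessel I0

def loglik(theta0, kappa, theta):
    # Marginal Fisher log-likelihood in colatitude form (theta = 90 deg - inclination):
    # p(theta) = [kappa / (2 sinh kappa)] sin(theta)
    #            * exp(kappa cos(theta) cos(theta0)) * I0(kappa sin(theta) sin(theta0)),
    # with the exponentials of sinh and I0 cancelled analytically for stability.
    x = kappa * np.sin(theta) * np.sin(theta0)
    return np.sum(
        np.log(kappa) - kappa - np.log1p(-np.exp(-2 * kappa))
        + np.log(np.sin(theta))
        + kappa * np.cos(theta) * np.cos(theta0)
        + np.log(i0e(x)) + x
    )

rng = np.random.default_rng(3)
# Synthetic inclinations near 60 deg with ~5 deg scatter -> colatitudes near 30 deg
theta = np.deg2rad(rng.normal(30.0, 5.0, size=50))

grid_t0 = np.deg2rad(np.linspace(5, 85, 81))
grid_k = np.linspace(5, 300, 60)
ll = np.array([[loglik(t0, kk, theta) for kk in grid_k] for t0 in grid_t0])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print("ML colatitude (deg):", np.rad2deg(grid_t0[i]))
print("ML precision kappa:", grid_k[j])
```

Because every term stays finite even for large κ, the surface can be evaluated anywhere in the parameter space, which is exactly what makes boundary maxima (inclinations of +/- 90 degrees) detectable rather than a numerical failure.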
Reconfigurable Control Design for the Full X-33 Flight Envelope
NASA Technical Reports Server (NTRS)
Cotting, M. Christopher; Burken, John J.
2001-01-01
A reconfigurable control law for the full X-33 flight envelope has been designed to accommodate a failed control surface and redistribute the control effort among the remaining working surfaces to retain satisfactory stability and performance. An offline nonlinear constrained optimization approach has been used for the X-33 reconfigurable control design method. Using a nonlinear, six-degree-of-freedom simulation, three example failures are evaluated: ascent with a left body flap jammed at maximum deflection; entry with a right inboard elevon jammed at maximum deflection; and landing with a left rudder jammed at maximum deflection. Failure detection and identification are accomplished in the actuator controller. Failure response comparisons between the nominal control mixer and the reconfigurable control subsystem (mixer) show the benefits of reconfiguration. Single aerosurface jamming failures are considered. The cases evaluated are representative of the study conducted to prove the adequate and safe performance of the reconfigurable control mixer throughout the full flight envelope. The X-33 flight control system incorporates reconfigurable flight control in the existing baseline system.
NASA Astrophysics Data System (ADS)
Li, X.; Sang, Y. F.
2017-12-01
Mountain torrents, urban floods, and other disasters caused by extreme precipitation bring great losses to the ecological environment, social and economic development, and people's lives and property. Studying the spatial distribution of extreme precipitation is therefore of great significance for flood prevention and control. Based on annual maximum rainfall data for durations of 60 min, 6 h, and 24 h, this paper generates long sequences following the Pearson type III distribution and then uses an information entropy index to study the spatial distribution and its differences across durations. The results show that the information entropy of annual maximum rainfall in the southern region is greater than that in the northern region, indicating that the stochastic characteristics of annual maximum rainfall are more pronounced in the south. However, the spatial distribution of the stochastic characteristics differs among durations. For example, the stochastic characteristics of the 60 min annual maximum rainfall in eastern Tibet are weaker than in the surrounding area, whereas those of the 6 h and 24 h annual maximum rainfall are stronger. In the Haihe and Huaihe River Basins, the stochastic characteristics of the 60 min annual maximum rainfall do not differ significantly from the surrounding area, while those of the 6 h and 24 h rainfall are weaker. We conclude that the spatial distribution of the information entropy of annual maximum rainfall for different durations can reflect the spatial distribution of its stochastic characteristics; the results can thus provide an important scientific basis for flood prevention and control, agriculture, economic and social development, and urban waterlogging control.
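The entropy index used above can be sketched as follows: draw a long synthetic series of annual maximum rainfall from a Pearson type III distribution and compute the Shannon entropy of its histogram. The distribution parameters below are illustrative, not fitted to any station record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Pearson type III: scipy parameterizes it by skew, with loc/scale shifting
# and stretching a standardized (zero-mean, unit-variance) variate
sample = stats.pearson3.rvs(skew=1.0, loc=50.0, scale=15.0, size=10000,
                            random_state=rng)

# Shannon entropy of the normalized histogram (in nats)
counts, _ = np.histogram(sample, bins=50)
p = counts / counts.sum()
p = p[p > 0]                       # drop empty bins: 0*log(0) contributes nothing
entropy = -np.sum(p * np.log(p))
print("Shannon entropy (nats):", entropy)
```

A more dispersed (more stochastic) rainfall series spreads probability across more bins and yields a larger entropy, which is the basis of the south-north comparison described above.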
Wetzel, Stephan G; Cha, Soonmee; Law, Meng; Johnson, Glyn; Golfinos, John; Lee, Peter; Nelson, Peter Kim
2002-01-01
In evaluating intracranial tumors, a safe low-cost alternative that provides information similar to that of digital subtraction angiography (DSA) may be of interest. Our purpose was to determine the utility and limitations of a combined MR protocol in assessing (neo-) vascularity in intracranial tumors and their relation to adjacent vessels and to compare the results with those of DSA. Twenty-two consecutive patients with an intracranial tumor who underwent preoperative stereoscopic DSA were examined with contrast-enhanced dynamic T2*-weighted perfusion MR imaging followed by a T1-weighted three-dimensional (3D) MR study (volumetric interpolated brain examination [VIBE]). The maximum relative cerebral blood volume (rCBV) of the tumor was compared with tumor vascularity at DSA. Critical vessel structures were defined in each patient, and VIBE images of these structures were compared with DSA findings. For full exploitation of the 3D data sets, maximum-intensity projections reconstructed in real time with any desired volume and orientation were used. Tumor blush scores at DSA were significantly correlated with the rCBV measurements (r = 0.75; P < .01, Spearman rank correlation coefficient). In 17 (77%) patients, VIBE provided all relevant information about the venous system, whereas information about critical arteries was partial in 50% of the cases and not relevant in the other 50%. A fast imaging protocol consisting of perfusion MR imaging and a volumetric MR acquisition provides some of the information about tumor (neo-) vascularity and adjacent vascular anatomy that can be obtained with conventional angiography. However, the MR protocol provides insufficient visualization of distal cerebral arteries.
All Entangled States can Demonstrate Nonclassical Teleportation.
Cavalcanti, Daniel; Skrzypczyk, Paul; Šupić, Ivan
2017-09-15
Quantum teleportation, the process by which Alice can transfer an unknown quantum state to Bob by using preshared entanglement and classical communication, is one of the cornerstones of quantum information. The standard benchmark for certifying quantum teleportation consists in surpassing the maximum average fidelity between the teleported and the target states that can be achieved classically. According to this figure of merit, not all entangled states are useful for teleportation. Here we propose a new benchmark that uses the full information available in a teleportation experiment and prove that all entangled states can implement a quantum channel which cannot be reproduced classically. We introduce the idea of nonclassical teleportation witness to certify if a teleportation experiment is genuinely quantum and discuss how to quantify this phenomenon. Our work provides new techniques for studying teleportation that can be immediately applied to certify the quality of quantum technologies.
An autosomal genetic linkage map of the sheep genome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, A.M.; Ede, A.J.; Pierson, C.A.
1995-06-01
We report the first extensive ovine genetic linkage map covering 2070 cM of the sheep genome. The map was generated from the linkage analysis of 246 polymorphic markers, in nine three-generation full-sib pedigrees, which make up the AgResearch International Mapping Flock. We have exploited many markers from cattle so that valuable comparisons between these two ruminant linkage maps can be made. The markers, used in the segregation analyses, comprised 86 anonymous microsatellite markers derived from the sheep genome, 126 anonymous microsatellites from cattle, one from deer, and 33 polymorphic markers of various types associated with known genes. The maximum number of informative meioses within the mapping flock was 22. The average number of informative meioses per marker was 140 (range 18-209). Linkage groups have been assigned to all 26 sheep autosomes. 102 refs., 8 figs., 5 tabs.
A high-performance gradient insert for rapid and short-T2 imaging at full duty cycle.
Weiger, Markus; Overweg, Johan; Rösler, Manuela Barbara; Froidevaux, Romain; Hennel, Franciszek; Wilm, Bertram Jakob; Penn, Alexander; Sturzenegger, Urs; Schuth, Wout; Mathlener, Menno; Borgo, Martino; Börnert, Peter; Leussler, Christoph; Luechinger, Roger; Dietrich, Benjamin Emanuel; Reber, Jonas; Brunner, David Otto; Schmid, Thomas; Vionnet, Laetitia; Pruessmann, Klaas P
2018-06-01
The goal of this study was to devise a gradient system for MRI in humans that reconciles cutting-edge gradient strength with rapid switching and brings the duty cycle up to 100% at full continuous amplitude. Aiming to advance neuroimaging and short-T2 techniques, the hardware design focused on the head and the extremities as target anatomies. A boundary element method with minimization of power dissipation and stored magnetic energy was used to design anatomy-targeted gradient coils with maximally relaxed geometry constraints. The design relies on hollow conductors for high-performance cooling and split coils to enable dual-mode gradient amplifier operation. With this approach, strength and slew rate specifications of either 100 mT/m with 1200 mT/m/ms or 200 mT/m with 600 mT/m/ms were reached at 100% duty cycle, assuming a standard gradient amplifier and cooling unit. After manufacturing, the specified values for maximum gradient strength, maximum switching rate, and field geometry were verified experimentally. In temperature measurements, maximum local values of 63°C were observed, confirming that the device can be operated continuously at full amplitude. Testing for peripheral nerve stimulation showed nearly unrestricted applicability in humans at full gradient performance. In measurements of acoustic noise, a maximum average sound pressure level of 132 dB(A) was determined. In vivo capability was demonstrated by head and knee imaging. Full gradient performance was employed with echo planar and zero echo time readouts. Combining extreme gradient strength and switching speed without duty cycle limitations, the described system offers unprecedented options for rapid and short-T2 imaging. Magn Reson Med 79:3256-3266, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*
Karaivanov, Alexander; Townsend, Robert M.
2014-01-01
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710
An automatic data system for vibration modal tuning and evaluation
NASA Technical Reports Server (NTRS)
Salyer, R. A.; Jung, E. J., Jr.; Huggins, S. L.; Stephens, B. L.
1975-01-01
A digitally based automatic modal tuning and analysis system developed to provide an operational capability beginning at 0.1 hertz is described. The elements of the system, which provides unique control features, maximum operator visibility, and rapid data reduction and documentation, are briefly described; and the operational flow is discussed to illustrate the full range of capabilities and the flexibility of application. The successful application of the system to a modal survey of the Skylab payload is described. Information about the Skylab test article, coincident-quadrature analysis of modal response data, orthogonality, and damping calculations is included in the appendixes. Recommendations for future application of the system are also made.
75 FR 16821 - Housing Finance Agency Risk-Sharing Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
...The proposed information collection requirement described below has been submitted to the Office of Management and Budget (OMB) for review, as required by the Paperwork Reduction Act. The Department is soliciting public comments on the subject proposal. Section 542(c) of the Risk Sharing Program authorizes qualified Housing Finance Agencies (HFAs) to underwrite and process loans. HUD provides full mortgage insurance on affordable multifamily housing projects processed by HFAs under this program. Qualified HFAs are vested with the maximum amount of processing responsibilities. By entering into a Risk-Sharing Agreement with HUD, HFAs contract to reimburse HUD for a portion of the loss from any defaults that occur while HUD insurance is in force.
Determination of the maximum MGS mounting height : phase II detailed analysis with LS-DYNA.
DOT National Transportation Integrated Search
2012-12-01
Determination of the maximum Midwest Guardrail System (MGS) mounting height was performed in two phases. : Phase I concentrated on crash testing: two full-scale crash tests were performed on the MGS with top-rail mounting heights : of 34 in. (864 mm)...
Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.
Frieden, B Roy; Gatenby, Robert A
2011-01-01
Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to minimum information state. The minimum leads to a predicted power-law governing the in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: from an information maximum to minimum. The progressive information loss is evident in accumulating mutations, disordered morphology, and functional decline characteristic of human cancers.
The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.
NASA Astrophysics Data System (ADS)
1999-11-01
The School of Education at King's College London can now offer funded studentships to those wishing to undertake research in science education. These studentships, which are funded through the generous benefaction of the late Rosalind Driver, can be for a full-time student (over a maximum of three years) or several part-time students (a maximum of six years). Applications from anyone working in science education are welcome, but preference will be given to those originating from practising science teachers. Applicants will be expected to register for the award of a MPhil/PhD or EdD and are normally expected to have a first degree. Preliminary ideas about a topic for investigation would also be helpful. Further details and application forms are obtainable from Chiz Dube, School of Education, King's College London, Franklin-Wilkins Building, Waterloo Road, London SE1 8WA (tel: 020-7848-3089, e-mail: chiz.dube@kcl.ac.uk). The deadline for the first round of applications was the middle of October, but preliminary informal enquiries may be made to Dr Jonathan Osborne at the School of Education (tel: 020-7848-3094, e-mail: jonathan.osborne@kcl.ac.uk).
Design and validation of an aircraft seat comfort scale using item response theory.
Menegon, Lizandra da Silva; Vincenzi, Silvana Ligia; de Andrade, Dalton Francisco; Barbetta, Pedro Alberto; Merino, Eugenio Andrés Díaz; Vink, Peter
2017-07-01
This article aims to evaluate the psychometric properties of a scale that measures aircraft seat comfort. Factor analysis was used to study data variances. Psychometric quality was checked by using Item Response Theory. The sample consisted of 1500 passengers who completed a questionnaire at a Brazilian airport. Full information factor analysis showed the presence of one dominant factor explaining 34% of data variance. The scale generated covered all levels of comfort data, from 'no comfort' to 'maximum comfort'. The results show that passengers do experience comfort, but only minimally when they have to perform their desired activities. Comfort tends to increase when aspects of the aircraft seating are improved and positive emotions are elicited. Comfort peaks when pleasure is experienced and passenger expectations are exceeded (maximum comfort). This outcome seems consistent with the literature. Further research is advised to compare the outcome of this questionnaire with other research methods, and to check whether the questionnaire is sensitive enough and whether its conclusions are useful in practice. Copyright © 2017. Published by Elsevier Ltd.
Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items
ERIC Educational Resources Information Center
Penfield, Randall D.
2006-01-01
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... maximum advertised speed, technology type and spectrum (if applicable) for each broadband provider... funding to collect the maximum advertised speed and technology type to which various classes of Community... businesses use the data to identify where broadband is available, the advertised speeds and other information...
49 CFR 236.701 - Application, brake; full service.
Code of Federal Regulations, 2010 CFR
2010-10-01
... a split reduction in brake pipe pressure at a service rate until maximum brake cylinder pressure is developed. As applied to an automatic or electro-pneumatic brake with speed governor control, an application other than emergency which develops the maximum brake cylinder pressure, as determined by the design of...
Ogata, Yuta; Anan, Masaya; Takahashi, Makoto; Takeda, Takuya; Tanimoto, Kenji; Sawada, Tomonori; Shinkoda, Koichi
The purpose of this study was to investigate the relationship between movement patterns of trunk extension from full unloaded flexion and lifting techniques, which could provide valuable information to physical therapists, doctors of chiropractic, and other manual therapists. A within-participant study design was used. Whole-body kinematic and kinetic data during lifting and full trunk flexion were collected from 16 healthy male participants using a 3-dimensional motion analysis system (Vicon Motion Systems). To evaluate the relationships of joint movement between lifting and full trunk flexion, Pearson correlation coefficients were calculated. There was no significant correlation between the amount of change in the lumbar extension angle during the first half of the lifting trials and lumbar movement during unloaded trunk flexion and extension. However, the amount of change in the lumbar extension angle during lifting was significantly negatively correlated with hip movement during unloaded trunk flexion and extension (P < .05). The finding that the maximum hip flexion angle during full trunk flexion had a greater influence on the kinematics of the lumbar-hip complex during lifting provides new insight into human movement during lifting. All study participants were healthy men; thus, findings are limited to this group. Copyright © 2018. Published by Elsevier Inc.
Highly accurate analytic formulae for projectile motion subjected to quadratic drag
NASA Astrophysics Data System (ADS)
Turkyilmazoglu, Mustafa
2016-05-01
The classical phenomenon of the motion of a projectile fired (thrown) toward the horizon through resistive air exerting a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. Finding elegant analytical approximations for the most interesting engineering features of the dynamical behavior of the projectile is the principal target. With this purpose, some analytical explicit expressions are derived that accurately predict the maximum height, its arrival time, as well as the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they are not restricted to the initial speed and firing angle of the object, nor to the drag coefficient of the medium. In combination with the available approximations in the literature, it is possible to gain information about the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
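The apex quantities the paper targets (maximum height, its arrival time, and the range at the highest ascent) can be cross-checked by directly integrating the governing equations with quadratic drag. The sketch below is a minimal forward-Euler integration, not the paper's analytic formulae; the drag coefficient k and launch parameters are illustrative assumptions:

```python
import math

def simulate(v0, angle_deg, k=0.01, g=9.81, dt=1e-4):
    """Integrate projectile motion with quadratic drag: a = -k*|v|*v - g*j."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    apex = None
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx += -k * speed * vx * dt
        vy += (-g - k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
        if apex is None and vy <= 0.0:
            apex = (t, x, y)   # arrival time, range, and height at highest ascent
    return apex

t_apex, x_apex, h_max = simulate(50.0, 45.0)
```

Because drag only dissipates energy, the simulated apex height and arrival time must fall below their vacuum values (v0 sin θ)²/2g and v0 sin θ/g, which gives a quick sanity bound on any closed-form approximation.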
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Williams, Robert C; Elston, Robert C; Kumar, Pankaj; Knowler, William C; Abboud, Hanna E; Adler, Sharon; Bowden, Donald W; Divers, Jasmin; Freedman, Barry I; Igo, Robert P; Ipp, Eli; Iyengar, Sudha K; Kimmel, Paul L; Klag, Michael J; Kohn, Orly; Langefeld, Carl D; Leehey, David J; Nelson, Robert G; Nicholas, Susanne B; Pahl, Madeleine V; Parekh, Rulan S; Rotter, Jerome I; Schelling, Jeffrey R; Sedor, John R; Shah, Vallabh O; Smith, Michael W; Taylor, Kent D; Thameem, Farook; Thornley-Brown, Denyse; Winkler, Cheryl A; Guo, Xiuqing; Zager, Phillip; Hanson, Robert L
2016-05-04
The presence of population structure in a sample may confound the search for important genetic loci associated with disease. Our four samples in the Family Investigation of Nephropathy and Diabetes (FIND), European Americans, Mexican Americans, African Americans, and American Indians are part of a genome-wide association study in which population structure might be particularly important. We therefore decided to study in detail one component of this, individual genetic ancestry (IGA). From SNPs present on the Affymetrix 6.0 Human SNP array, we identified 3 sets of ancestry informative markers (AIMs), each maximized for the information in one of the three contrasts among ancestral populations: Europeans (HAPMAP, CEU), Africans (HAPMAP, YRI and LWK), and Native Americans (full heritage Pima Indians). We estimate IGA and present an algorithm for their standard errors, compare IGA to principal components, emphasize the importance of balancing information in the ancestry informative markers (AIMs), and test the association of IGA with diabetic nephropathy in the combined sample. A fixed parental allele maximum likelihood algorithm was applied to the FIND to estimate IGA in four samples: 869 American Indians; 1385 African Americans; 1451 Mexican Americans; and 826 European Americans. When the information in the AIMs is unbalanced, the estimates are incorrect with large error. Individual genetic admixture is highly correlated with principal components for capturing population structure. It takes ~700 SNPs to reduce the average standard error of individual admixture below 0.01. When the samples are combined, the resulting population structure creates associations between IGA and diabetic nephropathy. The identified set of AIMs, which include American Indian parental allele frequencies, may be particularly useful for estimating genetic admixture in populations from the Americas.
Failure to balance information in maximum likelihood, poly-ancestry models creates biased estimates of individual admixture with large error. This also occurs when estimating IGA using the Bayesian clustering method as implemented in the program STRUCTURE. Odds ratios for the associations of IGA with disease are consistent with what is known about the incidence and prevalence of diabetic nephropathy in these populations.
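A toy version of the fixed-parental-allele-frequency maximum likelihood idea can be sketched for a single individual and two ancestral populations (the paper's models use three ancestries and derive standard errors; the allele frequencies, SNP count, true admixture, and grid search below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_snps = 700                          # abstract: ~700 AIMs push the SE below 0.01
p1 = rng.uniform(0.1, 0.9, n_snps)    # fixed allele frequencies, ancestry 1
p2 = rng.uniform(0.1, 0.9, n_snps)    # fixed allele frequencies, ancestry 2
q_true = 0.3                          # simulated individual admixture proportion
geno = rng.binomial(2, q_true * p1 + (1 - q_true) * p2)  # diploid genotypes 0/1/2

def loglik(q):
    """Binomial log-likelihood of the genotypes given admixture proportion q."""
    p = q * p1 + (1 - q) * p2
    return np.sum(geno * np.log(p) + (2 - geno) * np.log(1 - p))

grid = np.linspace(0.001, 0.999, 999)
q_hat = grid[np.argmax([loglik(q) for q in grid])]
```

With frequencies this close between the two ancestries the markers are only mildly informative, which is exactly why the paper stresses balancing information across the ancestry contrasts.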
IR luminescence of tellurium-doped silica-based optical fibre
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dianov, Evgenii M; Alyshev, S V; Shubin, Aleksei V
2012-03-31
Tellurium-doped germanosilicate fibre has been fabricated by the MCVD process. In contrast to Te-containing glasses studied earlier, it has a broad luminescence band (full width at half maximum of ~350 nm) centred at 1500 nm, with a lifetime of ~2 μs. The luminescence of the fibre has been studied before and after gamma irradiation in a ⁶⁰Co source to 309 and 992 kGy. The irradiation produced a luminescence band around 1100 nm, with a full width at half maximum of ~400 nm and lifetime of ~5 μs.
Image improvement from a sodium-layer laser guide star adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, C. E., LLNL
1997-06-01
A sodium-layer laser guide star beacon with high-order adaptive optics at Lick Observatory produced a factor of 2.4 intensity increase and a factor of 2 decrease in full width at half maximum for an astronomical point source, compared with image motion compensation alone. Image full widths at half maximum were identical for laser and natural guide stars (0.3 arc seconds). The Strehl ratio with the laser guide star was 65% of that with a natural guide star. This technique should allow ground-based telescopes to attain the diffraction limit, by correcting for atmospheric distortions.
Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing
ERIC Educational Resources Information Center
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua
2010-01-01
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
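For reference, the maximum Fisher information procedure under a two-parameter logistic model picks, at the current ability estimate, the item whose information I(θ) = a²P(θ)(1 − P(θ)) is largest. The item pool and ability estimate below are hypothetical, and the sketch omits the stratification refinements the study compares:

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Item information at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (2.0, 1.5)]  # (a, b) pairs
theta_hat = 0.3
best = max(items, key=lambda ab: fisher_info(theta_hat, *ab))
```

Note how the highly discriminating item (a = 2.0) loses to a moderate one whose difficulty sits near θ̂: information peaks where difficulty matches ability, which is what the a-stratified procedures exploit by saving high-a items for later in the test.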
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
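As an illustration of the principle (not the self-thinning model itself), the maximum-entropy distribution on a finite grid subject to a single mean constraint has the exponential-family form p_i ∝ exp(−λx_i). The sketch below finds λ by bisection, using an assumed support and target mean:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 101)  # support grid for the uncertain parameter
target_mean = 3.0                # the single piece of "literature information"

def mean_for(lam):
    """Mean of the maximum-entropy distribution p_i ~ exp(-lam * x_i)."""
    w = np.exp(-lam * x)
    p = w / w.sum()
    return (p * x).sum()

# mean_for is strictly decreasing in lam, so bisect for the multiplier
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid
    else:
        hi = mid
p = np.exp(-lo * x)
p /= p.sum()                     # maximum-entropy distribution matching the mean
```

With more constraints (variance, bounds) the same Lagrange-multiplier structure applies, one multiplier per constraint, which is how the Maximum-Entropy Principle turns scarce literature information into full parameter distributions.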
EPA is developing approaches to inform the derivation of a Maximum Contaminant Level Goal (MCLG) for perchlorate in drinking water under the Safe Drinking Water Act. EPA previously conducted an independent, external, scientific peer review of the draft biologically-based dose-res...
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu
2016-10-01
One of the main goals in cosmology is to understand how the Universe evolves, how it forms structures, why it expands, and what the nature of dark matter and dark energy is. In the next decade, large and expensive observational projects will bring information on the structure and the distribution of many millions of galaxies at different redshifts, enabling us to make great progress in answering these questions. However, these data require a very special and complex set of analysis tools to extract the maximum valuable information. Statistical inference techniques are being developed, bridging the gaps between theory, simulations, and observations. In particular, we discuss the efforts to address the question: what is the underlying nonlinear matter distribution and dynamics at any cosmic time corresponding to a set of observed galaxies in redshift space? An accurate reconstruction of the initial conditions encodes the full phase-space information at any later cosmic time (given a particular structure formation model and a set of cosmological parameters). We present advances to solve this problem in a self-consistent way with Big Data techniques of the Cosmic Web.
46 CFR 151.45-6 - Maximum amount of cargo.
Code of Federal Regulations, 2012 CFR
2012-10-01
... CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a) Tanks carrying liquids or liquefied gases at ambient temperatures regulated by this subchapter shall be limited in the amount of cargo loaded to that which will avoid the tank being liquid full at 105 °F if...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2012 CFR
2012-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2011 CFR
2011-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2014 CFR
2014-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2013 CFR
2013-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2010 CFR
2010-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...
NASA Technical Reports Server (NTRS)
Cook, Harvey A; Heinicke, Orville H; Haynie, William H
1947-01-01
An investigation was conducted on a full-scale air-cooled cylinder in order to establish an effective means of maintaining maximum-economy spark timing with varying engine operating conditions. Variable fuel-air-ratio runs were conducted in which relations were determined between the spark travel and cylinder-pressure rise. An instrument for controlling spark timing was developed that automatically maintained maximum-economy spark timing with varying engine operating conditions. The instrument also indicated the occurrence of preignition.
Power optimal single-axis articulating strategies
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Heck, Michael L.
1991-01-01
Power optimal single-axis articulating PV array motion for Space Station Freedom is investigated. The motivation is to eliminate one of the articulating joints to reduce Station costs. Optimal (maximum power) Beta tracking is addressed for local vertical local horizontal (LVLH) and non-LVLH attitudes. Effects of intra-array shadowing are also presented. Maximum power availability while Beta tracking is compared to full sun tracking and optimal alpha tracking. The results are quantified in orbital and yearly minimum, maximum, and average values of power availability.
Human vision is determined based on information theory.
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-11-03
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
Human vision is determined based on information theory
NASA Astrophysics Data System (ADS)
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-11-01
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
Human vision is determined based on information theory
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-01-01
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236
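A quick consistency check on the intensity side of the argument: the Wien displacement peak of solar radiation alone, before any atmospheric filtering or entropy considerations, already lands near the scotopic value quoted above. The constants below are standard physical values, not taken from the paper:

```python
WIEN_B = 2.897771955e-3   # Wien displacement law constant, m*K
T_SUN = 5778.0            # effective solar surface temperature, K

# Peak of the Planck spectral radiance per unit wavelength
lam_peak_nm = WIEN_B / T_SUN * 1e9
```

The result, roughly 501 nm, sits close to the scotopic peak (508 nm) but well short of the photopic peak (555 nm), which is consistent with the authors' claim that intensity alone cannot explain both absorption maxima and that entropy and atmospheric composition must enter the picture.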
Interactive communication channel
NASA Astrophysics Data System (ADS)
Chan, R. H.; Mann, M. R.; Ciarrocchi, J. A.
1985-10-01
Discussed is an interactive communications channel (ICC) for providing a digital computer with high-performance multi-channel interfaces. Sixteen full duplex channels can be serviced in the ICC, with the sequence or scan pattern being programmable and dependent upon the number of channels and their speed. A channel buffer system, operating on a byte basis, is used for line interface and character exchange. The ICC performs frame start and frame end functions, bit stripping and bit stuffing. Data is stored in a memory in block format (256 bytes maximum) by a program control, and the ICC maintains byte address information and a block byte count. Data exchange with a memory is made by cycle steals. Error detection is also provided using a cyclic redundancy check technique.
Information transmission over an amplitude damping channel with an arbitrary degree of memory
NASA Astrophysics Data System (ADS)
D'Arrigo, Antonio; Benenti, Giuliano; Falci, Giuseppe; Macchiavello, Chiara
2015-12-01
We study the performance of a partially correlated amplitude damping channel acting on two qubits. We derive lower bounds for the single-shot classical capacity by studying two kinds of quantum ensembles, one which allows us to maximize the Holevo quantity for the memoryless channel and the other allowing the same task but for the full-memory channel. In these two cases we also show the amount of entanglement which is involved in achieving the maximum of the Holevo quantity. For the single-shot quantum capacity we discuss both a lower and an upper bound, achieving a good estimate for high values of the channel transmissivity. We finally compute the entanglement-assisted classical channel capacity.
On a full Bayesian inference for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to account mathematically for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To address this question, this paper fully exploits the Bayesian framework to provide, from a Markov chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
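The step from a point estimate to full posterior summaries can be illustrated with a toy one-parameter problem. All numbers below are hypothetical, and a plain random-walk Metropolis sampler stands in for whatever MCMC variant the authors use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: measured vibration y = g * F + noise,
# with a single unknown force amplitude F.
g = np.linspace(0.5, 1.5, 20)          # known transfer coefficients
F_true, sigma = 2.0, 0.1
y = g * F_true + sigma * rng.normal(size=g.size)

def log_post(F):
    # Gaussian likelihood plus a broad Gaussian prior N(0, 10^2)
    return -0.5 * np.sum((y - g * F) ** 2) / sigma**2 - 0.5 * F**2 / 100.0

# Random-walk Metropolis
chain, F = [], 0.0
lp = log_post(F)
for _ in range(20000):
    F_prop = F + 0.05 * rng.normal()
    lp_prop = log_post(F_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        F, lp = F_prop, lp_prop
    chain.append(F)
samples = np.array(chain[5000:])        # discard burn-in

# Posterior summaries of the kind discussed above:
mean, median = samples.mean(), np.median(samples)
ci_lo, ci_hi = np.percentile(samples, [2.5, 97.5])   # 95% credible interval
```

The same percentile read-off generalizes to every parameter of the chain, which is what turns a single regularized solution into a full uncertainty assessment.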
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of groundwater contaminant source parameters. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximate likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of the approach are demonstrated through numerical case studies. Compared with traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and the identification of unknown contaminant sources. (Figure captions: contours of the expected information gain, with the optimal observing location at the maximum value; posterior marginal probability densities of the unknown parameters for the designed location versus seven randomly chosen locations, with true values marked by vertical lines, showing that the unknown parameters are estimated better with the designed location.)
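The design criterion can be sketched with a nested Monte Carlo estimator of the expected relative entropy (expected information gain). The Gaussian noise model and the toy forward models below are our own choices, and the paper's sparse-grid surrogate acceleration is omitted:

```python
import numpy as np

def expected_information_gain(forward, prior_samples, sigma, rng):
    """Nested Monte Carlo estimate of the expected relative entropy between
    posterior and prior for one candidate design. `forward` maps prior
    parameter samples to the data predicted at that design."""
    G = forward(prior_samples)                  # model predictions, shape (N,)
    N = G.size
    eig = 0.0
    for i in range(N):
        y = G[i] + sigma * rng.normal()         # simulate one measurement
        log_lik = -0.5 * ((y - G) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
        # log p(y|theta_i, d) minus log p(y|d), evidence by Monte Carlo average
        eig += log_lik[i] - (np.logaddexp.reduce(log_lik) - np.log(N))
    return eig / N

rng = np.random.default_rng(1)
theta = rng.normal(size=2000)                   # prior samples of the unknown

# A design whose prediction ignores theta carries no information (EIG ~ 0);
# one whose prediction depends on theta has positive expected gain.
eig_bad = expected_information_gain(lambda t: np.zeros_like(t), theta, 0.1, rng)
eig_good = expected_information_gain(lambda t: t, theta, 0.1, rng)
```

Selecting the candidate location with the largest estimate is exactly the "maximum relative entropy" rule described above; the surrogate only makes each likelihood evaluation cheap.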
NASA Astrophysics Data System (ADS)
Hämmerle, M.; Lukač, N.; Chen, K.-C.; Koma, Zs.; Wang, C.-K.; Anders, K.; Höfle, B.
2017-09-01
Information about the 3D structure of understory vegetation is of high relevance in forestry research and management (e.g., for complete biomass estimations). However, it has hardly been investigated systematically with state-of-the-art methods such as static terrestrial laser scanning (TLS) or laser scanning from unmanned aerial vehicle platforms (ULS). A prominent challenge for scanning forests is posed by occlusion, calling for proper TLS scan position or ULS flight line configurations in order to achieve an accurate representation of understory vegetation. The aim of our study is to examine the effect of TLS or ULS scanning strategies on (1) the height of individual understory trees and (2) understory canopy height raster models. We simulate full-waveform TLS and ULS point clouds of a virtual forest plot captured from various combinations of up to 12 TLS scan positions or 3 ULS flight lines. The accuracy of the respective datasets is evaluated with reference values given by the virtually scanned 3D triangle mesh tree models. TLS tree height underestimation ranges up to 1.84 m (15.30 % of tree height) for single TLS scan positions, but combining three scan positions reduces the underestimation to a maximum of 0.31 m (2.41 %). Combining ULS flight lines also results in improved tree height representation, with a maximum underestimation of 0.24 m (2.15 %). The presented simulation approach offers a complementary source of information for efficient planning of field campaigns aiming at understory vegetation modelling.
Pneumatic strength assessment device: design and isometric measurement.
Paulus, David C; Reiser, Raoul F; Troxell, Wade O
2004-01-01
In order to load a muscle optimally during resistance exercise, it should be heavily taxed throughout the entire range of motion for that exercise. However, traditional constant-resistance squats only tax the lower-extremity muscles to their limits at the "sticking region," a critical joint configuration of the exercise cycle. Therefore, a linear motion (Smith) exercise machine was modified with pneumatics and appropriate computer control so that it is capable of adjusting force to control velocity within a repetition of the squat or another exercise performed with the device. Prior to application of this device in a dynamic squat setting, the maximum voluntary isometric force (MVIF) produced over a spectrum of knee angles is needed. This reveals the sticking region and the overall variation in strength capacity. Five incremental knee angles (90, 110, 130, 150, and 170 degrees, where 180 degrees defined full extension) were examined. After obtaining university-approved informed consent, 12 men and 12 women participated in the study. The knee angle was set, and the pneumatic cylinder was pressurized such that the subject could move the barbell slightly but no more than two centimeters. The peak pressure exerted over a five-second maximum effort interval was recorded at each knee angle in random order and then repeated. The average of both efforts was then used for further analysis. The sticking region occurred consistently at a 90-degree knee angle; however, the maximum force produced varied between 110 and 170 degrees, with the greatest frequency at 150 degrees for both men and women. The percent difference between the maximum and minimum MVIF was 46% for men and 57% for women.
Comparison of full width at half maximum and penumbra of different Gamma Knife models.
Asgari, Sepideh; Banaee, Nooshin; Nedaie, Hassan Ali
2018-01-01
As a radiosurgical tool, the Gamma Knife has the best and most widespread name recognition. The Gamma Knife is a noninvasive intracranial technique invented and developed by the Swedish neurosurgeon Lars Leksell. The first commercial Leksell Gamma Knife entered the therapeutic armamentarium at the University of Pittsburgh in the United States in August 1987. Since that time, different generations of the Gamma Knife have been developed. In this study, the technical points and dosimetric parameters, including full width at half maximum and penumbra, of different generations of the Gamma Knife are reviewed and compared. The results of this review show that the rotating gamma system provides better dose conformity.
Single-photon imager based on a superconducting nanowire delay line
NASA Astrophysics Data System (ADS)
Zhao, Qing-Yuan; Zhu, Di; Calandri, Niccolò; Dane, Andrew E.; McCaughan, Adam N.; Bellei, Francesco; Wang, Hao-Zhu; Santavicca, Daniel F.; Berggren, Karl K.
2017-03-01
Detecting spatial and temporal information of individual photons is critical to applications in spectroscopy, communication, biological imaging, astronomical observation and quantum-information processing. Here we demonstrate a scalable single-photon imager using a single continuous superconducting nanowire that is not only a single-photon detector but also functions as an efficient microwave delay line. In this context, photon-detection pulses are guided in the nanowire and enable the readout of the position and time of photon-absorption events from the arrival times of the detection pulses at the nanowire's two ends. Experimentally, we slowed down the velocity of pulse propagation to ∼2% of the speed of light in free space. In a 19.7 mm long nanowire that meandered across an area of 286 × 193 μm2, we were able to resolve ∼590 effective pixels with a temporal resolution of 50 ps (full width at half maximum). The nanowire imager presents a scalable approach for high-resolution photon imaging in space and time.
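The position readout described above reduces to simple arithmetic on the two pulse arrival times. A sketch using the reported wire length and propagation speed; the function and variable names are ours:

```python
C = 299_792_458.0                  # speed of light in vacuum, m/s
v = 0.02 * C                       # pulse velocity along the nanowire (~2% of c)
L = 19.7e-3                        # nanowire length, m
T = L / v                          # full traversal time, ~3.3 ns

def decode(t1, t2):
    """Photon absorption position (measured from the wire midpoint) and
    absorption time, from the pulse arrival times at the nanowire's two ends."""
    x = 0.5 * v * (t2 - t1)        # the pulse nearer an end arrives earlier
    t0 = 0.5 * (t1 + t2 - T)       # common propagation delay removed
    return x, t0

# A photon absorbed at the midpoint arrives at both ends simultaneously:
x, t0 = decode(T / 2, T / 2)       # x == 0.0, t0 == 0.0
```

Slowing the pulses to ~2% of c is what makes this work: the 50 ps timing resolution then corresponds to a position resolution of tens of micrometres along the wire, giving the ~590 effective pixels quoted above.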
Ekama, G A; Marais, P
2004-02-01
The applicability of the one-dimensional idealized flux theory (1DFT) for the design of secondary settling tanks (SSTs) is evaluated by comparing its predicted maximum surface overflow rate (SOR) and solids loading rate (SLR) with those calculated with the two-dimensional computational fluid dynamics model SettlerCAD, using as a basis 35 full-scale SST stress tests conducted on different SSTs with diameters from 30 to 45 m and 2.25-4.1 m side water depth (SWD), with and without Stamford baffles. From the simulations, a relatively consistent pattern appeared: the 1DFT can be used for design, but its predicted maximum SLR needs to be reduced by an appropriate flux rating, the magnitude of which depends mainly on SST depth and hydraulic loading rate (HLR). Simulations of the Watts et al. (Water Res. 30(9) (1996) 2112) SST, with doubled SWDs, and the Darvill new (4.1 m) and old (2.5 m) SSTs, with interchanged depths, were run to confirm the sensitivity of the flux rating to depth and HLR. Simulations with and without a Stamford baffle were also performed. While the design of the internal features of the SST, such as baffling, has a marked influence on the effluent SS concentration while the SST is underloaded, these features appeared to have only a small influence on the flux rating, i.e. capacity, of the SST. Until more information is obtained, it would appear from the simulations that the flux rating of 0.80 of the 1DFT maximum SLR recommended by Ekama and Marais (Water Pollut. Control 85(1) (1986) 101) remains a reasonable value to apply in the design of full-scale SSTs; for deep SSTs (4 m SWD) the flux rating could be increased to 0.85, and for shallow SSTs (2.5 m SWD) decreased to 0.75.
It is recommended that (i) while the apparent interrelationship between SST flux rating and depth suggests some optimization of the volume of the SST, this be avoided and (ii) the depth of the SST be designed independently of the surface area as is usually the practice and once selected, the appropriate flux rating applied to the 1DFT estimate of the surface area.
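Applied in design, the recommendation above amounts to de-rating the 1DFT maximum SLR by a depth-dependent factor. A minimal sketch of that rule; the step-function encoding is our own, since the paper quotes only the three anchor values:

```python
def flux_rating(swd_m):
    """Flux rating applied to the 1DFT maximum solids loading rate, per the
    values quoted above: 0.75 for shallow SSTs (about 2.5 m SWD), 0.85 for
    deep SSTs (about 4 m SWD), and the general recommendation of 0.80
    otherwise. Intermediate depths simply fall back to 0.80 here."""
    if swd_m <= 2.5:
        return 0.75
    if swd_m >= 4.0:
        return 0.85
    return 0.80

def design_slr(slr_1dft, swd_m):
    """De-rated solids loading rate for full-scale SST design."""
    return flux_rating(swd_m) * slr_1dft

# e.g. a 1DFT maximum SLR of 10 kg/(m^2 h) on a 4.1 m deep tank:
slr = design_slr(10.0, 4.1)        # 8.5 kg/(m^2 h)
```

Note this keeps surface area design (via the de-rated SLR) independent of the depth choice, consistent with recommendation (ii) above.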
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-22
... information about the cost to elect less than the maximum survivor annuity. This letter may be used to decline... about the cost to elect the maximum survivor annuity. This letter may be used to ask for more... who do not have a former spouse who is entitled to a survivor annuity benefit. RI 20-63B is for those...
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1994-01-01
In microgravity conditions, mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurements of the maximum reach of occupants of a microgravity environment provide knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are therefore imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.
Fleetcroft, Robert; Steel, Nicholas; Cookson, Richard; Howe, Amanda
2008-06-17
The 2003 revision of the UK GMS contract rewards general practices for performance against clinical quality indicators. Practices can exempt patients from treatment, and can receive maximum payment for less than full coverage of eligible patients. This paper aims to estimate the gap between the percentage of maximum incentive gained and the percentage of patients receiving indicated care (the pay-performance gap), and to estimate how much of the gap is attributable respectively to thresholds and to exception reporting. Analysis of Quality and Outcomes Framework data in the National Primary Care Database and exception reporting data from the Information Centre from 8407 practices in England in 2005-6. The main outcome measures were the gap between the percentage of maximum incentive gained and the percentage of patients receiving indicated care at the practice level, both for individual indicators and for a combined composite score. An additional outcome was the percentage of that gap attributable respectively to exception reporting and to maximum threshold targets set at less than 100%. The mean pay-performance gap for the 65 aggregated clinical indicators was 13.3% (range 2.9% to 48%). 52% of this gap (6.9% of eligible patients) is attributable to thresholds being set at less than 100%, and 48% to patients being exception reported. The gap was greater than 25% for 9 indicators: beta-blockers and cholesterol control in heart disease; cholesterol control in stroke; influenza immunization in asthma; blood pressure, sugar and cholesterol control in diabetes; seizures in epilepsy; and treatment of hypertension. Threshold targets and exception reporting introduce an incentive ceiling, which substantially reduces the percentage of eligible patients that UK practices need to treat in order to receive maximum incentive payments for delivering that care.
There are good clinical reasons for exception reporting, but after unsuitable patients have been exempted from treatment, there is no reason why all maximum thresholds should not be 100%, whilst retaining the current lower thresholds to provide incentives for lower performing practices.
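The incentive ceiling can be written down directly. In the sketch below, e and t are illustrative round numbers, not values taken from the paper's per-indicator data; they are chosen only so the aggregates come out near the reported 13.3% mean gap and its 52%/48% split:

```python
# Hypothetical averages: exception rate e and maximum-payment threshold t.
e = 0.064          # fraction of eligible patients exception-reported
t = 0.926          # maximum threshold (fraction of non-excepted patients)

# A practice earns the full incentive after treating t*(1-e) of all
# eligible patients, so the pay-performance gap decomposes as:
treated_for_max = (1 - e) * t          # fraction of eligible patients treated
gap = 1 - treated_for_max              # total pay-performance gap
threshold_part = (1 - e) * (1 - t)     # part attributable to the threshold
exception_part = e                     # part attributable to exceptions

# gap ~ 0.133 (13.3%); threshold share ~ 52%, exception share ~ 48%
```

The decomposition gap = e + (1-e)(1-t) makes the paper's closing point concrete: setting every maximum threshold t to 100% removes the threshold term entirely while leaving exception reporting intact.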
NASA Astrophysics Data System (ADS)
Dondurur, Derman
2005-11-01
The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on the continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth of the top of the conductor at low harmonic numbers. The NFG sections consisted of two main local maxima located on either side of the central negative Slingram anomaly. It is concluded that these two maxima also locate the maximum anomaly gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
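For potential field profiles, the NFG is commonly computed from Fourier-domain derivatives of the downward-continued field, normalized by the mean full gradient at each depth level. The sketch below follows that textbook definition only; the synthetic anomaly, grid, and truncation level are arbitrary choices of ours, and the paper's adaptation to Slingram EM data differs in detail:

```python
import numpy as np

def nfg_section(f, dx, depths, n_harmonics=10):
    """Normalized full gradient of a profile anomaly f(x), continued downward.
    Derivatives are taken in the wavenumber domain; the series is truncated
    at a low harmonic number (the regime the abstract highlights), and the
    full gradient at each depth is normalized by its mean along the profile."""
    N = f.size
    F = np.fft.rfft(f)
    F[n_harmonics:] = 0.0                       # keep only low harmonics
    k = 2 * np.pi * np.fft.rfftfreq(N, d=dx)    # non-negative wavenumbers
    section = []
    for z in depths:
        cont = F * np.exp(k * z)                # downward continuation to depth z
        dfdx = np.fft.irfft(1j * k * cont, n=N) # horizontal derivative
        dfdz = np.fft.irfft(k * cont, n=N)      # vertical derivative
        G = np.hypot(dfdx, dfdz)                # full gradient magnitude
        section.append(G / G.mean())            # normalization
    return np.array(section)                    # shape (n_depths, N)

# Synthetic anomaly over a buried line source at depth h (arbitrary example):
x = np.linspace(-50.0, 50.0, 256)
h = 10.0
f = h / (x**2 + h**2)
nfg = nfg_section(f, dx=x[1] - x[0], depths=np.linspace(1.0, 20.0, 20))
# Each row of `nfg` is non-negative with mean 1 by construction; the depth
# of its maximum is read off as the source-depth estimate.
```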
Electrical conductivity of the Earth's mantle after one year of SWARM magnetic field measurements
NASA Astrophysics Data System (ADS)
Civet, François; Thebault, Erwan; Verhoeven, Olivier; Langlais, Benoit; Saturnino, Diana
2015-04-01
We present a global EM induction study using L1b Swarm satellite magnetic field measurements down to a depth of 2000 km. Starting from raw measurements, we first derive a model for the main magnetic field, correct the data for a lithospheric field model, and further select the data to reduce the contributions of the ionospheric field. These computations allow us to keep full control over the data processing. We remove outliers from the residual field and estimate the spherical harmonic coefficients of the transient field for periods between 2 and 256 days. We used the full latitude range and all local times to keep a maximum amount of data. We perform a Bayesian inversion and construct a Markov chain during which model parameters are randomly updated at each iteration. We first consider regular layers of equal thickness, and extra layers are added where the conductivity contrast between successive layers exceeds a threshold value. The mean and maximum likelihood of the electrical conductivity profile are then estimated from the probability density function. The obtained profile shows, in particular, a conductivity jump in the 600-700 km depth range, consistent with the olivine phase transition at 660 km depth. Our study is the first to show such a conductivity increase in this depth range without any a priori information on the internal structures. Assuming a pyrolitic mantle composition, this profile is interpreted in terms of temperature variations in the depth range where the probability density function is the narrowest. We finally obtain a temperature gradient in the lower mantle close to adiabatic.
Molecular and Clinical Characterization of Chikungunya Virus Infections in Southeast Mexico.
Galán-Huerta, Kame A; Martínez-Landeros, Erik; Delgado-Gallegos, Juan L; Caballero-Sosa, Sandra; Malo-García, Iliana R; Fernández-Salas, Ildefonso; Ramos-Jiménez, Javier; Rivas-Estilla, Ana M
2018-05-09
Chikungunya fever is an arthropod-borne infection caused by Chikungunya virus (CHIKV). Even though clinical features of Chikungunya fever in the Mexican population have been described before, detailed information is lacking. The aim of this study was to perform a full description of the clinical features in confirmed Chikungunya-infected patients and to describe the molecular epidemiology of CHIKV. We evaluated febrile patients who sought medical assistance in Tapachula, Chiapas, Mexico, from June through July 2015. Infection was confirmed with molecular and serological methods. Viruses were isolated and the E1 gene was sequenced. Phylogeny reconstruction was inferred using maximum-likelihood and maximum clade credibility approaches. We studied 52 patients with confirmed CHIKV infection. They were more likely to have wrist, metacarpophalangeal, and knee arthralgia. Two combinations of clinical features were obtained to differentiate between Chikungunya fever and acute undifferentiated febrile illness. We obtained 10 CHIKV E1 sequences that grouped with the Asian lineage. Seven strains diverged from those formerly reported. Patients infected with the divergent CHIKV strains showed a broader spectrum of clinical manifestations. We defined the complete clinical features of Chikungunya fever in patients from Southeastern Mexico. Our results demonstrate co-circulation of different CHIKV strains in the state of Chiapas.
van Dam, Herman T; Borghi, Giacomo; Seifert, Stefan; Schaart, Dennis R
2013-05-21
Digital silicon photomultiplier (dSiPM) arrays have favorable characteristics for application in monolithic scintillator detectors for time-of-flight positron emission tomography (PET). To fully exploit these benefits, a maximum likelihood interaction time estimation (MLITE) method was developed to derive the time of interaction from the multiple time stamps obtained per scintillation event. MLITE was compared to several deterministic methods. Timing measurements were performed with monolithic scintillator detectors based on novel dSiPM arrays and LSO:Ce,0.2%Ca crystals of 16 × 16 × 10 mm³, 16 × 16 × 20 mm³, 24 × 24 × 10 mm³, and 24 × 24 × 20 mm³. The best coincidence resolving times (CRTs) for pairs of identical detectors were obtained with MLITE and measured 157 ps, 185 ps, 161 ps, and 184 ps full-width-at-half-maximum (FWHM), respectively. For comparison, a small reference detector, consisting of a 3 × 3 × 5 mm³ LSO:Ce,0.2%Ca crystal coupled to a single pixel of a dSiPM array, was measured to have a CRT as low as 120 ps FWHM. The results of this work indicate that the influence of the optical transport of the scintillation photons on the timing performance of monolithic scintillator detectors can at least partially be corrected for by utilizing the information contained in the spatio-temporal distribution of the collection of time stamps registered per scintillation event.
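The estimation principle can be illustrated under a strong simplification: if each time stamp were the interaction time plus a known mean optical-transport delay plus independent Gaussian jitter, the maximum likelihood estimate would reduce to an inverse-variance weighted mean of the delay-corrected stamps. The published MLITE method uses calibrated, generally non-Gaussian timestamp distributions, so the following is only a conceptual sketch with invented numbers:

```python
import numpy as np

def ml_interaction_time(stamps, delays, sigmas):
    """ML estimate of the scintillation interaction time from multiple SiPM
    time stamps, assuming stamp_i = t0 + delay_i + Gaussian jitter (sigma_i).
    Under that assumption the ML solution is the inverse-variance weighted
    mean of the delay-corrected stamps."""
    stamps, delays, sigmas = map(np.asarray, (stamps, delays, sigmas))
    w = 1.0 / sigmas**2
    return np.sum(w * (stamps - delays)) / np.sum(w)

# Hypothetical event: three pixels with different mean transport delays and
# jitters (all numbers invented, in picoseconds):
t0 = ml_interaction_time(stamps=[120.0, 205.0, 310.0],
                         delays=[100.0, 200.0, 300.0],
                         sigmas=[50.0, 100.0, 100.0])
# corrected stamps are 20, 5, 10 ps; the estimate is their weighted mean
```

Combining many stamps this way is why the monolithic detectors approach the timing of the small reference crystal despite the extra optical transport spread.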
Curtis L. VanderSchaaf; Harold E. Burkhart
2010-01-01
Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
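The contrast between the two criteria can be made concrete with a toy two-class problem: MLE fits each class-conditional model to its own data alone, whereas the MMI criterion scores parameters by the posterior probability of the correct class, i.e. the class-conditional term minus the marginal over all competing classes. A sketch with simple Gaussian class models, far simpler than the paper's hidden Markov random field:

```python
import numpy as np

def log_gauss(x, mu, sigma=1.0):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def mmi_objective(mus, X, y, prior=0.5):
    """Average log p(class | x): the class-conditional log-likelihood of the
    correct class minus the log marginal over all classes. Maximizing this
    (rather than log p(x | class)) is the MMI criterion."""
    total = 0.0
    for x, c in zip(X, y):
        joint = np.array([log_gauss(x, m) + np.log(prior) for m in mus])
        total += joint[c] - np.logaddexp.reduce(joint)
    return total / len(X)

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(-2.0, 1.0, 200),    # class 0 samples
                    rng.normal(+2.0, 1.0, 200)])   # class 1 samples
y = np.array([0] * 200 + [1] * 200)

# Well-separated class means discriminate well (posterior near 1, objective
# near 0); collapsed means discriminate poorly.
good = mmi_objective([-2.0, 2.0], X, y)
bad = mmi_objective([-0.1, 0.1], X, y)
```

Because the objective depends on the competing classes through the marginal term, parameters that merely fit each class's data can still score badly; that is the sense in which MMIE does not presume model correctness.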
45 CFR 2552.25 - What are a sponsor's administrative responsibilities?
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Assume full responsibility for securing maximum and continuing community financial and in-kind support to operate the project successfully. (b) Provide levels of staffing and resources appropriate to accomplish the purposes of the project and carry out its project management responsibilities. (c) Employ a full...
2007-06-01
Table 2 lists the best (maximum free distance) rate r = 2/3 punctured convolutional code information weight structure, computed from the Hamming distance between all pairs of non-zero paths; its columns give the constraint length K, the free distance, and the free information weight (from [12]).
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Seitz, Kurt; Heck, Bernhard
2017-03-01
National height reference systems have conventionally been linked to the local mean sea level, observed at individual tide gauges. Due to variations in the sea surface topography, the reference levels of these systems are inconsistent, causing height datum offsets of up to ±1-2 m. For the unification of height systems, a satellite-based method is presented that utilizes global geopotential models (GGMs) derived from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE). In this context, height datum offsets are estimated within a least squares adjustment by comparing the GGM information with measured GNSS/leveling data. While the GNSS/leveling data comprises the full spectral information, GOCE GGMs are restricted to long wavelengths according to the maximum degree of their spherical harmonic representation. To provide accurate height datum offsets, it is indispensable to account for the remaining signal above this maximum degree, known as the omission error of the GGM. Therefore, a combination of the GOCE information with the high-resolution Earth Gravitational Model 2008 (EGM2008) is performed. The main contribution of this paper is to analyze the benefit, when high-frequency topography-implied gravity signals are additionally used to reduce the remaining omission error of EGM2008. In terms of a spectral extension, a new method is proposed that does not rely on an assumed spectral consistency of topographic heights and implied gravity as is the case for the residual terrain modeling (RTM) technique. In the first step of this new approach, gravity forward modeling based on tesseroid mass bodies is performed according to the Rock-Water-Ice (RWI) approach. In a second step, the resulting full spectral RWI-based topographic potential values are reduced by the effect of the topographic gravity field model RWI_TOPO_2015, thus, removing the long to medium wavelengths. 
By using the latest GOCE GGMs, the impact of topography-implied gravity signals on the estimation of height datum offsets is analyzed in detail for representative GNSS/leveling data sets in Germany, Austria, and Brazil. Besides considerable changes in the estimated offset of up to 3 cm, the conducted analyses show that significant improvements of 30-40% can be achieved in terms of a reduced standard deviation and range of the least squares adjusted residuals.
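The offset-estimation step can be sketched as an ordinary least squares adjustment of GNSS/leveling-minus-GGM residuals against datum-zone indicator variables. Everything below (zone layout, offset values, noise level) is synthetic; the real adjustment additionally carries the spectral-extension terms discussed above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic test: n benchmarks spread over two height datum zones whose
# true offsets are +0.45 m and -0.30 m (invented values).
n = 200
zone = rng.integers(0, 2, size=n)            # datum zone of each benchmark
true_offset = np.array([0.45, -0.30])

# Residual per benchmark: ellipsoidal height minus leveled height minus
# GGM geoid height = datum offset + noise (omission/commission error).
r = true_offset[zone] + 0.05 * rng.normal(size=n)

# Least squares adjustment with one indicator column per datum zone:
A = np.column_stack([(zone == k).astype(float) for k in range(2)])
offsets, *_ = np.linalg.lstsq(A, r, rcond=None)
# offsets ~ [0.45, -0.30]: with indicator columns this is simply the
# per-zone mean of the residuals
```

Reducing the omission error (here lumped into the noise term) tightens exactly the residuals this adjustment averages, which is why the topography-implied signals shrink the standard deviation of the adjusted residuals by the reported 30-40%.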
Liu, Yang; Qian, Chenyun; Ding, Sihui; Shang, Xulan; Yang, Wanxia; Fang, Shengzuo
2016-12-01
As a highly valued, multi-purpose tree species, Cyclocarya paliurus is planted and managed for timber production and medicinal use. However, limited information is available on its genotype selection and cultivation for growth and phytochemicals. The responses of growth and secondary metabolites to light regimes and genotypes provide useful information for determining suitable habitat conditions for the cultivation of medicinal plants. Both light regime and provenance significantly affected leaf characteristics, leaf flavonoid contents, biomass production and flavonoid accumulation per plant. Leaf thickness, length of palisade cells and chlorophyll a/b decreased significantly under shading conditions, while leaf area and total chlorophyll content increased markedly. Under full light, leaf flavonoid contents showed a bimodal temporal variation pattern, with the maximum observed in August and a second peak in October, while shading not only reduced the leaf flavonoid contents but also delayed the appearance of the peak flavonoid contents in the leaves of C. paliurus. Strong correlations were found between leaf thickness, palisade length, monthly light intensity and the measured flavonoid contents in the leaves of C. paliurus. The Muchuan provenance under full light achieved the highest leaf biomass and flavonoid accumulation per plant. Cyclocarya paliurus genotypes show diverse responses to different light regimes in leaf characteristics, biomass production and flavonoid accumulation, highlighting the opportunity for extensive selection in leaf flavonoid production.
Black bear density in Glacier National Park, Montana
Stetz, Jeff B.; Kendall, Katherine C.; Macleod, Amy C.
2013-01-01
We report the first abundance and density estimates for American black bears (Ursus americanus) in Glacier National Park (NP), Montana, USA. We used data from 2 independent and concurrent noninvasive genetic sampling methods—hair traps and bear rubs—collected during 2004 to generate individual black bear encounter histories for use in closed-population mark–recapture models. We improved the precision of our abundance estimate by using noninvasive genetic detection events to develop individual-level covariates of sampling effort within the full and one-half mean maximum distance moved (MMDM) from each bear's estimated activity center to explain capture probability heterogeneity and inform our estimate of the effective sampling area. Models including the one-half MMDM covariate received overwhelming Akaike's Information Criterion support, suggesting that buffering our study area by this distance would be more appropriate than no buffer or the full MMDM buffer for estimating the effectively sampled area and thereby density. Our model-averaged super-population abundance estimate was 603 (95% CI = 522–684) black bears for Glacier NP. Our black bear density estimate (11.4 bears/100 km², 95% CI = 9.9–13.0) was consistent with published estimates for populations that are sympatric with grizzly bears (U. arctos) and without access to spawning salmonids. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
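The final conversion from abundance to density is a simple ratio over the effectively sampled (buffered) area. A sketch; the area value below is hypothetical, chosen only to be consistent with the reported abundance and density, since the abstract does not quote it:

```python
def density_per_100km2(n_hat, area_km2):
    """Bears per 100 km^2 from an abundance estimate and the effectively
    sampled area (study area buffered by one-half MMDM)."""
    return 100.0 * n_hat / area_km2

# With a hypothetical effective area of 5290 km^2, the reported abundance
# and CI endpoints map to densities close to the published values:
d = density_per_100km2(603, 5290.0)      # ~ 11.4 bears / 100 km^2
lo = density_per_100km2(522, 5290.0)     # ~ 9.9
hi = density_per_100km2(684, 5290.0)     # ~ 12.9
```

This is why the buffer choice matters so much: halving versus doubling the MMDM buffer changes the denominator, and hence the density, even when the abundance estimate is unchanged.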
14 CFR 29.1045 - Climb cooling test procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... continuous power (or at full throttle when above the critical altitude); (2) For helicopters for which the... the critical altitude); and (3) For other rotorcraft, be at maximum continuous power (or at full throttle when above the critical altitude). (d) After temperatures have stabilized in flight, the climb...
14 CFR 29.1045 - Climb cooling test procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... continuous power (or at full throttle when above the critical altitude); (2) For helicopters for which the... the critical altitude); and (3) For other rotorcraft, be at maximum continuous power (or at full throttle when above the critical altitude). (d) After temperatures have stabilized in flight, the climb...
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2015-01-01
The main objective of this tutorial article is to review the main inference tools of the Bayesian approach, entropy, information theory and their corresponding geometries. This review focuses mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing, such as entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and, finally, general linear inverse problems.
Wu, Ye; Li, Xiaoming; Wei, Yi; Gu, Yu; Zeng, Haibo
2017-12-21
Photo-communication has attracted great attention because of the rapid development of wireless information transmission technology. However, cryptographic communication remains a great challenge, as it is weakened considerably by the openness of the light channels. Here, visible-infrared dual-mode narrowband perovskite photodetectors were fabricated and a new photo-communication encryption technique was proposed. For the first time, highly narrowband and two-photon absorption (TPA) photoresponses within a single photodetector are demonstrated. The full width at half maximum (FWHM) of the photoresponse is as narrow as 13.6 nm in the visible range, which is superior to state-of-the-art narrowband photodetectors. Furthermore, these two merits of narrowband and TPA characteristics are utilized to encrypt photo-communication based on the above photodetectors. When information and noise signals are sent simultaneously with 532 and 442 nm laser light, the perovskite photodetectors receive only the main information, while a commercial Si photodetector responds to both beams, losing the main information completely. The final data are determined by the secret key through the TPA process as preset. Such narrowband and TPA detection abilities endow the perovskite photodetectors with great potential for future secure communication and also provide new opportunities and platforms for encryption techniques.
77 FR 76169 - Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
.... Correspondence $9,324.89. Post 9/11 Entitlement Charge Amount for Tests Licensing and Certification Tests... DEPARTMENT OF VETERANS AFFAIRS Increase in Maximum Tuition and Fee Amounts Payable under the Post... this notice is to inform the public of the increase in the Post-9/11 GI Bill maximum tuition and fee...
Yuan, Fanglong; Yuan, Ting; Sui, Laizhi; Wang, Zhibin; Xi, Zifan; Li, Yunchao; Li, Xiaohong; Fan, Louzhen; Tan, Zhan'ao; Chen, Anmin; Jin, Mingxing; Yang, Shihe
2018-06-08
Carbon quantum dots (CQDs) have emerged as promising materials for optoelectronic applications on account of carbon's intrinsic merits of high stability, low cost, and environmental friendliness. However, CQDs usually give broad emission with a full width at half maximum exceeding 80 nm, which fundamentally limits their display applications. Here we demonstrate multicolored narrow-bandwidth emission (full width at half maximum of 30 nm) from triangular CQDs with a quantum yield of up to 54-72%. Detailed structural and optical characterizations together with theoretical calculations reveal that the molecular purity and crystalline perfection of the triangular CQDs are key to the high color purity. Moreover, multicolored light-emitting diodes based on these CQDs display good stability, high color purity, and high performance, with a maximum luminance of 1882-4762 cd m-2 and a current efficiency of 1.22-5.11 cd A-1. This work will set the stage for developing next-generation high-performance CQD-based light-emitting diodes.
Development of an all-in-one gamma camera/CCD system for safeguard verification
NASA Astrophysics Data System (ADS)
Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo
2014-12-01
For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained from the fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array of CsI(Na) pixelated scintillation crystals with a pixel size of 2 × 2 × 6 mm3, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance was evaluated using a Co-57 point source placed 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
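The spatial resolution figure quoted above is an FWHM, which is typically extracted from a measured point-source profile by locating the half-maximum crossings. A small sketch of that computation (not the authors' analysis code; the synthetic Gaussian profile is an assumption chosen so its true FWHM matches the reported 4.77 mm):

```python
import math

def fwhm(xs, ys):
    """Full width at half maximum of a single-peaked profile,
    using linear interpolation at the half-maximum crossings."""
    half = max(ys) / 2.0
    def crossing(indices):
        for i in indices:
            y0, y1 = ys[i], ys[i + 1]
            if (y0 - half) * (y1 - half) < 0:      # sign change brackets the crossing
                t = (half - y0) / (y1 - y0)
                return xs[i] + t * (xs[i + 1] - xs[i])
        raise ValueError("no half-maximum crossing found")
    left = crossing(range(len(ys) - 1))            # scan from the left
    right = crossing(range(len(ys) - 2, -1, -1))   # scan from the right
    return right - left

# Synthetic Gaussian profile with sigma chosen so that
# FWHM = 2*sqrt(2*ln 2)*sigma = 4.77 mm, the value reported above.
sigma = 4.77 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
xs = [i * 0.1 - 15.0 for i in range(301)]          # mm grid
ys = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
print(round(fwhm(xs, ys), 2))  # 4.77
```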
Essential Concepts in Modern Health Services
El Taguri, A
2008-01-01
Health services function to define community health problems, identify unmet needs and survey the resources to meet them, establish SMART objectives, and project the administrative actions needed to accomplish proposed action programs. For maximum efficacy, health systems should rely on newer management approaches such as management by objectives, risk management, and performance management, with full and equal participation from professionals and consumers. The public should be well informed about their needs and what is expected of them to improve their health. Inefficient use of budgets allocated to health services should be prevented with tools such as performance management and clinical governance. Data processed into information and intelligence are needed to deal with changing disease patterns and to encourage policies that can cope with the complex feedback system of health. E-health solutions should be instituted to increase effectiveness, improve efficiency, and inform human resources and populations. Suitable legislation should be introduced, including measures that ensure coordination between different sectors. A competent workforce should be given the opportunity to receive appropriate lifelong training. External continuous evaluation using appropriate indicators is vital. Actions should be taken both inside and outside the health sector to monitor changes and overcome constraints. PMID:21499457
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
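The abstract notes that Jaynes' maximum entropy approach requires determining unknown Lagrange coefficients iteratively. A toy sketch of that step (not the paper's restrained-ensemble method): given samples from an unbiased simulation and one experimental average to match, the maximum entropy weights take the exponential form w_i ∝ exp(-λ f(x_i)), and λ can be found by a simple root search because the reweighted mean is monotonically decreasing in λ.

```python
import math

def maxent_weights(fvals, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Find lam so that weights w_i ~ exp(-lam * f_i) reproduce
    the target mean of f; returns (lam, normalized weights)."""
    def reweighted_mean(lam):
        m = max(-lam * f for f in fvals)            # log-sum-exp guard
        w = [math.exp(-lam * f - m) for f in fvals]
        z = sum(w)
        return sum(wi * f for wi, f in zip(w, fvals)) / z
    # d<f>/dlam = -Var(f) < 0, so the mean is decreasing: bisection works.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reweighted_mean(mid) > target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    m = max(-lam * f for f in fvals)
    w = [math.exp(-lam * f - m) for f in fvals]
    z = sum(w)
    return lam, [wi / z for wi in w]

# Toy data: an observable f over five sampled states; match mean <f> = 1.0.
fvals = [0.0, 0.5, 1.0, 1.5, 2.0]
lam, w = maxent_weights(fvals, 1.0)
mean = sum(wi * f for wi, f in zip(w, fvals))
print(round(mean, 6))  # 1.0
```

With several constraints the scalar bisection becomes a multidimensional solve for the coefficient vector, which is the cumbersome iterative step the abstract refers to.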
Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.
2012-01-01
Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.
Communication methods, systems, apparatus, and devices involving RF tag registration
Burghard, Brion J [W. Richland, WA; Skorpik, James R [Kennewick, WA
2008-04-22
One technique of the present invention includes a number of Radio Frequency (RF) tags that each have a different identifier. Information is broadcast to the tags from an RF tag interrogator. This information corresponds to a maximum quantity of tag response time slots that are available. This maximum quantity may be less than the total number of tags. The tags each select one of the time slots as a function of the information and a random number provided by each respective tag. The different identifiers are transmitted to the interrogator from at least a subset of the RF tags.
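The slot-selection scheme described above can be sketched in a few lines. This is an illustrative simplification, not the patent's actual design: each tag here picks a slot purely from a local random draw within the broadcast slot count, and collided slots would be retried in a later round.

```python
import random

def collect_round(tag_ids, num_slots, seed=0):
    """Return slot -> list of tags that answered in that slot.
    The interrogator broadcasts num_slots; each tag picks one slot
    using its own random number (here a seeded PRNG for repeatability)."""
    rng = random.Random(seed)
    slots = {}
    for tag in tag_ids:
        s = rng.randrange(num_slots)       # tag's local random slot choice
        slots.setdefault(s, []).append(tag)
    return slots

tags = [f"tag-{i:03d}" for i in range(10)]
slots = collect_round(tags, num_slots=4)   # fewer slots than tags, as allowed
# Slots holding exactly one tag are read successfully; slots with
# collisions are resolved in later rounds with a fresh broadcast.
print(len(tags), sum(len(v) for v in slots.values()))  # 10 10
```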
Chen, Pei-Yu; Fedosejevs, Gunar; Tiscareño-López, Mario; Arnold, Jeffrey G
2006-08-01
Although several types of satellite data provide temporal information on land use at no cost, digital satellite data applications for agricultural studies are limited compared to applications for forest management. This study assessed the suitability of vegetation indices derived from the TERRA-Moderate Resolution Imaging Spectroradiometer (MODIS) sensor and SPOT-VEGETATION (VGT) sensor for identifying corn growth in western Mexico. Overall, the Normalized Difference Vegetation Index (NDVI) composites from the VGT sensor based on a bi-directional compositing method produced vegetation information most closely resembling actual crop conditions. The NDVI composites from the MODIS sensor exhibited saturated signals starting 30 days after planting, but corresponded to green leaf senescence in April. The temporal NDVI composites from the VGT sensor based on the maximum value method had a maximum plateau for 80 days, which masked the important crop transformation from the vegetative stage to the reproductive stage. The Enhanced Vegetation Index (EVI) composites from the MODIS sensor reached a maximum plateau 40 days earlier than the occurrence of maximum leaf area index (LAI) and maximum intercepted fraction of photosynthetic active radiation (fPAR) derived from in-situ measurements. The results of this study showed that the 250-m resolution MODIS data did not provide more accurate vegetation information for corn growth description than the 500-m and 1000-m resolution MODIS data.
Maximum-likelihood block detection of noncoherent continuous phase modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1993-01-01
This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.
Report #2007-P-00036, September 19, 2007. EPA does not have comprehensive information on the outcomes of the Total Maximum Daily Load (TMDL) program nationwide, nor national data on TMDL implementation activities.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
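The two decoders contrasted above can be illustrated exhaustively on a tiny Ising chain in a field (no annealer needed at this size; the couplings and fields below are made-up toy values). Maximum likelihood decoding returns the ground state; maximum entropy decoding takes the sign of each spin's Boltzmann-weighted average, which pools information from the excited states as well.

```python
import itertools
import math

def energy(spins, J, h):
    """Ising chain energy: -J sum s_i s_{i+1} - sum h_i s_i."""
    e = -sum(J * spins[i] * spins[i + 1] for i in range(len(spins) - 1))
    return e - sum(hi * si for hi, si in zip(h, spins))

def ml_decode(J, h, n):
    """Maximum likelihood = ground state (exhaustive search)."""
    return min(itertools.product([-1, 1], repeat=n),
               key=lambda s: energy(s, J, h))

def maxent_decode(J, h, n, beta):
    """Finite-temperature decoding: sign of each spin's thermal average."""
    mags, z = [0.0] * n, 0.0
    for s in itertools.product([-1, 1], repeat=n):
        w = math.exp(-beta * energy(s, J, h))
        z += w
        for i, si in enumerate(s):
            mags[i] += w * si
    mags = [m / z for m in mags]
    return tuple(1 if m > 0 else -1 for m in mags), mags

n, J = 6, 1.0
h = [0.6, -0.2, 0.5, 0.4, -0.1, 0.3]   # noisy local evidence per bit
ml = ml_decode(J, h, n)
me, mags = maxent_decode(J, h, n, beta=1.0)
print(ml)   # both decoders agree on this easy instance: all bits +1
print(me)
```

On harder instances near the error threshold the two decoders can disagree, which is where the finite-temperature approach gains its slight bit-error-rate advantage.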
Eco-label conveys reliable information on fish stock health to seafood consumers.
Gutiérrez, Nicolás L; Valencia, Sarah R; Branch, Trevor A; Agnew, David J; Baum, Julia K; Bianchi, Patricia L; Cornejo-Donoso, Jorge; Costello, Christopher; Defeo, Omar; Essington, Timothy E; Hilborn, Ray; Hoggarth, Daniel D; Larsen, Ashley E; Ninnes, Chris; Sainsbury, Keith; Selden, Rebecca L; Sistla, Seeta; Smith, Anthony D M; Stern-Pirlot, Amanda; Teck, Sarah J; Thorson, James T; Williams, Nicholas E
2012-01-01
Concerns over fishing impacts on marine populations and ecosystems have intensified the need to improve ocean management. One increasingly popular market-based instrument for ecological stewardship is the use of certification and eco-labeling programs to highlight sustainable fisheries with low environmental impacts. The Marine Stewardship Council (MSC) is the most prominent of these programs. Despite widespread discussions about the rigor of the MSC standards, no comprehensive analysis of the performance of MSC-certified fish stocks has yet been conducted. We compared status and abundance trends of 45 certified stocks with those of 179 uncertified stocks, finding that 74% of certified fisheries were above biomass levels that would produce maximum sustainable yield, compared with only 44% of uncertified fisheries. On average, the biomass of certified stocks increased by 46% over the past 10 years, whereas uncertified fisheries increased by just 9%. As part of the MSC process, fisheries initially go through a confidential pre-assessment process. When certified fisheries are compared with those that decline to pursue full certification after pre-assessment, certified stocks had much lower mean exploitation rates (67% of the rate producing maximum sustainable yield vs. 92% for those declining to pursue certification), allowing for more sustainable harvesting and in many cases biomass rebuilding. From a consumer's point of view this means that MSC-certified seafood is 3-5 times less likely to be subject to harmful fishing than uncertified seafood. Thus, MSC-certification accurately identifies healthy fish stocks and conveys reliable information on stock status to seafood consumers.
33 CFR 183.41 - Persons capacity: Outboard boats.
Code of Federal Regulations, 2010 CFR
2010-07-01
... § 183.35 for the boat minus the motor and control weight, battery weight (dry), and full portable fuel... control weight, battery weight, and full portable fuel tank weight, if any, shown in table 4 of subpart H of this part for the maximum horsepower capacity marked on the boat. Permanently installed fuel tanks...
Wind Tunnel Measurements of the Wake of a Full-Scale UH-60A Rotor in Forward Flight
NASA Technical Reports Server (NTRS)
Wadcock, Alan J.; Yamauchi, Gloria K.; Schairer, Edward T.
2013-01-01
A full-scale UH-60A rotor was tested in the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel in May 2010. The test was designed to acquire a suite of measurements to validate state-of-the-art modeling tools. Measurements include blade airloads (from a single pressure-instrumented blade), blade structural loads (strain gages), rotor performance (rotor balance and torque measurements), blade deformation (stereo-photogrammetry), and rotor wake measurements (Particle Image Velocimetry (PIV) and Retro-reflective Backward Oriented Schlieren (RBOS)). During the test, PIV measurements of flow field velocities were acquired in a stationary cross-flow plane located on the advancing side of the rotor disk at approximately 90 deg rotor azimuth. At each test condition, blade position relative to the measurement plane was varied. The region of interest (ROI) was 4-ft high by 14-ft wide and covered the outer half of the blade radius. Although PIV measurements were acquired in only one plane, much information can be gleaned by studying the rotor wake trajectory in this plane, especially when such measurements are augmented by blade airloads and RBOS data. This paper will provide a comparison between PIV and RBOS measurements of tip vortex position and vortex filament orientation for multiple rotor test conditions. Blade displacement measurements over the complete rotor disk will also be presented documenting blade-to-blade differences in tip-path-plane and providing additional information for correlation with PIV and RBOS measurements of tip vortex location. In addition, PIV measurements of tip vortex core diameter and strength will be presented. Vortex strength will be compared with measurements of maximum bound circulation on the rotor blade determined from pressure distributions obtained from 235 pressure sensors distributed over 9 radial stations.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... [Docket No. NHTSA-2012-0131; Notice 1] RIN 2127-AL16 Civil Penalties AGENCY: National Highway Traffic... proposes to increase the maximum civil penalty amounts for violations of motor vehicle safety requirements... and consumer information provisions. Specifically, this proposes increases in maximum civil penalty...
Code of Federal Regulations, 2010 CFR
2010-04-01
... father for any month in which you have in your care a child of the worker on whose record you are... (see §§ 404.304 and 404.333) are reduced first (if necessary) for the family maximum under § 404.403... reduction for the family maximum under § 404.403), is reduced or further reduced based on the number of...
In-flight source noise of an advanced full-scale single-rotation propeller
NASA Technical Reports Server (NTRS)
Woodward, Richard P.; Loffler, Irvin J.
1991-01-01
Flight tests to define the far-field tone source at cruise conditions have been completed on the full-scale SR-7L advanced turboprop, which was installed on the left wing of a Gulfstream II aircraft. These measurements defined source levels for input into long-distance propagation models to predict en route noise. In-flight data were taken for seven test cases. The measured sideline directivities showed expected maximum levels near 105 deg from the propeller upstream axis. However, azimuthal directivities based on the maximum observed sideline tone levels showed the highest levels below the aircraft. The tone level reduction associated with reductions in propeller tip speed is shown to be more significant in the horizontal plane than below the aircraft.
NASA Technical Reports Server (NTRS)
Hood, Manley J; White, James A
1933-01-01
Some preliminary results of full scale wind tunnel testing to determine the best means of reducing the tail buffeting and wing-fuselage interference of a low-wing monoplane are given. Data indicating the effects of an engine cowling, fillets, auxiliary airfoils of short span, a reflexed trailing edge, propeller slipstream, and various combinations of these features are included. The best all-round results were obtained by the use of fillets together with the National Advisory Committee for Aeronautics (NACA) cowling. This combination reduced the tail buffeting oscillations to one-fourth of their original amplitudes, increased the maximum lift 11 percent, decreased the minimum drag 9 percent, and increased the maximum ratio of lift to drag 19 percent.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
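For a stationary Markov chain, the information entropy production rate discussed above has the standard form EP = sum over i,j of pi_i P_ij log[(pi_i P_ij) / (pi_j P_ji)], which vanishes exactly when the chain is reversible (detailed balance) and is positive otherwise. A minimal sketch with made-up transition matrices, not spike-train data:

```python
import math

def stationary(P, iters=10000):
    """Stationary distribution by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_production(P):
    """EP = sum_ij pi_i P_ij * log((pi_i P_ij) / (pi_j P_ji))."""
    pi = stationary(P)
    ep = 0.0
    for i in range(len(P)):
        for j in range(len(P)):
            if P[i][j] > 0 and P[j][i] > 0:
                ep += pi[i] * P[i][j] * math.log(
                    (pi[i] * P[i][j]) / (pi[j] * P[j][i]))
    return ep

# A reversible chain (symmetric P, uniform stationary): EP vanishes.
P_rev = [[0.5, 0.25, 0.25],
         [0.25, 0.5, 0.25],
         [0.25, 0.25, 0.5]]
# An irreversible chain with a preferred cycle 0 -> 1 -> 2 -> 0: EP > 0.
P_irr = [[0.1, 0.8, 0.1],
         [0.1, 0.1, 0.8],
         [0.8, 0.1, 0.1]]
print(round(entropy_production(P_rev), 10))  # 0.0
print(entropy_production(P_irr) > 0)         # True
```

A positive EP is the irreversibility signature the paper studies in maximum entropy models fitted to spike trains.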
High-speed holographic system for full-field transient vibrometry of the human tympanic membrane
NASA Astrophysics Data System (ADS)
Dobrev, I.; Harrington, E. J.; Cheng, T.; Furlong, C.; Rosowski, J. J.
2014-07-01
Understanding of the human hearing process requires the quantification of the transient response of the human ear, and of the human tympanic membrane (TM or eardrum) in particular. Current state-of-the-art medical methods to quantify the transient acousto-mechanical response of the TM provide only averaged acoustic information or local information at a few points. This may be insufficient to fully describe the complex patterns unfolding across the full surface of the TM. Existing engineering systems for full-field nanometer measurements of transient events, typically based on holographic methods, constrain the maximum sampling speed and/or require complex experimental setups. We have developed and implemented a new high-speed (i.e., > 40 kfps) holographic system (HHS) with a hybrid spatio-temporal local correlation phase sampling method that allows quantification of the full-field nanometer-scale transient (i.e., > 10 kHz) displacement of the human TM. The HHS temporal accuracy and resolution are validated against a laser Doppler vibrometer (LDV) on both artificial membranes and human TMs. The high temporal (i.e., < 24 μs) and spatial (i.e., > 100k data points) resolution of our HHS enables simultaneous measurement of the time waveform over the full surface of the TM. These capabilities allow for quantification of spatially dependent motion parameters such as energy propagation delays and surface wave speeds, which can be used to infer local material properties across the surface of the TM. The HHS could provide a new tool for the investigation of the auditory system, with applications in medical research, in-vivo clinical diagnosis and hearing aid design.
NASA Astrophysics Data System (ADS)
Havemann, S.; Aumann, H. H.; Desouza-Machado, S. G.
2017-12-01
The HT-FRTC uses principal components which cover the spectrum at a very high spectral resolution, allowing very fast line-by-line-like, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. Using data from IASI and from the Airborne Research Interferometer Evaluation System (ARIES) on board the FAAM BAE 146 aircraft, variational retrievals in principal component space with HT-FRTC as the forward model have demonstrated that valuable information on temperature and humidity profiles and on cirrus cloud properties can be obtained simultaneously. The NASA/JPL/UMBC cloudy RTM inter-comparison project has been working on a global dataset consisting of 7377 AIRS spectra. Initial simulations with HT-FRTC for this dataset have been promising. A next step taken here is to investigate how sensitive the results are to different assumptions in the cloud modelling. One aspect of this is to study how assumptions about the microphysical and related optical properties of liquid/ice clouds affect the statistics of the agreement between model and observations. The other aspect concerns the cloud overlap scheme. Different schemes have been tested (maximum, random, maximum random). As the computational cost increases linearly with the number of cloud columns, it will be investigated whether there is an optimal number of columns beyond which there is little additional benefit to be gained. During daytime the high wave number channels of AIRS are affected by solar radiation. With full scattering calculations using a monochromatic version of the Edwards-Slingo radiation code, the HT-FRTC can model solar radiation reasonably well, but full scattering calculations are relatively expensive. Pure Chou scaling, on the other hand, cannot properly describe scattering of solar radiation by clouds and requires additional refinements.
NASA Astrophysics Data System (ADS)
Hwang, Ju Hyun; Lee, Hyun Jun; Shim, Yong Sub; Park, Cheol Hwee; Jung, Sun-Gyu; Kim, Kyu Nyun; Park, Young Wook; Ju, Byeong-Kwon
2015-01-01
Extremely low-haze light extraction from organic light-emitting diodes (OLEDs) was achieved by utilizing nanoscale corrugation, which was simply fabricated with plasma treatment and sonication. The haze of the nanoscale corrugation for light extraction (NCLE) corresponds to 0.21% for visible wavelengths, which is comparable to that of bare glass. The OLEDs with NCLE showed enhancements of 34.19% in current efficiency and 35.75% in power efficiency. Furthermore, the OLEDs with NCLE exhibited angle-stable electroluminescence (EL) spectra for different viewing angles, with no change in the full width at half maximum (FWHM) and peak wavelength. The flexibility of the polymer used for the NCLE and plasma treatment process indicates that the NCLE can be applied to large and flexible OLED displays. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr06547f
Optimal protocol for maximum work extraction in a feedback process with a time-varying potential
NASA Astrophysics Data System (ADS)
Kwon, Chulan
2017-12-01
The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.
Clean Water Act Approved Total Maximum Daily Load (TMDL) Documents
Information from Approved and Established TMDL Documents as well as TMDLs that have been Withdrawn. This includes the pollutants identified in the TMDL Document, the 303(d) Listed Water(s) that the TMDL Document addresses and the associated Cause(s) of Impairment. The National Total Maximum Daily Load (TMDL) Tracking System (NTTS) contains information on waters that are Not Supporting their designated uses. These waters are listed by the state as impaired under Section 303(d) of the Clean Water Act.
Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?
NASA Astrophysics Data System (ADS)
Robertson, J. Gordon
2017-08-01
Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases of less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or a relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth, well-sampled, symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect, and most Line Spread Functions will exhibit some aliasing at this sample frequency.
While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for moderate signal/noise work, it is preferable to carry out simulations for any actual or proposed Line Spread Function to find the effects of various sampling frequencies. Where spectrograph end-users have a choice of sampling frequencies, through on-chip binning and/or spectrograph configurations, it is desirable that the instrument user manual should include an examination of the effects of the various choices.
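The bias claim can be checked with a minimal simulation (not from the paper; the Gaussian LSF, 32-pixel window, phase values, and tolerance are our own assumptions): integrate a unit-area Gaussian line over finite-width pixels at 2 pixels per FWHM and measure the phase-dependent centroid bias.

```python
import numpy as np
from math import erf, sqrt, log

def pixellate_gaussian(center, fwhm, n_pix, pitch):
    """Flux recorded by each finite-width pixel for a unit-area Gaussian
    LSF: the profile integrated across each pixel, as a detector measures."""
    sigma = fwhm / (2.0 * sqrt(2.0 * log(2.0)))
    edges = (np.arange(n_pix + 1) - n_pix / 2.0) * pitch
    cdf = np.array([0.5 * (1.0 + erf((e - center) / (sigma * sqrt(2.0))))
                    for e in edges])
    return np.diff(cdf)

def centroid(samples, pitch):
    """Flux-weighted centroid of the pixellated profile."""
    x = (np.arange(len(samples)) - (len(samples) - 1) / 2.0) * pitch
    return float(np.sum(x * samples) / np.sum(samples))

# 2 pixels per FWHM, i.e. pitch = FWHM / 2, for a smooth symmetric Gaussian.
fwhm, pitch = 1.0, 0.5
phases = (0.0, 0.25, 0.5)   # line-centre offset as a fraction of a pixel
bias = [centroid(pixellate_gaussian(ph * pitch, fwhm, 32, pitch), pitch)
        - ph * pitch for ph in phases]
print([abs(b) < 1e-6 for b in bias])   # negligible bias for this benign LSF
```

Repeating the experiment with an asymmetric or structured LSF makes phase-dependent bias reappear, in line with the paper's warning.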
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of groundwater contaminant source parameters. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
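The design criterion can be sketched in a few lines: a nested Monte Carlo estimate of the expected relative entropy (information gain) for candidate sampling locations. The 1-D forward model, noise level, and prior below are hypothetical stand-ins for the actual contaminant transport solver and its sparse-grid surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: concentration at well location x from a source of
# unknown strength theta (a hypothetical 1-D stand-in for the transport solver).
def forward(theta, x):
    return theta * np.exp(-0.5 * x)

sigma = 0.1                                # measurement noise std
theta_prior = rng.normal(1.0, 0.5, 2000)   # prior samples of source strength

def expected_info_gain(x, n_outer=200):
    """Monte Carlo estimate of the expected relative entropy (KL divergence
    from prior to posterior) for a measurement taken at location x."""
    gain = 0.0
    for theta_true in theta_prior[:n_outer]:
        y = forward(theta_true, x) + rng.normal(0.0, sigma)
        # unnormalised Gaussian likelihood evaluated at the prior samples
        w = np.exp(-0.5 * ((y - forward(theta_prior, x)) / sigma) ** 2)
        w /= w.sum()
        nz = w > 0
        # KL(posterior || prior) with equal-weight prior samples
        gain += np.sum(w[nz] * np.log(w[nz] * len(theta_prior)))
    return gain / n_outer

# A near-field well (small x) sees a stronger signal, so it should be
# more informative than a far-field well.
candidates = [0.5, 4.0]
gains = [expected_info_gain(x) for x in candidates]
best = candidates[int(np.argmax(gains))]
print(best)   # the near-field well wins
```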
Advances in U.S. Land Imaging Capabilities
NASA Astrophysics Data System (ADS)
Stryker, T. S.
2017-12-01
Advancements in Earth observations, cloud computing, and data science are improving everyday life. Information from land-imaging satellites, such as the U.S. Landsat system, helps us to better understand the changing landscapes where we live, work, and play. This understanding builds capacity for improved decision-making about our lands, waters, and resources, driving economic growth, protecting lives and property, and safeguarding the environment. The USGS is fostering the use of land remote sensing technology to meet local, national, and global challenges. A key dimension to meeting these challenges is the full, free, and open provision of land remote sensing observations for both public and private sector applications. To achieve maximum impact, these data must also be easily discoverable, accessible, and usable. The presenter will describe the USGS Land Remote Sensing Program's current capabilities and future plans to collect and deliver land remote sensing information for societal benefit. He will discuss these capabilities in the context of national plans and policies, domestic partnerships, and international collaboration. The presenter will conclude with examples of how Landsat data is being used on a daily basis to improve lives and livelihoods.
Fragrance materials such as synthetic musks in aqueous samples are normally determined by gas chromatography/mass spectrometry in the selected ion monitoring (SIM) mode to provide maximum sensitivity after liquid-liquid extraction of 1-L samples. Full-scan mass spectra are requ...
38 CFR 17.606 - Award procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., whether part-time students or full-time students, but that amount may not exceed the maximum amount provided for in 38 U.S.C. 7613(b). (4) In the case of a part-time student who is a part-time employee, the.... (5) A full stipend may be paid only for the months the part-time student is attending classes...
NASA Technical Reports Server (NTRS)
Stickle, George W; Naiman, Irven; Crigler, John L
1940-01-01
Report presents the results of an investigation of full-scale nose-slot cowlings conducted in the NACA 20-foot wind tunnel to furnish information on the pressure drop available for cooling. Engine conductances from 0 to 0.12 and exit-slot conductances from 0 to 0.30 were covered. Two basic nose shapes were tested to determine the effect of the radius of curvature of the nose contour; the nose shape with the smaller radius of curvature gave the higher pressure drop across the engine. The best axial location of the slot for low-speed operation was found to be in the region of maximum negative pressure for the basic shape for the particular operating condition. The effect of the pressure operating condition on the available cooling pressure is shown.
Electroencephalography as a post-stroke assessment method: An updated review.
Monge-Pereira, E; Molina-Rueda, F; Rivas-Montero, F M; Ibáñez, J; Serrano, J I; Alguacil-Diego, I M; Miangolarra-Page, J C
Given that stroke is currently a serious problem in the population, employing more reliable and objective techniques for determining diagnosis and prognosis is necessary in order to enable effective clinical decision-making. EEG is a simple, low-cost, non-invasive tool that can provide information about the changes occurring in the cerebral cortex during the recovery process after stroke. EEG provides data on the evolution of cortical activation patterns which can be used to establish a prognosis geared toward harnessing each patient's full potential. This strategy can be used to prevent compensation and maladaptive plasticity, redirect treatments, and develop new interventions that will let stroke patients reach their new maximum motor levels. Copyright © 2014 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
The dependence of graphene Raman D-band on carrier density.
Liu, Junku; Li, Qunqing; Zou, Yuan; Qian, Qingkai; Jin, Yuanhao; Li, Guanhong; Jiang, Kaili; Fan, Shoushan
2013-01-01
Raman spectroscopy has been an integral part of graphene research and can provide information about graphene structure, electronic characteristics, and electron-phonon interactions. In this study, the characteristics of the graphene Raman D-band that vary with carrier density are studied in detail, including the frequency, full width at half maximum, and intensity. We find that the Raman D-band frequency increases for hole doping and decreases for electron doping. The Raman D-band intensity increases when the Fermi level approaches half of the excitation energy and is higher in the case of electron doping than in that of hole doping. These variations can be explained by electron-phonon interaction theory and quantum interference between different Raman pathways in graphene. The intensity ratio of the Raman D- and G-bands, which is important for defect characterization in graphene, shows a strong dependence on carrier density.
Generation of subnanosecond electron beams in air at atmospheric pressure
NASA Astrophysics Data System (ADS)
Kostyrya, I. D.; Tarasenko, V. F.; Baksht, E. Kh.; Burachenko, A. G.; Lomaev, M. I.; Rybka, D. V.
2009-11-01
Optimum conditions for the generation of runaway electron beams with maximum current amplitudes and densities in nanosecond pulsed discharges in air at atmospheric pressure are determined. A supershort avalanche electron beam (SAEB) with a current amplitude of ~30 A, a current density of ~20 A/cm2, and a pulse full width at half maximum (FWHM) of ~100 ps has been observed behind the output foil of an air-filled diode. It is shown that the position of the SAEB current maximum relative to the voltage pulse front exhibits a time shift that varies when the small-size collector is moved over the foil surface.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, the two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information is available for only one of the two objects, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information.
The various usual methods of finding a maximum P (sub c) are then of no use, because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, given an assumption of no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70-meter radius; the maximum (degenerate ellipse) P (sub c) is then about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made with respect to this problem by realizing that, while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
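The degenerate-ellipse bound can be sketched in closed form (our reconstruction, not the paper's code: a 1-D Gaussian of unknown sigma along the miss vector, with the at-risk radius R much smaller than the miss distance d). Maximizing over sigma gives sigma = d, and the resulting formula reproduces the figures quoted above.

```python
import math

def max_pc_degenerate(miss_distance_m, hard_body_radius_m):
    """Upper-bound collision probability for a degenerate error ellipse
    lying along the miss vector, maximized over the unknown sigma.

    For a 1-D Gaussian of std sigma along the miss vector, the probability
    mass inside the at-risk disc of radius R at distance d is approximately
    (2R / (sqrt(2*pi)*sigma)) * exp(-d**2 / (2*sigma**2)) for R << d.
    Setting the derivative with respect to sigma to zero gives sigma = d,
    which yields the closed form below.
    """
    d, R = miss_distance_m, hard_body_radius_m
    return (2.0 * R) / (math.sqrt(2.0 * math.pi) * d) * math.exp(-0.5)

# The numbers quoted in the text: a 70 m at-risk radius.
print(round(max_pc_degenerate(25_000.0, 70.0), 5))   # ~0.00136 at 25 km
print(round(max_pc_degenerate(40_000.0, 70.0), 5))   # ~0.00085 at 40 km
```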
NASA Astrophysics Data System (ADS)
Casas, Albert; Cosentino, Pietro L.; Fiandaca, Gianluca; Himi, Mahjoub; Macias, Josep M.; Martorana, Raffaele; Muñoz, Andreu; Rivero, Lluís; Sala, Roger; Teixell, Imma
2018-04-01
An integrated geophysical survey has been conducted at Tarragona's Cathedral (Catalonia, NE Spain) with the aim of confirming the potential occurrence of archaeological remains of the Roman temple dedicated to the Emperor Augustus. Many hypotheses have been proposed about its possible location, the most recent ones pointing to the inner part of the Cathedral, which is one of the most renowned temples of Spain (twelfth century), evolving from Romanesque to Gothic styles. A geophysical project including electrical resistivity tomography (ERT) and ground probing radar (GPR) was planned over 1 year, considering the administrative and logistic difficulties of such a project inside a cathedral of religious veneration. Finally, both ERT and GPR surveys were conducted during a week of intensive overnight work that provided detailed information on existing subsurface structures. The ERT method was applied using different techniques and arrays, ranging from standard Wenner-Schlumberger 2D sections to full 3D electrical imaging with the advanced Maximum Yield Grid array. Electrical resistivity data were recorded extensively, making many thousands of apparent resistivity data available for a complete 3D image after full inversion. As a result, some significant buried structures were revealed, providing conclusive information for archaeologists, and GPR results provided additional information about the shallowest structures. The geophysical results were clear enough to persuade the religious authorities and archaeologists to conduct selected excavations in the most promising areas, which confirmed the interpretation of the geophysical data. In conclusion, the significant buried structures revealed by geophysical methods under the cathedral were confirmed by archaeological digging as the basement of the impressive Roman temple that headed the Provincial Forum of Tarraco, seat of the Concilium of the Hispania Citerior Province.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
NASA Astrophysics Data System (ADS)
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed that enables maximization of the density of entropy production with respect to the enzyme rate constants for an enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which corresponds to the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
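The link between maximal entropy production and a uniform state distribution can be illustrated numerically. The sketch below uses a hypothetical minimal two-state enzyme cycle (not the paper's glucose isomerase model): the entropy production density is maximized over the rate constants under a fixed thermodynamic force (a Gibbs-type constraint) and a fixed sum of rate constants (a stand-in conservation constraint), and the optimum lands on the uniform, maximum-Shannon-entropy occupation of the enzyme states.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-state enzyme cycle E <-> ES with rates k = (k1, k2, km1, km2).
# The steady-state cycle flux is
#   J = (k1*k2 - km1*km2) / (k1 + k2 + km1 + km2),
# and the entropy production density (in units of k_B) is
#   sigma = J * ln((k1*k2) / (km1*km2)).
K_FORCE = 50.0   # fixed thermodynamic force (Gibbs constraint)
TOTAL = 4.0      # fixed sum of rate constants (stand-in mass constraint)

def neg_sigma(k):
    k1, k2, km1, km2 = k
    J = (k1 * k2 - km1 * km2) / (k1 + k2 + km1 + km2)
    return -J * np.log((k1 * k2) / (km1 * km2))

cons = [
    {"type": "eq", "fun": lambda k: k[0] * k[1] - K_FORCE * k[2] * k[3]},
    {"type": "eq", "fun": lambda k: k.sum() - TOTAL},
]
res = minimize(neg_sigma, x0=np.array([1.5, 0.8, 0.9, 0.8]),
               constraints=cons, bounds=[(1e-6, None)] * 4)
k1, k2, km1, km2 = res.x

# Steady-state occupation of state E; ~0.5 means the two enzyme states are
# equally occupied, i.e. the Shannon entropy of the state distribution is maximal.
p_E = (k2 + km1) / res.x.sum()
print(round(p_E, 3))
```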
Design and Use of Microphone Directional Arrays for Aeroacoustic Measurements
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Brooks, Thomas F.; Hunter, William W., Jr.; Meadows, Kristine R.
1998-01-01
An overview of the development of two microphone directional arrays for aeroacoustic testing is presented. These arrays were specifically developed to measure airframe noise in the NASA Langley Quiet Flow Facility. A large aperture directional array using 35 flush-mounted microphones was constructed to obtain high resolution noise localization maps around airframe models. This array possesses a maximum diagonal aperture size of 34 inches. A unique logarithmic spiral layout design was chosen for the targeted frequency range of 2-30 kHz. Complementing the large array is a small aperture directional array, constructed to obtain spectra and directivity information from regions on the model. This array, possessing 33 microphones with a maximum diagonal aperture size of 7.76 inches, is easily moved about the model in elevation and azimuth. Custom microphone shading algorithms have been developed to provide a frequency- and position-invariant sensing area from 10-40 kHz, with an overall targeted frequency range for the array of 5-60 kHz. Both arrays are employed in acoustic measurements of a 6-percent-of-full-scale airframe model consisting of a main-element NACA 632-215 wing section with a 30 percent chord half-span flap. Representative data obtained from these measurements are presented, along with details of the array calibration and data post-processing procedures.
NASA Astrophysics Data System (ADS)
Ciuzas, Darius; Prasauskas, Tadas; Krugly, Edvinas; Sidaraviciute, Ruta; Jurelionis, Andrius; Seduikyte, Lina; Kauneliene, Violeta; Wierzbicka, Aneta; Martuzevicius, Dainius
2015-10-01
The study presents the characterization of dynamic patterns of indoor particulate matter (PM) during various pollution episodes for real-time IAQ management. The variation of PM concentrations was assessed for 20 indoor activities, including cooking-related sources, other thermal sources, and personal care and household products. The pollution episodes were modelled in a full-scale test chamber representing a typical living room with forced ventilation of 0.5 h-1. In most of the pollution episodes, the maximum concentration of particles in the exhaust air was reached within a few minutes. The most rapid increase in particle concentration occurred during thermal-source episodes such as candle, cigarette, and incense stick burning and cooking-related sources, while the slowest decay of concentrations was associated with sources emitting ultrafine particle precursors, such as furniture polisher spraying and floor wet mopping with detergent. Placement of the particle sensors in the ventilation exhaust vs. in the centre of the ceiling yielded comparable results for both measured maximum concentrations and temporal variations, indicating that both locations were suitable for the placement of sensors for the management of IAQ. The obtained data provide information that may be utilized when considering measurements of aerosol particles as indicators for the real-time management of IAQ.
Molecular and Clinical Characterization of Chikungunya Virus Infections in Southeast Mexico
Martínez-Landeros, Erik; Delgado-Gallegos, Juan L.; Caballero-Sosa, Sandra; Malo-García, Iliana R.
2018-01-01
Chikungunya fever is an arthropod-borne infection caused by Chikungunya virus (CHIKV). Even though clinical features of Chikungunya fever in the Mexican population have been described before, detailed information is lacking. The aim of this study was to perform a full description of the clinical features in confirmed Chikungunya-infected patients and to describe the molecular epidemiology of CHIKV. We evaluated febrile patients who sought medical assistance in Tapachula, Chiapas, Mexico, from June through July 2015. Infection was confirmed with molecular and serological methods. Viruses were isolated and the E1 gene was sequenced. Phylogeny reconstruction was inferred using maximum-likelihood and maximum clade credibility approaches. We studied 52 patients with confirmed CHIKV infection. They were more likely to have wrist, metacarpophalangeal, and knee arthralgia. Two combinations of clinical features were obtained to differentiate between Chikungunya fever and acute undifferentiated febrile illness. We obtained 10 CHIKV E1 sequences that grouped with the Asian lineage. Seven strains diverged from those formerly reported. Patients infected with the divergent CHIKV strains showed a broader spectrum of clinical manifestations. We defined the complete clinical features of Chikungunya fever in patients from Southeastern Mexico. Our results demonstrate co-circulation of different CHIKV strains in the state of Chiapas. PMID:29747416
Behavior of Shape Memory Epoxy Foams in Microgravity: Experimental Results of STS-134 Mission
NASA Astrophysics Data System (ADS)
Santo, Loredana; Quadrini, Fabrizio; Squeo, Erica Anna; Dolce, Ferdinando; Mascetti, Gabriele; Bertolotto, Delfina; Villadei, Walter; Ganga, Pier Luigi; Zolesi, Valfredo
2012-09-01
Shape memory epoxy foams were used for an experiment on the International Space Station to evaluate the feasibility of their use for building multi-functional composite structures. A small piece of equipment was designed and built to simulate the actuation of simple devices in micro-gravity conditions: three different configurations (compression, bending and torsion) were chosen during the memory step of the foams so as to produce their recovery on the ISS. Two systems were used for the experimentation to avoid damage to the flight model during laboratory tests; however, a single ground experiment was also performed on the flight model before the mission. Micro-gravity does not affect the ability of the foams to recover their shape, but it poses strong limits on the heating system design because of the difference in heat transfer on earth and in orbit. A full recovery of the foam samples was not achieved because of limits on the maximum allowable temperature on the ISS for safety reasons; nevertheless, a 70% recovery was measured at a temperature of 110°C. Ground laboratory experiments showed that 100% recovery could be reached by increasing the maximum temperature to 120°C. The experiment results have provided much useful information for the design of a new structural composite actuator using shape memory foams.
Gutierrez, Laura; Romero, Iris B.; Moyano, Daniela L.; Poggio, Rosana; Calandrelli, Matías; Mores, Nora; Rubinstein, Adolfo; Irazola, Vilma
2017-01-01
The maximum content of sodium in selected processed foods (PF) in Argentina was limited by a law enacted in 2013. Data about intake of these and other foods are necessary for policy planning, implementation, evaluation, and monitoring. We examined data from the CESCAS I population-based cohort study to assess the main dietary sources among PF and the frequency of discretionary salt use by sex, age, and education attainment, before full implementation of the regulations in 2015. We used a validated 34-item FFQ (Food Frequency Questionnaire) to assess PF intake and discretionary salt use. Among 2127 adults in two Argentinean cities, aged 35–76 years, mean salt intake from selected PFs was 4.7 g/day, higher among male and low-education subgroups. Categories of foods with regulated maximum limits provided nearly half of the sodium intake from PFs. Use of salt (always/often) at the table and during cooking was reported by 9% and 73% of the population, respectively, with higher proportions among young people. Reducing salt consumption to the target of 5 g/day may require adjustments to the current regulation (reducing targets, including other food categories), as well as reinforcing strategies such as education campaigns, labeling, and voluntary agreement with bakeries. PMID:28858263
First arrival time picking for microseismic data based on DWSW algorithm
NASA Astrophysics Data System (ADS)
Li, Yue; Wang, Yue; Lin, Hongbo; Zhong, Tie
2018-03-01
The first arrival time picking is a crucial step in microseismic data processing. When the signal-to-noise ratio (SNR) is low, however, it is difficult to pick the first arrival time accurately with traditional methods. In this paper, we propose the double-sliding-window SW (DWSW) method based on the Shapiro-Wilk (SW) test. The DWSW method detects the first arrival time by making full use of the differences in statistical properties between background noise and effective signals. Specifically, we take the moment at which the statistic of our method reaches its maximum as the first arrival time of the microseismic data. Hence, our method requires no threshold selection, which makes the algorithm simpler to apply when the SNR of microseismic data is low. To verify the reliability of the proposed method, a series of experiments was performed on both synthetic and field microseismic data. Our method is compared with the traditional short-time and long-time average (STA/LTA) method, the Akaike information criterion, and the kurtosis method. Analysis results indicate that the accuracy rate of the proposed method is superior to that of the other three methods when the SNR is as low as -10 dB.
Radiological effluents released from US continental tests, 1961 through 1992. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoengold, C.R.; DeMarre, M.E.; Kirkwood, E.M.
1996-08-01
This report documents all continental tests from September 15, 1961, through September 23, 1992, from which radioactive effluents were released. The report includes both updated information previously published in the publicly available May 1990 report, DOE/NV-317, "Radiological Effluents Released from Announced US Continental Tests 1961 through 1988", and effluent release information on formerly unannounced tests. General information provided for each test includes the date, time, location, type of test, sponsoring laboratory and/or agency or other sponsor, depth of burial, purpose, yield or yield range, extent of release (onsite only or offsite), and category of release (detonation-time versus post-test operations). Where a test with simultaneous detonations is listed, location, depth of burial, and yield information are given for each detonation if applicable, as well as the specific source of the release. A summary of each release incident by type of release is included. For a detonation-time release, the effluent curies are expressed at R+12 hours. For controlled releases from tunnel tests, the effluent curies are expressed both at the time of release and at R+12 hours. All other types are listed at the time of the release. In addition, a qualitative statement of the isotopes in the effluent is included for detonation-time and controlled releases, and a quantitative listing is included for all other types. Offsite release information includes the cloud direction, the maximum activity detected in the air offsite, the maximum gamma exposure rate detected offsite, the maximum iodine level detected offsite, and the maximum distance at which radiation was detected offsite. A release summary includes whatever other pertinent information is available for each release incident. This document includes effluent release information for 433 tests, some of which have simultaneous detonations. However, only 52 of these are designated as having offsite releases.
ERIC Educational Resources Information Center
Kelderman, Henk
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
48 CFR 39.103 - Modular contracting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CATEGORIES OF CONTRACTING ACQUISITION OF INFORMATION TECHNOLOGY General 39.103 Modular contracting. (a) This section implements Section 5202, Incremental Acquisition of Information Technology, of the Clinger-Cohen... technology. Consistent with the agency's information technology architecture, agencies should, to the maximum...
Regularized maximum pure-state input-output fidelity of a quantum channel
NASA Astrophysics Data System (ADS)
Ernst, Moritz F.; Klesse, Rochus
2017-12-01
As a toy model for the capacity problem in quantum information theory we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν∞(N) of the channel. For N being a Pauli channel, we find that both quantities are equal.
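For the Pauli-channel case the maximum output ∞-norm has a simple closed form that can be checked numerically (the probabilities below are our own example values, not from the paper): on the Bloch sphere a Pauli channel contracts each axis by a factor λ_i, so the largest output eigenvalue over pure inputs is (1 + max_i |λ_i|)/2.

```python
import numpy as np

# Pauli channel N(rho) = p0*rho + sum_i p_i s_i rho s_i on one qubit.
# On the Bloch vector it acts diagonally: r -> (lx*rx, ly*ry, lz*rz) with
# lx = p0 + px - py - pz (and cyclically), so the maximum output
# infinity-norm over pure inputs is (1 + max_i |l_i|) / 2.
p = np.array([0.70, 0.15, 0.10, 0.05])   # (p0, px, py, pz), example values
lx = p[0] + p[1] - p[2] - p[3]
ly = p[0] - p[1] + p[2] - p[3]
lz = p[0] - p[1] - p[2] + p[3]
nu_inf = (1.0 + max(abs(lx), abs(ly), abs(lz))) / 2.0

# Brute-force check: sweep pure input states |psi> over the Bloch sphere.
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]

def channel(rho):
    return p[0] * rho + sum(pi * s @ rho @ s for pi, s in zip(p[1:], sig))

best = 0.0
for th in np.linspace(0, np.pi, 60):
    for ph in np.linspace(0, 2 * np.pi, 120):
        psi = np.array([np.cos(th / 2), np.exp(1j * ph) * np.sin(th / 2)])
        rho = np.outer(psi, psi.conj())
        best = max(best, np.linalg.eigvalsh(channel(rho)).max())

print(abs(best - nu_inf) < 1e-3)   # the sweep agrees with the closed form
```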
NASA Astrophysics Data System (ADS)
Chew, Z. J.; Zhu, M.
2015-12-01
A maximum power point tracking (MPPT) scheme that tracks the open-circuit voltage from a piezoelectric energy harvester using a differentiator is presented in this paper. The MPPT controller is implemented using a low-power analogue differentiator and comparators, without the need for sensing circuitry or a power-hungry controller. This proposed MPPT circuit is used to control a buck converter, which serves as a power management module in conjunction with a full-wave bridge diode rectifier. The performance of this MPPT control scheme is verified by using the prototyped circuit to track the maximum power point of a macro-fiber composite (MFC) as the piezoelectric energy harvester. The MFC was bonded on a composite material and the whole specimen was subjected to various strain levels at frequencies from 10 to 100 Hz. Experimental results showed that the implemented full-analogue MPPT controller has a tracking efficiency between 81% and 98.66% independent of the load, and consumes an average power of 3.187 μW at 3 V during operation.
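The rationale for regulating to a fixed fraction of the open-circuit voltage can be seen from a Thevenin-style sketch of the rectified harvester (Voc and Rs below are hypothetical values, not measurements from the paper): power transfer peaks when the converter input is held at half the open-circuit voltage.

```python
import numpy as np

# Thevenin-style sketch of the rectified piezo harvester: open-circuit
# voltage Voc behind an effective source resistance Rs. Power delivered
# to a converter input held at voltage V is
#   P(V) = V * (Voc - V) / Rs,
# which peaks at V = Voc / 2 -- the fixed fraction that an
# open-circuit-voltage MPPT scheme regulates to.
Voc, Rs = 6.0, 1000.0
V = np.linspace(0.0, Voc, 601)
P = V * (Voc - V) / Rs
v_mpp = V[np.argmax(P)]
print(round(v_mpp, 6))   # Voc / 2
```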
Efficient robust doubly adaptive regularized regression with applications.
Karunamuni, Rohana J; Kong, Linglong; Tu, Wei
2018-01-01
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
32 CFR 286.22 - General provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF INFORMATION ACT PROGRAM DOD FREEDOM OF INFORMATION ACT PROGRAM REGULATION Release and Processing Procedures § 286.22 General provisions. (a) Public information. (1) Since the policy of the Department of Defense is to make the maximum amount of information available to the public consistent with its other...
Boerner, Katelynn E; Noel, Melanie; Birnie, Kathryn A; Caes, Line; Petter, Mark; Chambers, Christine T
2016-07-01
The cold pressor task (CPT) is increasingly used to induce experimental pain in children, but the specific methodology of the CPT is quite variable across pediatric studies. This study examined how subtle variations in CPT methodology (e.g., provision of low- or high-threat information regarding the task; provision or omission of maximum immersion time) may influence children's and parents' perceptions of the pain experience. Forty-eight children (8 to 14 years) and their parents were randomly assigned to receive information about the CPT that varied on 2 dimensions, prior to completing the task: (i) threat level: high-threat (task described as very painful, high pain expressions depicted) or low-threat (standard CPT instructions provided, low pain expressions depicted); (ii) ceiling: informed (provided maximum immersion time) or uninformed (information about maximum immersion time omitted). Parents and children in the high-threat condition expected greater child pain, and these children reported higher perceived threat of pain and state pain catastrophizing. For children in the low-threat condition, an informed ceiling was associated with less state pain catastrophizing during the CPT. Pain intensity, tolerance, and fear during the CPT did not differ by experimental group, but were predicted by child characteristics. Findings suggest that provision of threatening information may impact anticipatory outcomes, but experienced pain was better explained by individual child variables. © 2015 World Institute of Pain.
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
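The degenerate functional form shared by the three entropy concepts for multinomial processes can be computed directly; a minimal Python sketch (the function name is illustrative, not from the paper):

```python
import math

def shannon_gibbs_entropy(p):
    """H(p) = -sum_i p_i * log(p_i): the common functional form taken by the
    thermodynamic (Boltzmann-Gibbs), information-theoretic (Shannon), and
    maximum-entropy-principle entropies for multinomial (i.i.d.) processes.
    Terms with p_i = 0 contribute zero by convention."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# The uniform distribution maximizes H: for 4 equiprobable states, H = log 4.
print(shannon_gibbs_entropy([0.25, 0.25, 0.25, 0.25]))  # ≈ 1.3863 (= log 4)
```

For the history-dependent processes discussed in the abstract (e.g. Pólya urns or SSR processes), the paper's point is precisely that this single functional no longer serves all three roles.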
Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy
Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao
2016-01-01
Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We performed our method on five different datasets and compared our method to five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000
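The abstract does not reproduce the MRMSn formulas, but the general idea of selecting regulator genes by a first-order incremental (greedy) search over an information-theoretic score can be sketched as follows. This Python sketch uses an mRMR-style relevance-minus-redundancy surrogate; the function names and the exact trade-off are assumptions, not the authors' relevance/significance criterion:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def select_regulators(target, candidates, k):
    """Greedy first-order incremental search: at each step, add the candidate
    gene whose (discretized) expression profile scores highest on relevance
    to the target minus average redundancy with already-selected regulators."""
    selected = []
    while len(selected) < k and len(selected) < len(candidates):
        def score(name):
            relevance = mutual_information(candidates[name], target)
            redundancy = sum(mutual_information(candidates[name], candidates[s])
                             for s in selected) / max(len(selected), 1)
            return relevance - redundancy
        best = max((name for name in candidates if name not in selected), key=score)
        selected.append(best)
    return selected
```

A candidate whose profile perfectly tracks the target is picked first, while uninformative (constant) profiles score zero relevance.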
Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons.
Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J; Mitter, Sanjoy K
2014-10-01
We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state and operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamics arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-13
... DEPARTMENT OF EDUCATION Office of Special Education and Rehabilitative Services; Overview... Secretary for Special Education and Rehabilitative Services may change the maximum amount through a notice... Secretary for Special Education and Rehabilitative Services may change the maximum project period through a...
The Anatomy of AP1000 Mono-Block Low Pressure Rotor Forging
NASA Astrophysics Data System (ADS)
Jin, Jia-yu; Rui, Shou-tai; Wang, Qun
AP1000 mono-block low pressure (LP) rotor forgings for nuclear power stations have the maximum ingot weight, maximum diameter, and the highest technical requirements. They confront many technical problems during the manufacturing process, such as composition segregation and control of inclusions in the large ingot, core compaction during forging, and control of grain size and mechanical performance. The rotor forging was anatomized to evaluate the manufacturing level of CFHI. This article introduces the anatomical results of this forging. The contents include chemical composition, mechanical properties, inclusions, grain size, and other aspects from the full length and full cross-section of this forging. The fluctuation of mechanical properties, uniformity of microstructure, and purity of chemical composition were emphasized. The results show that the overall performance of this rotor forging is particularly satisfying.
EXERGY AND FISHER INFORMATION AS ECOLOGICAL INDEXES
Ecological indices are used to provide summary information about a particular aspect of ecosystem behavior. Many such indices have been proposed and here we investigate two: exergy and Fisher Information. Exergy, a thermodynamically based index, is a measure of maximum amount o...
48 CFR 245.7304 - Informal bid procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Inventory 245.7304 Informal bid procedures. (a) Upon approval of the plant clearance officer, the contractor...— (1) Maximum practical competition is maintained; (2) Sources solicited are recorded; and (3) Informal... plant clearance officer prior to soliciting bids from other prospective bidders. ...
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. 
The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
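The Gaussian Fisher matrix forecast being compared against the full likelihood analysis follows a simple recipe: derivative outer products weighted by the inverse data covariance. A toy Python sketch (a straight-line model with unit errors, not the shear power spectrum; names are illustrative):

```python
import numpy as np

def fisher_matrix(model, theta0, sigma, x, eps=1e-6):
    """Fisher matrix for a Gaussian likelihood with diagonal data covariance:
    F_ij = sum_k (dm_k/dtheta_i)(dm_k/dtheta_j) / sigma_k^2.
    Derivatives are taken by central finite differences around theta0.
    The marginalized 1-sigma error on parameter i is sqrt((F^-1)_ii)."""
    p = len(theta0)
    derivs = []
    for i in range(p):
        tp = np.array(theta0, dtype=float)
        tm = np.array(theta0, dtype=float)
        tp[i] += eps
        tm[i] -= eps
        derivs.append((model(tp, x) - model(tm, x)) / (2 * eps))
    F = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            F[i, j] = np.sum(derivs[i] * derivs[j] / sigma**2)
    return F

# Toy data model: m(x) = a + b*x observed at 100 points with unit errors.
x = np.linspace(0.0, 1.0, 100)
F = fisher_matrix(lambda th, xx: th[0] + th[1] * xx, [1.0, 2.0], np.ones_like(x), x)
marginal_errors = np.sqrt(np.diag(np.linalg.inv(F)))
```

The paper's proposed fix amounts to replacing the assumed diagonal covariance with a covariance matrix estimated from the mock surveys, which inflates the resulting confidence volumes.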
Fragrance materials such as synthetic musks in aqueous samples are normally determined by gas chromatography/mass spectrometry in the selected ion monitoring (SIM) mode to provide maximum sensitivity after liquid-liquid extraction of 1-L samples. Full-scan mass spectra are requ...
NASA Technical Reports Server (NTRS)
Land, Norman S.; Zeck, Howard
1947-01-01
Tests of a 1/7 size model of the Grumman XJR2F-1 amphibian were made in Langley tank no.1 to examine the landing behavior in rough water and to measure the normal and angular accelerations experienced by the model during these landings. All landings were made normal to the direction of wave advance, a condition assumed to produce the greatest accelerations. Wave heights of 4.4 and 8.0 inches (2.5 and 4.7 ft, full size) were used in the tests and the wave lengths were varied between 10 and 50 feet (70 and 350 ft, full size). Maximum normal accelerations of about 6.5g were obtained in 4.4 inch waves and 8.5g were obtained in 8.0 inch waves. A maximum angular acceleration corresponding to 16 radians per second per second, full size, was obtained in the higher waves. The data indicate that the airplane will experience its greatest accelerations when landing in waves of about 20 feet (140 ft, full size) in length.
NASA Technical Reports Server (NTRS)
Noonan, K. W.; Bingham, G. J.
1980-01-01
An investigation was conducted in the Langley 6- by 28-inch transonic tunnel to determine the two-dimensional aerodynamic characteristics of three helicopter rotor airfoils at Reynolds numbers from typical model scale to full scale at Mach numbers from about 0.35 to 0.90. The model-scale Reynolds numbers ranged from about 700,000 to 1,500,000 and the full-scale Reynolds numbers ranged from about 3,000,000 to 6,600,000. The airfoils tested were the NACA 0012 (0 deg Tab), the SC 1095 R8, and the SC 1095. Both the SC 1095 and the SC 1095 R8 airfoils had trailing-edge tabs. The results of this investigation indicate that Reynolds number effects can be significant on the maximum normal-force coefficient and all drag-related parameters; namely, drag at zero normal force, maximum normal-force/drag ratio, and drag divergence Mach number. The increments in these parameters at a given Mach number owing to the model-scale to full-scale Reynolds number change are different for each of the airfoils.
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
Sequencing and analysis of 10967 full-length cDNA clones from Xenopus laevis and Xenopus tropicalis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morin, R D; Chang, E; Petrescu, A
2005-10-31
Sequencing of full-insert clones from full-length cDNA libraries from both Xenopus laevis and Xenopus tropicalis has been ongoing as part of the Xenopus Gene Collection initiative. Here we present an analysis of 10967 clones (8049 from X. laevis and 2918 from X. tropicalis). The clone set contains 2013 orthologs between X. laevis and X. tropicalis as well as 1795 paralog pairs within X. laevis. 1199 are in-paralogs, believed to have resulted from an allotetraploidization event approximately 30 million years ago, and the remaining 546 are likely out-paralogs that have resulted from more ancient gene duplications, prior to the divergence between the two species. We do not detect any evidence for positive selection by the Yang and Nielsen maximum likelihood method of approximating dN/dS. However, dN/dS for X. laevis in-paralogs is elevated relative to X. tropicalis orthologs. This difference is highly significant, and indicates an overall relaxation of selective pressures on duplicated gene pairs. Within both groups of paralogs, we found evidence of subfunctionalization, manifested as differential expression of paralogous genes among tissues, as measured by EST information from public resources. We have observed, as expected, a higher instance of subfunctionalization in out-paralogs relative to in-paralogs.
Entropy-based goodness-of-fit test: Application to the Pareto distribution
NASA Astrophysics Data System (ADS)
Lequesne, Justine
2013-08-01
Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
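The sample-entropy side of such a Kullback-Leibler test statistic is commonly estimated with the classic Vasicek spacing estimator of differential entropy. A minimal Python sketch (the paper's exact statistic and critical values are not reproduced here):

```python
import math
import random

def vasicek_entropy(sample, m=None):
    """Vasicek spacing estimator of differential entropy:
    H_mn = (1/n) * sum_i log( n * (x_(i+m) - x_(i-m)) / (2m) ),
    with order statistics clamped at the sample endpoints.
    Assumes continuous data (no tied values)."""
    x = sorted(sample)
    n = len(x)
    if m is None:
        m = max(1, int(round(math.sqrt(n) / 2)))
    total = 0.0
    for i in range(n):
        hi = x[min(i + m, n - 1)]
        lo = x[max(i - m, 0)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

# The differential entropy of U(0,1) is 0; the estimate converges for large n.
random.seed(1)
h_unif = vasicek_entropy([random.random() for _ in range(5000)])
```

By the Pythagorean equality described above, the Kullback-Leibler information from the sample to the fitted maximum entropy distribution reduces to the difference between the model's maximum entropy and such a sample entropy estimate; for a Pareto(α) distribution with unit scale, the model entropy is 1 + 1/α − log α.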
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2011 CFR
2011-10-01
... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2013 CFR
2013-10-01
... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2012 CFR
2012-10-01
... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...
40 CFR 1039.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...
40 CFR 1039.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...
40 CFR 1039.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...
40 CFR 1039.140 - What is my engine's maximum engine power?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...
Code of Federal Regulations, 2010 CFR
2010-10-01
... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...
Code of Federal Regulations, 2014 CFR
2014-10-01
... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...
Code of Federal Regulations, 2012 CFR
2012-10-01
... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...
Code of Federal Regulations, 2013 CFR
2013-10-01
... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...
Code of Federal Regulations, 2011 CFR
2011-10-01
... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...
Nagaraja, S; Soorya Prakash, K; Sudhakaran, R; Sathish Kumar, M
2016-12-01
This paper deals with the emission quality of a diesel engine, based on ecotoxicological studies using different environmental standard toxicity test methods, with reference to the Bharat and European emission norms. Based on these norms, Corn Oil Methyl Ester (COME) blended with diesel is tested in a compression ignition engine, and the performance and combustion characteristics are discussed. The corn oil was esterified, and the properties of the corn oil methyl ester were within the limits specified in ASTM D 6751-03. The COME was blended with diesel in different proportions: B20, B40, B60, B80, and B100. The emission and performance tests for the various blends of COME were carried out using a single-cylinder, four-stroke diesel engine and compared with the performance obtained with 100% diesel (D100). The results clearly show that COME has lower exhaust emissions and increased performance compared with D100, without any engine modifications, giving performance close to that of D100. The Specific Fuel Consumption (SFC) of B100 at full load is found to be 4% lower than that of D100. The maximum Brake Thermal Efficiency (BTE) of B100 is found to be 8.5% higher than that of D100 at full load, and the maximum BTE at part load for the different blends is 5.9% to 7.45% higher than that of D100. The exhaust gas emissions of Carbon Monoxide (CO), Carbon Dioxide (CO2), Hydrocarbons (HC), and Nitrogen Oxides (NOx) are found to be 2.3% to 18.8% lower than D100 at both part and full load. The heat release rate of the biodiesel and its blends is found to be 16% to 35% lower than that of D100 at part load, and 21% lower at full load. The results show that the measured emissions are well within the limits of the Bharat VI and European VI norms, leading to less pollution, less impact on the ecosystem, and making COME a potential substitute for fossil fuels. Copyright © 2016 Elsevier Inc. All rights reserved.
Full-Field Accommodation in Rhesus Monkeys Measured Using Infrared Photorefraction
He, Lin; Wendt, Mark
2012-01-01
Purpose. Full-field photorefraction was measured during accommodation in anesthetized monkeys to better understand the monkey as a model of human accommodation and how accommodation affects off-axis refraction. Methods. A photorefraction camera was rotated on a 30-cm-long rod in a horizontal arc, with the eye at the center of curvature of the arc so that the measurement distance remained constant. The resistance of a potentiometer attached to the rotation center of the rod changed proportionally with the rotation angle. Photorefraction and rotation angle were simultaneously measured at 30 Hz. Trial-lens calibrations were performed on-axis and across the full field in each eye. Full-field refraction measurements were compared using on-axis and full-field calibrations. In five iridectomized monkeys (mean age in years ± SD: 12.8 ± 0.9), full-field refraction was measured before and during carbachol iontophoresis stimulated accommodation, a total of seven times (with one repeat each in two monkeys). Results. Measurements over approximately 20 seconds had <0.1 D of variance and an angular resolution of 0.1°, from at least −30° to 30°. Photorefraction calibrations performed over the full field had a maximum variation in the calibration slopes within one eye of 90%. Applying full-field calibrations versus on-axis calibrations resulted in a decrease in the maximum SDs of the calculated refractions from 1.99 to 0.89 D for relative peripheral refractive error and from 4.68 to 1.99 D for relative accommodation. Conclusions. By applying full-field calibrations, relative accommodation in pharmacologically stimulated monkeys was found to be similar to that reported with voluntary accommodation in humans. PMID:22125278
Variability of Arctic Sea Ice as Determined from Satellite Observations
NASA Technical Reports Server (NTRS)
Parkinson, Claire L.
1999-01-01
The compiled, quality-controlled satellite multichannel passive-microwave record of polar sea ice now spans over 18 years, from November 1978 through December 1996, and is revealing considerable information about the Arctic sea ice cover and its variability. The information includes data on ice concentrations (percent areal coverages of ice), ice extents, ice melt, ice velocities, the seasonal cycle of the ice, the interannual variability of the ice, the frequency of ice coverage, and the length of the sea ice season. The data reveal marked regional and interannual variabilities, as well as some statistically significant trends. For the north polar ice cover as a whole, maximum ice extents varied over a range of 14,700,000 - 15,900,000 sq km, while individual regions experienced much greater percent variations, for instance, with the Greenland Sea having a range of 740,000 - 1,110,000 sq km in its yearly maximum ice coverage. In spite of the large variations from year to year and region to region, overall the Arctic ice extents showed a statistically significant, 2.80% / decade negative trend over the 18.2-year period. Ice season lengths, which vary from only a few weeks near the ice margins to the full year in the large region of perennial ice coverage, also experienced interannual variability, along with spatially coherent overall trends. Linear least squares trends show the sea ice season to have lengthened in much of the Bering Sea, Baffin Bay, the Davis Strait, and the Labrador Sea, but to have shortened over a much larger area, including the Sea of Okhotsk, the Greenland Sea, the Barents Sea, and the southeastern Arctic.
NASA Glenn Icing Research Tunnel: 2014 and 2015 Cloud Calibration Procedures and Results
NASA Technical Reports Server (NTRS)
Steen, Laura E.; Ide, Robert F.; Van Zante, Judith F.; Acosta, Waldo J.
2015-01-01
This report summarizes the current status of the NASA Glenn Research Center (GRC) Icing Research Tunnel cloud calibration: specifically, the cloud uniformity, liquid water content, and drop-size calibration results from both the January-February 2014 full cloud calibration and the January 2015 interim cloud calibration. Some aspects of the cloud have remained the same as what was reported for the 2014 full calibration, including the cloud uniformity from the Standard nozzles, the drop-size equations for Standard and Mod1 nozzles, and the liquid water content for large-drop conditions. Overall, the tests performed in January 2015 showed good repeatability to 2014, but there is new information to report as well. There have been minor updates to the Mod1 cloud uniformity on the north side of the test section. Also, successful testing with the OAP-230Y has allowed the IRT to re-expand its operating envelopes for large-drop conditions to a maximum median volumetric diameter of 270 microns. Lastly, improvements to the collection-efficiency correction for the SEA multi-wire have resulted in new calibration equations for Standard- and Mod1-nozzle liquid water content.
NASA Astrophysics Data System (ADS)
Schubert, Jochen E.; Burns, Matthew J.; Fletcher, Tim D.; Sanders, Brett F.
2017-10-01
This research outlines a framework for the case-specific assessment of Green Infrastructure (GI) performance in mitigating flood hazard in small urban catchments. The urban hydrologic modeling tool (MUSIC) is coupled with a fine resolution 2D hydrodynamic model (BreZo) to test to what extent retrofitting an urban watershed with GI, rainwater tanks and infiltration trenches in particular, can propagate flood management benefits downstream and support intuitive flood hazard maps useful for communicating and planning with communities. The hydrologic and hydraulic models are calibrated based on current catchment conditions, then modified to represent alternative GI scenarios including a complete lack of GI versus a full implementation of GI. Flow in the hydrologic/hydraulic models is forced using a range of synthetic rainfall events with annual exceedance probabilities (AEPs) between 1-63% and durations from 10 min to 24 h. Flood hazard benefits mapped by the framework include maximum flood depths and extents, flow intensity (m2/s), flood duration, and critical storm duration leading to maximum flood conditions. Application of the system to the Little Stringybark Creek (LSC) catchment shows that across the range of AEPs tested and for storm durations equal or less than 3 h, presently implemented GI reduces downstream flooded area on average by 29%, while a full implementation of GI would reduce downstream flooded area on average by 91%. A full implementation of GI could also lower maximum flow intensities by 83% on average, reducing the drowning hazard posed by urban streams and improving the potential for access by emergency responders. For storm durations longer than 3 h, a full implementation of GI lacks the capacity to retain the resulting rainfall depths and only reduces flooded area by 8% and flow intensity by 5.5%.
Li, Jianying; Fok, Alex S L; Satterthwaite, Julian; Watts, David C
2009-05-01
The aim of this study was to measure the full-field polymerization shrinkage of dental composites using an optical image correlation method. Bar specimens of cross-section 4 mm × 2 mm and length approximately 10 mm were light-cured with two irradiances, 450 mW/cm² and 180 mW/cm², respectively. The curing light was generated with an Optilux 501 (Kerr) and the two different irradiances were achieved by adjusting the distance between the light tip and the specimen. A single-camera 2D measuring system was used to record the deformation of the composite specimen for 30 min at a frequency of 0.1 Hz. The specimen surface under observation was sprayed with paint to produce sufficient contrast to allow tracking of individual points on the surface. The curing light was applied to one end of the specimen for 40 s, during which the painted surface was fully covered. After curing, the cover was removed immediately so that deformation of the painted surface could be recorded by the camera. The images were then analyzed with specialist software and the volumetric shrinkage determined along the beam length. A typical shrinkage strain field obtained on a specimen surface was highly non-uniform, even at positions of constant distance from the irradiation surface, indicating possible heterogeneity in material composition and shrinkage behavior in the composite. The maximum volumetric shrinkage strain of approximately 1.5% occurred at a subsurface distance of about 1 mm, instead of at the irradiation surface. After reaching its peak value, the shrinkage strain then gradually decreased with increasing distance along the beam length, before leveling off to a value of approximately 0.2% at a distance of 4-5 mm. The maximum volumetric shrinkage obtained agreed well with the value of 1.6% reported by the manufacturer for the composite examined in this work. Using an irradiance of 180 mW/cm² resulted in only slightly less polymerization shrinkage than using an irradiance of 450 mW/cm². 
Compared to the other measurement methods, the image correlation method is capable of producing full-field information about the polymerization shrinkage behavior of dental composites.
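As a minimal illustration of the image correlation principle used above, the following sketch (our own construction with invented sizes and shifts, not the authors' setup) tracks one speckle subset between two frames by maximising normalised cross-correlation:

```python
import numpy as np

# Toy 2D digital image correlation: recover a known integer pixel shift of a
# speckle pattern by maximising normalised cross-correlation (NCC) over a
# small search window.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))                   # "painted" speckle pattern, reference frame

true_shift = (3, 2)                          # (rows, cols) displacement to recover
cur = np.roll(ref, true_shift, axis=(0, 1))  # deformed frame (rigid shift)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Track one 16x16 subset centred in the reference frame.
r0, c0, h = 24, 24, 16
subset = ref[r0:r0 + h, c0:c0 + h]

best, best_shift = -2.0, None
for dr in range(-5, 6):
    for dc in range(-5, 6):
        cand = cur[r0 + dr:r0 + dr + h, c0 + dc:c0 + dc + h]
        score = ncc(subset, cand)
        if score > best:
            best, best_shift = score, (dr, dc)

print(best_shift)  # displacement of the tracked subset
```

A real DIC system does this for a dense grid of subsets with sub-pixel interpolation, which is how the full-field strain maps above are obtained.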
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article addresses the urgent problem of the reliability of geometric specification measurements of technical products. Its purpose is to improve the quality of control of part linear sizes by the two-point measurement method, and its task is to investigate the methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it also arises when measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
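The methodical limitation described above can be shown numerically. In this sketch (our own model with hypothetical dimensions, not the authors' data), a three-lobed profile r(t) = R + a·cos(3t) passes every two-point check while still deviating from a true cylinder:

```python
import numpy as np

# An odd-lobed form deviation is invisible to two-point ("caliper") measurement:
# the two diametrically opposed radii always sum to 2R.
R, a = 10.0, 0.05   # nominal radius and lobing amplitude (hypothetical units)
t = np.linspace(0.0, np.pi, 1801)

r1 = R + a * np.cos(3 * t)             # radius at angle t
r2 = R + a * np.cos(3 * (t + np.pi))   # radius at the diametrically opposed point
two_point = r1 + r2                    # what a two-point instrument reads

print(two_point.max() - two_point.min())   # ~0: the instrument reads 2R at every angle
print(2 * (R + a))                         # circumscribed (maximum-material) diameter is larger
```

The two-point reading is constant at 2R, yet the circumscribed diameter relevant to the functional maximum of material is 2(R + a), which is exactly the kind of methodical uncertainty the article analyses.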
Information flow in layered networks of non-monotonic units
NASA Astrophysics Data System (ADS)
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.
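A minimal sketch of one feedforward step of such a network, assuming a standard Hebbian layer-to-layer coupling and the binary odd non-monotonic transfer g(h) = sign(h) for |h| ≤ θ and -sign(h) otherwise (our construction, not the authors' code):

```python
import numpy as np

# One layer-to-layer retrieval step of a Hebbian layered network with
# non-monotonic binary units; large theta recovers the monotonic (sign) limit.
rng = np.random.default_rng(1)
N, p = 400, 3
xi_in = rng.choice([-1.0, 1.0], size=(p, N))    # patterns on layer l
xi_out = rng.choice([-1.0, 1.0], size=(p, N))   # associated patterns on layer l+1

J = xi_out.T @ xi_in / N                        # Hebbian feedforward couplings

def g(h, theta):
    s = np.sign(h)
    return np.where(np.abs(h) <= theta, s, -s)  # odd, non-monotonic for finite theta

s_in = xi_in[0]                  # present stored pattern 0 on layer l
h = J @ s_in                     # local fields on layer l+1
s_out = g(h, theta=10.0)         # large theta -> monotonic limit
overlap = float(s_out @ xi_out[0]) / N
print(overlap)                   # close to 1: the associated pattern is retrieved
```

At small loading p/N the crosstalk noise in h is weak and the overlap (the memory order parameter studied in the paper) is essentially 1 in the monotonic limit.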
Wallace, Chris; Xue, Ming-Zhan; Newhouse, Stephen J.; Marçano, Ana Carolina B.; Onipinla, Abiodun K.; Burke, Beverley; Gungadoo, Johannie; Dobson, Richard J.; Brown, Morris; Connell, John M.; Dominiczak, Anna; Lathrop, G. Mark; Webster, John; Farrall, Martin; Mein, Charles; Samani, Nilesh J.; Caulfield, Mark J.; Clayton, David G.; Munroe, Patricia B.
2006-01-01
Identification of the genetic influences on human essential hypertension and other complex diseases has proved difficult, partly because of genetic heterogeneity. In many complex-trait resources, additional phenotypic data have been collected, allowing comorbid intermediary phenotypes to be used to characterize more genetically homogeneous subsets. The traditional approach to analyzing covariate-defined subsets has typically depended on researchers’ previous expectations for definition of a comorbid subset and leads to smaller data sets, with a concomitant attrition in power. An alternative is to test for dependence between genetic sharing and covariates across the entire data set. This approach offers the advantage of exploiting the full data set and could be widely applied to complex-trait genome scans. However, existing maximum-likelihood methods can be prohibitively computationally expensive, especially since permutation is often required to determine significance. We developed a less computationally intensive score test and applied it to biometric and biochemical covariate data, from 2,044 sibling pairs with severe hypertension, collected by the British Genetics of Hypertension (BRIGHT) study. We found genomewide-significant evidence for linkage with hypertension and several related covariates. The strongest signals were with leaner-body-mass measures on chromosome 20q (maximum LOD=4.24) and with parameters of renal function on chromosome 5p (maximum LOD=3.71). After correction for the multiple traits and genetic locations studied, our global genomewide P value was .046. This is the first identity-by-descent regression analysis of hypertension to our knowledge, and it demonstrates the value of this approach for the incorporation of additional phenotypic information in genetic studies of complex traits. PMID:16826522
NASA Astrophysics Data System (ADS)
Nichols, K. A.; Johnson, J.; Goehring, B. M.; Balco, G.
2017-12-01
We present a suite of in situ 14C cosmogenic nuclide exposure ages from nunataks at the Lassiter Coast in West Antarctica on the west side of the Weddell Sea Embayment (WSE) to constrain the thinning history of the Ronne-Filchner Ice Shelf. Constraints on past ice extents in the WSE remain relatively understudied, despite the WSE draining 22% of the Antarctic Ice Sheet (AIS). Information lacking includes unambiguous geological evidence for the maximum Last Glacial Maximum (LGM) ice thickness and the timing of subsequent ice retreat in key peripheral locations. Past studies using long-lived cosmogenic nuclides have shown that, due to the cold-based nature of the AIS, inheritance of nuclide concentrations from previous periods of exposure is a common problem. We utilised the cosmogenic nuclide 14C to circumvent the issue of inheritance. The short half-life of 14C means measured concentrations are largely insensitive to inheritance, as relatively short periods of ice cover (20-30 kyr) result in significant 14C decay. Furthermore, samples saturated in 14C will demonstrate that their location was above the maximum LGM thickness of the ice sheet and exposed for at least the past ca. 35 kyr. Preliminary results from four samples indicate elevations between 63 and 360 m above the present-day ice surface elevations were deglaciated between 7 and 6 ka. With little exposed rock above these elevations (ca. 70 m), this may indicate that the locality was entirely covered by ice during the LGM. Additional 14C measurements will form a full elevation transect of samples to decipher the post-LGM thinning history of ice at this location.
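The decay argument above can be checked with a one-line calculation (assuming the standard 5,730-year 14C half-life and neglecting any production during burial):

```python
import math

# How much of an inherited in situ 14C inventory survives 20-30 kyr of ice cover?
HALF_LIFE_YR = 5730.0
lam = math.log(2) / HALF_LIFE_YR          # decay constant

for cover_yr in (20_000, 25_000, 30_000):
    remaining = math.exp(-lam * cover_yr)
    print(f"{cover_yr} yr of cover -> {remaining:.1%} of inherited 14C left")
```

Only a few percent of any pre-existing inventory survives such a burial interval, which is why 14C concentrations are largely insensitive to inheritance.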
A source with a 10^13 DT neutron yield on the basis of a spherical plasma focus chamber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavyalov, N. V.; Maslov, V. V.; Rumyantsev, V. G., E-mail: rumyantsev@expd.vniief.ru
2013-03-15
Results from preliminary experimental research of neutron emission generated by a spherical plasma focus chamber filled with an equal-component deuterium-tritium mixture are presented. At a maximum current amplitude in the discharge chamber of ~1.5 MA, neutron pulses with a full width at half-maximum of 75-80 ns and an integral yield of ~1.3 × 10^13 DT neutrons have been recorded.
Resch, K J; Walther, P; Zeilinger, A
2005-02-25
We have performed the first experimental tomographic reconstruction of a three-photon polarization state. Quantum state tomography is a powerful tool for fully describing the density matrix of a quantum system. We measured 64 three-photon polarization correlations and used a "maximum-likelihood" reconstruction method to reconstruct the Greenberger-Horne-Zeilinger state. The entanglement class has been characterized using an entanglement witness operator and the maximum predicted values for the Mermin inequality were extracted.
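The quoted figures of merit can be reproduced for the ideal, noise-free case. The sketch below builds the GHZ state and evaluates the Mermin operator M = XXX - XYY - YXY - YYX, whose quantum-mechanical maximum of 4 the GHZ state attains (local realism bounds |<M>| ≤ 2):

```python
import numpy as np

# Mermin operator expectation for the ideal three-photon GHZ state.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

M = kron3(X, X, X) - kron3(X, Y, Y) - kron3(Y, X, Y) - kron3(Y, Y, X)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)      # (|000> + |111>)/sqrt(2)

mermin = float(np.real(ghz.conj() @ M @ ghz))
print(mermin)  # 4.0
```

An experimentally reconstructed density matrix would give a value below 4; any value above 2 already rules out local realism.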
Development of 24GHz Rectenna for Receiving and Rectifying Modulated Waves
NASA Astrophysics Data System (ADS)
Shinohara, Naoki; Hatano, Ken
2014-11-01
In this paper, we show experimental results of RF-DC conversion with modulated 24 GHz waves. We have already developed a class-F MMIC rectenna with resonators for higher harmonics, operating with unmodulated 24 GHz microwaves for RF energy transfer. The dimensions of the MMIC rectifying circuit are 1 mm × 3 mm on GaAs. The maximum RF-DC conversion efficiency measured is 47.9% for a 210 mW microwave input at 24 GHz with a 120 Ω load. The class-F rectenna is based on a single-shunt full-wave rectifier. For future applications such as a simultaneous energy and information transfer system or energy harvesting from broadcasting waves, the input microwave will be modulated. In this paper, we show an experimental result of RF-DC conversion of the class-F rectenna with 24 GHz waves modulated by 16QAM as the 1st modulation and OFDM as the 2nd modulation.
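The reported operating point implies the following DC output, assuming only the definitions P_dc = η·P_rf and V = sqrt(P_dc·R_load):

```python
# Consistency check of the reported rectenna operating point
# (efficiency, input power and load taken from the abstract).
eta, p_rf_mw, r_load = 0.479, 210.0, 120.0

p_dc_mw = eta * p_rf_mw                      # DC output power in mW
v_dc = (p_dc_mw / 1000.0 * r_load) ** 0.5    # DC voltage across the 120-ohm load
print(f"P_dc = {p_dc_mw:.1f} mW, V_dc = {v_dc:.2f} V")
```

That is roughly 100 mW of DC power at about 3.5 V, a plausible operating point for an MMIC rectifier of this size.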
Nigg, Claudio R; Motl, Robert W; Horwath, Caroline; Dishman, Rod K
2012-01-01
Objectives Physical activity (PA) research applying the Transtheoretical Model (TTM) to examine group differences and/or change over time requires preliminary evidence of factorial validity and invariance. The current study examined the factorial validity and longitudinal invariance of TTM constructs recently revised for PA. Method Participants from an ethnically diverse sample in Hawaii (N=700) completed questionnaires capturing each TTM construct. Results Factorial validity was confirmed for each construct using confirmatory factor analysis with full-information maximum likelihood. Longitudinal invariance was evidenced across a shorter (3-month) and longer (6-month) time period via nested model comparisons. Conclusions The questionnaires for each validated TTM construct are provided, and can now be generalized across similar subgroups and time points. Further validation of the provided measures is suggested in additional populations and across extended time points. PMID:22778669
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems.
Thoemmes, Felix; Rose, Norman
2014-01-01
The treatment of missing data in the social sciences has changed tremendously during the last decade. Modern missing data techniques such as multiple imputation and full-information maximum likelihood are used much more frequently. These methods assume that data are missing at random. One very common approach to increase the likelihood that missing at random is achieved consists of including many covariates as so-called auxiliary variables. These variables are either included based on data considerations or in an inclusive fashion; that is, taking all available auxiliary variables. In this article, we point out that there are some instances in which auxiliary variables exhibit the surprising property of increasing bias in missing data problems. In a series of focused simulation studies, we highlight some situations in which this type of biasing behavior can occur. We briefly discuss possible ways how one can avoid selecting bias-inducing covariates as auxiliary variables.
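A toy simulation (ours, not from the article) of the basic missing-at-random mechanics: complete-case analysis biases the mean when missingness depends on an observed covariate, while a method that conditions on that covariate, here simple regression imputation, largely does not:

```python
import numpy as np

# Y is missing at random given an observed covariate X.
rng = np.random.default_rng(3)
n = 200_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)                    # true E[Y] = 0

observed = rng.random(n) < 1 / (1 + np.exp(-x))   # P(observe Y) rises with X -> MAR

cc_mean = float(y[observed].mean())               # complete-case estimate (biased upward)

# Regression imputation: fit y ~ x on observed cases, fill in the rest.
b = np.polyfit(x[observed], y[observed], 1)
y_imp = y.copy()
y_imp[~observed] = np.polyval(b, x[~observed])
imp_mean = float(y_imp.mean())

print(cc_mean, imp_mean)   # cc_mean is clearly biased; imp_mean is near 0
```

FIML and proper multiple imputation exploit the same conditioning idea; the article's warning is that a poorly chosen auxiliary covariate can push such estimates in the wrong direction instead.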
Lifetime costing of the body-in-white: Steel vs. aluminum
NASA Astrophysics Data System (ADS)
Han, Helen N.; Clark, Joel P.
1995-05-01
In order to make informed material choice decisions and to derive the maximum benefit from the use of alternative materials, the automobile producer must understand the full range of costs and benefits for each material. It is becoming clear that the conventional cost-benefit analysis structure currently used by the automotive industry must be broadened to include nontraditional costs such as the environmental externalities associated with the use of existing and potential automotive technologies. This article develops a methodology for comparing the costs and benefits associated with the use of alternative materials in automotive applications by focusing on steel and aluminum in the unibody body-in-white. Authors' Note: This is the first of two articles documenting a methodology for evaluating the lifetime monetary and environmental costs of alternative materials in automotive applications. This article addresses the traditional money costs while a subsequent paper, which is planned for the August issue, will address the environmental externalities.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood.
Three-dimensional cross point readout detector design for including depth information
NASA Astrophysics Data System (ADS)
Lee, Seung-Jae; Baek, Cheol-Ha
2018-04-01
We designed a depth-encoding positron emission tomography (PET) detector using a cross point readout method with wavelength-shifting (WLS) fibers. To evaluate the characteristics of the novel detector module and the PET system, we used DETECT2000 to model optical photon transport in the crystal array and GATE for the system simulation. The detector module is made up of four layers of scintillator arrays, five layers of WLS fiber arrays, and two sensor arrays. The WLS fiber arrays in each layer cross each other to transport light to each sensor array. The two sensor arrays are coupled to the forward and left sides of the WLS fiber array, respectively. Three-dimensional pixel identification was performed using a digital positioning algorithm. All pixels were well decoded, with the system resolution ranging from 2.11 mm to 2.29 mm at full width at half maximum (FWHM).
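The cross point idea can be sketched in a few lines: the two orthogonal sensor arrays see only row and column sums of the scintillation light, and a digital positioning rule recovers the pixel from the two argmax positions (the geometry and light yields below are invented, not the paper's):

```python
import numpy as np

# Cross point readout toy model: row/column light sums identify the pixel.
rng = np.random.default_rng(4)
n = 8
light = rng.poisson(2.0, size=(n, n)).astype(float)  # diffuse background light
light[5, 2] += 100.0                 # interaction deposits most light in pixel (5, 2)

row_signal = light.sum(axis=1)       # sensors on the left side (one per row)
col_signal = light.sum(axis=0)       # sensors on the forward side (one per column)

pixel = (int(np.argmax(row_signal)), int(np.argmax(col_signal)))
print(pixel)  # (5, 2)
```

The attraction of this scheme is channel count: 2n sensors read out an n × n array, and the extra fiber layers in the real detector add the depth coordinate.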
Monte Carlo simulations of nanoscale focused neon ion beam sputtering.
Timilsina, Rajendra; Rack, Philip D
2013-12-13
A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed.
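A side note on the beam specification above: a Gaussian beam quoted by FWHM has σ = FWHM/(2·sqrt(2·ln 2)) ≈ FWHM/2.355, which a toy Monte Carlo of landing positions confirms (this is only the beam-sampling step, not the full sputtering simulation):

```python
import math
import random

# Sample landing positions of a 1 nm FWHM Gaussian beam and recover the FWHM.
FWHM_NM = 1.0
k = 2.0 * math.sqrt(2.0 * math.log(2.0))   # FWHM / sigma conversion, ~2.355
sigma = FWHM_NM / k

random.seed(5)
hits = [random.gauss(0.0, sigma) for _ in range(100_000)]
est_sigma = math.sqrt(sum(h * h for h in hits) / len(hits))
fwhm_est = k * est_sigma
print(fwhm_est)   # ~1.0 nm
```

In the actual simulation each sampled ion is then followed through a chain of nuclear collisions; only the entry-point sampling is shown here.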
Phase-coherent engineering of electronic heat currents with a Josephson modulator
NASA Astrophysics Data System (ADS)
Fornieri, Antonio; Blanc, Christophe; Bosisio, Riccardo; D'Ambrosio, Sophie; Giazotto, Francesco
In this contribution we report the realization of the first balanced Josephson heat modulator designed to offer full control at the nanoscale over the phase-coherent component of electronic thermal currents. The ability to master the amount of heat transferred through two tunnel-coupled superconductors by tuning their phase difference is the core of coherent caloritronics, and is expected to be a key tool in a number of nanoscience fields, including solid state cooling, thermal isolation, radiation detection, quantum information and thermal logic. Our device provides magnetic-flux-dependent temperature modulations up to 40 mK in amplitude with a maximum of the flux-to-temperature transfer coefficient reaching 200 mK per flux quantum at a bath temperature of 25 mK. Foremost, it demonstrates the exact correspondence in the phase-engineering of charge and heat currents, breaking ground for advanced caloritronic nanodevices such as thermal splitters, heat pumps and time-dependent electronic engines.
Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita
2018-03-01
The prediction power of partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods have been studied for simultaneous quantitative analysis of the binary drug combination - doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analyses of pharmaceutical formulations. The obtained statistical parameters were satisfactory for calibration and validation sets. All developed methods can be successfully applied for simultaneous spectrophotometric determination of doxylamine and pyridoxine both in laboratory-prepared mixtures and commercial dosage forms.
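For readers unfamiliar with full-spectrum multivariate calibration, the simplest relative of the PLS and MCR-ALS models studied here is classical least squares (CLS), sketched below on synthetic Gaussian bands (these are not measured doxylamine/pyridoxine spectra, and real work would use PLS precisely because pure spectra are rarely available):

```python
import numpy as np

# Classical least squares: resolve a two-component mixture spectrum given the
# pure-component spectra, using the full spectral region at once.
wl = np.linspace(220.0, 320.0, 201)                      # wavelengths, nm
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)   # Gaussian absorption band
S = np.column_stack([band(260.0, 12.0), band(290.0, 15.0)])  # pure spectra (columns)

c_true = np.array([0.7, 0.3])        # "concentrations" of the two components
mixture = S @ c_true + 0.001 * np.random.default_rng(6).standard_normal(wl.size)

c_hat, *_ = np.linalg.lstsq(S, mixture, rcond=None)
print(c_hat)   # ~[0.7, 0.3] despite the spectral overlap
```

Using every wavelength simultaneously is what lets heavily overlapped spectra still be resolved, the same principle the PLS and MCR-ALS-CC models exploit.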
On thermal stress failure of the SNAP-19A RTG heat shield
NASA Technical Reports Server (NTRS)
Pitts, W. C.; Anderson, L. A.
1974-01-01
Results of a study on thermal stress problems in an amorphous graphite heat shield that is part of the launch-abort protection system for the SNAP-19A radioisotope thermoelectric generators (RTG) to be used on the Viking Mars Lander are presented. The first result is from a thermal stress analysis of a full-scale RTG heat source that failed to survive a suborbital entry flight test, possibly due to thermal stress failure. It was calculated that the maximum stress in the heat shield was only 50 percent of the ultimate strength of the material. To provide information on the stress failure criterion used for this calculation, some heat shield specimens were fractured under abort entry conditions in a plasma arc facility. It was found that in regions free of stress concentrations the POCO graphite heat shield material did fracture when the local stress reached the ultimate uniaxial stress of the material.
49 CFR 236.741 - Distance, stopping.
Code of Federal Regulations, 2010 CFR
2010-10-01
... portion of railroad at its maximum authorized speed, will travel during a full service application of the brakes, between the point where such application is initiated and the point where the train comes to a...
14 CFR 29.965 - Fuel tank tests.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the reaction of the contents, with the tank full, during maximum limit acceleration or emergency... motion about more than one axis is likely to be critical, the tank must be rocked about each critical...
14 CFR 27.965 - Fuel tank tests.
Code of Federal Regulations, 2010 CFR
2010-01-01
... reaction of the contents, with the tank full, during maximum limit acceleration or emergency deceleration... both sides of the horizontal (30 degrees total), about the most critical axis, for 25 hours. If motion...
14 CFR 29.965 - Fuel tank tests.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the reaction of the contents, with the tank full, during maximum limit acceleration or emergency... motion about more than one axis is likely to be critical, the tank must be rocked about each critical...
14 CFR 27.965 - Fuel tank tests.
Code of Federal Regulations, 2011 CFR
2011-01-01
... reaction of the contents, with the tank full, during maximum limit acceleration or emergency deceleration... both sides of the horizontal (30 degrees total), about the most critical axis, for 25 hours. If motion...
Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions
NASA Astrophysics Data System (ADS)
Xie, Jigang; Song, Wenyun
The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of mergers and acquisitions by Chinese listed firms is provided to demonstrate the feasibility and practicality of the method.
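The core ME computation can be sketched for a discrete risk variable: among all distributions on a given support matching a specified mean, the maximum entropy distribution has the exponential-family form p_i ∝ exp(λ·x_i), with λ found by one-dimensional search (the support values below are hypothetical):

```python
import math

# Maximum entropy distribution on a discrete support subject to a mean constraint.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]       # hypothetical loss levels

def mean_of(lam):
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return sum(x * w for x, w in zip(xs, ws)) / z

def maxent(target_mean, lo=-20.0, hi=20.0):
    for _ in range(200):              # bisection on the monotone map lam -> mean
        mid = (lo + hi) / 2.0
        if mean_of(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

p = maxent(3.0)
print(p)   # uniform: a mean equal to the support midpoint adds no information
```

With only the pre-acquisition mean constrained, the ME solution is the least-committal distribution consistent with it, which is exactly the property the paper exploits when data are scarce.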
50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.
Code of Federal Regulations, 2014 CFR
2014-10-01
... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...
50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.
Code of Federal Regulations, 2013 CFR
2013-10-01
... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...
50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.
Code of Federal Regulations, 2012 CFR
2012-10-01
... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...
Effects of Smoking on Respiratory Capacity and Control
ERIC Educational Resources Information Center
Awan, Shaheen N.; Alphonso, Vania A.
2007-01-01
The purpose of this study was to provide information concerning the possible early effects of smoking on measures of respiratory capacity and control in young adult female smokers vs. nonsmokers. In particular, maximum performance test results (vital capacity and maximum phonation time) and measures of air pressures and airflows during voiceless,…
RESONANT CAVITY EXCITATION SYSTEM
Baker, W.R.; Kerns, Q.A.; Riedel, J.
1959-01-13
An apparatus is presented for exciting a cavity resonator with a minimum of difficulty; more specifically, a sub-exciter and an amplifier-type pre-exciter for the high-frequency excitation of large cavities are described. Instead of applying full voltage to the main oscillator, a sub-excitation voltage is initially used to establish a base level of oscillation in the cavity. A portion of the cavity energy is coupled to the input of the pre-exciter, where it is amplified and fed back into the cavity when the pre-exciter is energized. After the voltage in the cavity resonator has reached its maximum value under excitation by the pre-exciter, full voltage is applied to the oscillator and the pre-exciter is turned off. The cavity is then excited to the maximum high-voltage value of radio frequency by the oscillator.
Tracking the first two seconds: three stages of visual information processing?
Jacob, Jane; Breitmeyer, Bruno G; Treviño, Melissa
2013-12-01
We compared visual priming and comparison tasks to assess information processing of a stimulus during the first 2 s after its onset. In both tasks, a 13-ms prime was followed at varying SOAs by a 40-ms probe. In the priming task, observers identified the probe as rapidly and accurately as possible; in the comparison task, observers determined as rapidly and accurately as possible whether or not the probe and prime were identical. Priming effects attained a maximum at an SOA of 133 ms and then declined monotonically to zero by 700 ms, indicating reliance on relatively brief visuosensory (iconic) memory. In contrast, the comparison effects yielded a multiphasic function, showing a maximum at 0 ms followed by a minimum at 133 ms, followed in turn by a maximum at 240 ms and another minimum at 720 ms, and finally a third maximum at 1,200 ms before declining thereafter. The results indicate three stages of prime processing that we take to correspond to iconic visible persistence, iconic informational persistence, and visual working memory, with the first two used in the priming task and all three in the comparison task. These stages are related to stages presumed to underlie stimulus processing in other tasks, such as those giving rise to the attentional blink.
A systematic examination of the bone destruction pattern of the two-shot technique
Stoetzer, Marcus; Stoetzer, Carsten; Rana, Majeed; Zeller, Alexander; Hanke, Alexander; Gellrich, Nils-Claudius; von See, Constantin
2014-01-01
Introduction: The two-shot technique is an effective stopping power method. The precise mechanisms of action on the bone and soft-tissue structures of the skull; however, remain largely unclear. The aim of this study is to compare the terminal ballistics of the two-shot and single-shot techniques. Materials and Methods: 40 fresh pigs’ heads were randomly divided into 4 groups (n = 10). Either a single shot or two shots were fired at each head with a full metal jacket or a semi-jacketed bullet. Using thin-layer computed tomography and photography, the diameter of the destruction pattern and the fractures along the bullet path were then imaged and assessed. Results: A single shot fired with a full metal jacket bullet causes minor lateral destruction along the bullet path. With two shots fired with a full metal jacket bullet, however, the maximum diameter of the bullet path is significantly greater (P < 0.05) than it is with a single shot fired with a full metal jacket bullet. In contrast, the maximum diameter with a semi-jacketed bullet is similar with the single-shot and two-shot techniques. Conclusion: With the two-shot technique, a full metal jacket bullet causes a destruction pattern that is comparable to that of a single shot fired with a semi-jacketed bullet. PMID:24812454
Methods for Environments and Contaminants: Drinking Water
EPA’s Safe Drinking Water Information System Federal Version (SDWIS/FED) includes information on populations served and violations of maximum contaminant levels or required treatment techniques by the nation’s 160,000 public water systems.
The official websites of blood centers in China: A nationwide cross-sectional study.
Hu, Huiying; Wang, Jing; Zhu, Ming
2017-01-01
Blood collection agencies worldwide are facing ongoing and increasing medical demands for blood products. Many potential donors search for related information online before deciding whether or not to donate blood. However, little is known about the online information and services provided by blood centers in China, despite the constant increase in internet users. Our research investigates the number of blood centers' official websites and their quality, and highlights the deficiencies that require future advances. Identified official websites of blood centers were scored using a newly developed evaluation instrument with 42 items concerning technical aspects, information quality, information comprehensiveness and interactive services. Scores of websites were compared between blood centers of different levels (provincial vs. regional blood centers) and locations (blood centers located in economically developed vs. developing regions). Of the 350 blood centers in China, 253 had working official websites, and the mean overall score of these websites was 24.7 out of 42. 79.1% of websites were rated as fair (50-75% of maximum), 5.5% as good (≥75% of maximum) and 15.4% as poor (25-50% of maximum). Websites got very low sub-scores in information quality (mean = 3.8; range 1-8; maximum = 9) and interactive services (3.3; 0-10; 10). Higher proportions of provincial (vs. regional) blood centers and economically developed (vs. developing) blood centers had official websites (p = 0.044 and p = 0.001, respectively), with better overall quality (p < 0.001 and p < 0.01) and better sub-scores (in all of the four sections, and in technical aspects and information quality, respectively). Website overall scores were positively correlated with the number of people served by each blood center (p < 0.001) and with the donation rate of each province (p = 0.046).
This study suggests there is a need to further develop and improve official websites in China, especially for regional and inland blood centers. The poor information quality and interactive services provided by these websites is of particular concern, given the challenges in blood donor counselling and recruitment.
The official websites of blood centers in China: A nationwide cross-sectional study
Hu, Huiying; Wang, Jing
2017-01-01
Background Blood collection agencies worldwide are facing ongoing and increasing medical demands for blood products. Many potential donors would search related information online before making decision of whether or not to donate blood. However, there is little knowledge of the online information and services provided by blood centers in China, despite the constantly increase of internet users. Our research investigates the number of blood centers’ official websites and their quality, and highlights the deficiencies that required future advances. Methods Identified official websites of blood centers were scored using a newly developed evaluation instrument with 42 items concerning technical aspects, information quality, information comprehensiveness and interactive services. Scores of websites were compared between blood centers with different level (provincial vs. regional blood centers) and location (blood centers located in economically developed vs. developing region). Results For the 253 working official websites all the 350 blood centers in China, and the mean overall score of websites was 24.7 out of 42. 79.1% websites were rated as fair (50–75% of maximum), 5.5% as good (≥75% of maximum) and 15.4% as poor(25–50% of maximum;). Websites got very low sub-scores in information quality (mean = 3.8; range 1–8; maximum = 9) and interactive services (3.3; 0–10; 10). Higher proportions of provincial (vs. regional) blood centers and economically developed (vs. developing) blood centers had official websites (p = 0.044 and p = 0.001; respectively) with better overall quality (p<0.001 and p <0.01) and better sub-scores (in all of the four sections and in technical aspects and information quality). Website overall scores was positively correlated with the number of people served by each blood center (p< 0.001) and the donation rate of each province (p = 0.046). 
Conclusions This study suggests there is a need to further develop and improve official websites in China, especially for regional and inland blood centers. The poor information quality and interactive services provided by these websites is of particular concern, given the challenges in blood donor counselling and recruitment. PMID:28793324
A robot control formalism based on an information quality concept
NASA Technical Reports Server (NTRS)
Ekman, A.; Torne, A.; Stromberg, D.
1994-01-01
A relevance measure based on Jaynes' maximum entropy principle is introduced. Information quality is the conjunction of accuracy and relevance. The formalism based on information quality is developed for one-agent applications. The robot requires a well-defined working environment in which the properties of each object must be accurately specified.
32 CFR 637.14 - Use of National Crime Information Center (NCIC).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true Use of National Crime Information Center (NCIC). 637.14 Section 637.14 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY... Use of National Crime Information Center (NCIC). Provost marshals will make maximum use of NCIC...
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrast a confirmatory factor analysis of the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, with a Bayesian approach using weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed, as well as potential advantages of and caveats with the Bayesian approach.
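The contrast between maximum likelihood and Bayesian estimation with weakly informative priors can be sketched with a much simpler conjugate model. This is an illustration only, not the authors' SMS-II analysis: the data, the Normal(0, 0.5) prior, and the assumption of a known data variance are all invented for the example.

```python
import numpy as np

# Illustrative sketch: estimating a normal mean by maximum likelihood vs. the
# posterior mean under a weakly informative Normal(m0, s0^2) prior, with the
# data variance assumed known so the conjugate update applies in closed form.
rng = np.random.default_rng(0)
y = rng.normal(loc=0.3, scale=1.0, size=25)  # small sample, true mean 0.3

sigma2 = 1.0          # assumed known data variance (illustrative)
m0, s0_2 = 0.0, 0.5   # weakly informative prior: centered at 0 but not fixed there
n = y.size

mle = y.mean()  # the maximum likelihood estimate of the mean

# Conjugate update: posterior precision = prior precision + data precision,
# posterior mean = precision-weighted average of prior mean and data.
post_prec = 1.0 / s0_2 + n / sigma2
post_mean = (m0 / s0_2 + y.sum() / sigma2) / post_prec

print(f"MLE: {mle:.3f}, posterior mean: {post_mean:.3f}")
# The weak prior shrinks the estimate toward (not to) zero, mirroring how
# weakly informative priors on cross-loadings shrink them without fixing them.
```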
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phan, Thien Q.; Levine, Lyle E.; Lee, I-Fang
2016-04-23
Synchrotron X-ray microbeam diffraction was used to measure the full elastic long range internal strain and stress tensors of low dislocation density regions within the submicrometer grain/subgrain structure of equal-channel angular pressed (ECAP) aluminum alloy AA1050 after 1, 2, and 8 passes using route BC. This is the first time that full tensors were measured in plastically deformed metals at this length scale. The maximum (most tensile or least compressive) principal elastic strain directions for the unloaded 1 pass sample for the grain/subgrain interiors align well with the pressing direction, and are more random for the 2 and 8 pass samples. The measurements reported here indicate that the local stresses and strains become increasingly isotropic (homogenized) with increasing ECAP passes using route BC. The average maximum (in magnitude) LRISs are -0.43 σa for 1 pass, -0.44 σa for 2 pass, and 0.14 σa for the 8 pass sample. Furthermore, these LRISs are larger than those reported previously because those earlier measurements were unable to measure the full stress tensor. Significantly, the measured stresses are inconsistent with the two-component composite model.
Information models of software productivity - Limits on productivity growth
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to be somewhat dependent on the reuse policy: a general 'reuse everything' policy can lead to a somewhat slower productivity growth than a specialized reuse policy.
78 FR 22798 - Hazardous Materials: Revision of Maximum and Minimum Civil Penalties
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-17
... ``Moving Ahead for Progress in the 21st Century Act'' (MAP-21), effective October 1, 2012, the maximum..., DC 20590-0001. SUPPLEMENTARY INFORMATION: I. Civil Penalty Amendments In Section 33010 of MAP-21 (Pub... Safety Administration. [FR Doc. 2013-08981 Filed 4-16-13; 8:45 am] BILLING CODE 4910-60-P ...
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
ERIC Educational Resources Information Center
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
NASA Astrophysics Data System (ADS)
Salazar, Fernando; San-Mauro, Javier; Celigueta, Miguel Ángel; Oñate, Eugenio
2017-07-01
Dam bottom outlets play a vital role in dam operation and safety, as they allow controlling the water surface elevation below the spillway level. For partial openings, water flows under the gate lip at high velocity and drags air downstream of the gate, which may cause damage due to cavitation and vibration. The benefit of installing air vents in dam bottom outlets is well known to practitioners. The design of this element depends basically on the maximum air flow through the air vent, which in turn is a function of the specific geometry and the boundary conditions. The intrinsic features of this phenomenon make it hard to analyse either on site or in full-scale experimental facilities. As a consequence, empirical formulas are frequently employed, which offer a conservative estimate of the maximum air flow. In this work, the particle finite element method (PFEM) was used to model the air-water interaction in the Susqueda Dam bottom outlet with different gate openings. Specific enhancements of the formulation were developed to consider air-water interaction. The results were compared with conventional design criteria and with information gathered on site during the gate operation tests. This analysis suggests that numerical modelling with the PFEM can be helpful for the design of this kind of hydraulic works.
Morales, Leonardo Fabio; Gordon-Larsen, Penny; Guilkey, David
2016-12-01
We estimate a structural dynamic model of the determinants of obesity. In addition to including many of the well-recognized endogenous factors mentioned in the literature as obesity determinants, we also model the individual's residential location as a choice variable, which is the main contribution of this paper to the literature. This allows us to control for an individual's self-selection into communities that possess the types of amenities in the built environment which in turn affect obesity-related behaviors such as physical activity (PA) and fast food consumption. We specify reduced form equations for a set of endogenous demand decisions, together with an obesity structural equation. The whole system of equations is jointly estimated by a semi-parametric full information log-likelihood method that allows for a general pattern of correlation in the errors across equations. Our model predicts a reduction in adult obesity of 7 percentage points as a result of a continued high level of PA from adolescence into adulthood; a reduction of 0.7 (3) percentage points in adult obesity as a result of a one standard deviation reduction in weekly fast food consumption for women (men); and a reduction of 0.02 (0.05) in adult obesity as a result of a one standard deviation change in several neighborhood amenities for women (men). Another key finding is that controlling for residential self-selection has substantive implications. To our knowledge, this has not yet been documented within a full information maximum likelihood framework. PMID:27459276
Comparison of the 1D flux theory with a 2D hydrodynamic secondary settling tank model.
Ekama, G A; Marais, P
2004-01-01
The applicability of the 1D idealized flux theory (1DFT) for design of secondary settling tanks (SSTs) is evaluated by comparing its predicted maximum surface overflow rate (SOR) and solids loading rate (SLR) with those calculated from the 2D hydrodynamic model SettlerCAD, using as a basis 35 full-scale SST stress tests conducted on different SSTs with diameters from 30 to 45 m and side water depths from 2.25 to 4.1 m, with and without Stamford baffles. From the simulations, a relatively consistent pattern emerged: the 1DFT can be used for design, but its predicted maximum SLR needs to be reduced by an appropriate flux rating, the magnitude of which depends mainly on SST depth and hydraulic loading rate (HLR). Simulations of the sloping-bottom shallow (1.5-2.5 m SWD) Dutch SSTs tested by STOWa and the Watts et al. SST, all with doubled SWDs, and of the Darvill new (4.1 m) and old (2.5 m) SSTs with interchanged depths, were run to confirm the sensitivity of the flux rating to depth and HLR. Simulations with and without a Stamford baffle were also done. While the design of the internal features of the SST, such as baffling, has a marked influence on the effluent SS concentration for underloaded SSTs, these features appeared to have only a small influence on the flux rating, i.e. capacity, of the SST. In the meantime, until more information is obtained, it would appear from the simulations so far that the flux rating of 0.80 of the 1DFT maximum SLR recommended by Ekama and Marais remains a reasonable value to apply in the design of full-scale SSTs; for deep SSTs (4 m SWD) the flux rating could be increased to 0.85, and for shallow SSTs (2.5 m SWD) decreased to 0.75.
It is recommended (i) that, although the apparent interrelationship between SST flux rating and depth suggests some optimization of the volume of the SST, this be avoided, and (ii) that the depth of the SST be designed independently of the surface area, as is the usual practice; once the depth is selected, the appropriate flux rating is applied to the 1DFT estimate of the surface area.
NASA Astrophysics Data System (ADS)
Caticha, Ariel
2007-11-01
What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
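The updating scheme described above can be illustrated with the classic dice example: start from a uniform prior and impose a mean constraint; the minimum-relative-entropy (maximum relative entropy) update is an exponential tilting of the prior. This sketch is in the spirit of the abstract, not code from it; the target mean of 4.5 is invented.

```python
import numpy as np

# ME updating sketch: uniform prior over die faces, constraint E[face] = 4.5.
# The update has the exponential-family form p_i ∝ q_i * exp(lam * i);
# the Lagrange multiplier lam is found by bisection on the constraint.
faces = np.arange(1, 7)
q = np.full(6, 1 / 6)   # prior
target = 4.5            # moment constraint (illustrative)

def mean_for(lam):
    w = q * np.exp(lam * faces)
    p = w / w.sum()
    return p, p @ faces

lo, hi = -5.0, 5.0
for _ in range(200):    # bisection: the constrained mean is increasing in lam
    mid = 0.5 * (lo + hi)
    _, m = mean_for(mid)
    if m < target:
        lo = mid
    else:
        hi = mid

p, m = mean_for(0.5 * (lo + hi))
print(np.round(p, 4), round(m, 4))  # tilted posterior; mean matches the constraint
```

Because the target mean exceeds the prior mean of 3.5, the multiplier is positive and the updated distribution tilts probability toward higher faces.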
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and Bayes estimation under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with R. The results are then displayed in tables to facilitate the comparisons.
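The comparison described here can be sketched in a few lines, using Python instead of R and the posterior mean (squared-error loss) in place of the paper's precautionary, entropy, and L1 losses. The estimator formulas follow from the Rayleigh likelihood and Jeffreys' prior; the true parameter, sample size, and replication count are invented for illustration.

```python
import numpy as np

# Rayleigh density: f(x; s2) = (x/s2) exp(-x^2 / (2*s2)), parameter s2 = sigma^2.
# MLE:  s2_hat = sum(x^2) / (2n).
# Bayes (Jeffreys' prior 1/s2): posterior is InvGamma(n, sum(x^2)/2),
# so the posterior mean is sum(x^2) / (2(n-1)).
rng = np.random.default_rng(42)
sigma2_true, n, reps = 4.0, 20, 5000

mle_est, bayes_est = [], []
for _ in range(reps):
    x = rng.rayleigh(scale=np.sqrt(sigma2_true), size=n)
    s = np.sum(x**2)
    mle_est.append(s / (2 * n))          # maximum likelihood estimate
    bayes_est.append(s / (2 * (n - 1)))  # posterior mean under Jeffreys' prior

for name, est in [("MLE", np.array(mle_est)), ("Bayes", np.array(bayes_est))]:
    bias = est.mean() - sigma2_true
    mse = np.mean((est - sigma2_true) ** 2)
    print(f"{name}: bias={bias:+.4f}  MSE={mse:.4f}")
```

The MLE is unbiased for this parameterization, while the posterior mean carries a small positive bias of order 1/n; tabulating bias and MSE this way mirrors the paper's comparison strategy.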
NASA Astrophysics Data System (ADS)
Wang, Z.; Li, T.; Pan, L.; Kang, Z.
2017-09-01
With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. Depth information can help to distinguish regions that are difficult to segment out of RGB images because of similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an encoder-decoder fully convolutional network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and specific features of RGB and D images in the network, enhancing classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD for the whole batch of features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method achieves good performance on indoor RGB-D image semantic segmentation.
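The MMD distance at the heart of MK-MMD can be sketched with the standard biased estimator, averaged over several RBF kernel bandwidths (the "multiple kernel" part). The function names, bandwidths, and sample shapes below are illustrative, not taken from the paper's implementation.

```python
import numpy as np

# Biased squared-MMD estimate between two samples, averaged over several RBF
# kernel bandwidths (a minimal stand-in for the MK-MMD idea).
def rbf_kernel(a, b, gamma):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(x, y, gammas=(0.25, 0.5, 1.0)):
    vals = []
    for g in gammas:
        kxx, kyy, kxy = rbf_kernel(x, x, g), rbf_kernel(y, y, g), rbf_kernel(x, y, g)
        vals.append(kxx.mean() + kyy.mean() - 2 * kxy.mean())
    return float(np.mean(vals))  # average over kernels

rng = np.random.default_rng(1)
same = mmd2(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)))
diff = mmd2(rng.normal(0, 1, (200, 8)), rng.normal(2, 1, (200, 8)))
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
# The statistic stays small for matching feature distributions and grows
# with distribution mismatch, which is what makes it usable as a training
# signal for aligning RGB and depth feature maps.
```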
Changes in maximum bite force related to extension of the head.
Hellsing, E; Hagberg, C
1990-05-01
The maximum bite force and position of the hyoid bone during natural and extended head posture were studied in 15 adults. All participants had normal occlusions and full dentitions. In addition, there were no signs or symptoms of craniomandibular disorders. The bite force was measured with a bite force sensor placed between the first molars. Six registrations of gradually increasing bite force up to a maximum were made with randomized natural and extended head postures. With one exception, the mean maximum bite force value was found to be higher for every subject with extended head posture compared to natural head posture. The sample mean was 271.6 Newton in natural head posture and 321.5 Newton with 20 degrees extension. With changed head posture, the cephalometric measurements pointed towards a changed position of the hyoid bone in relation to the mandible and pharyngeal airway. The cephalometric changes in the position of the hyoid bone could be due to a changed interplay between the elevator and depressor muscle groups. This was one factor which could have influenced the registered maximum bite force.
Lischer, Heidi E L; Excoffier, Laurent; Heckel, Gerald
2014-04-01
Phylogenetic reconstruction of the evolutionary history of closely related organisms may be difficult because of the presence of unsorted lineages and of a relatively high proportion of heterozygous sites that are usually not handled well by phylogenetic programs. Genomic data may provide enough fixed polymorphisms to resolve phylogenetic trees, but the diploid nature of sequence data remains analytically challenging. Here, we performed a phylogenomic reconstruction of the evolutionary history of the common vole (Microtus arvalis) with a focus on the influence of heterozygosity on the estimation of intraspecific divergence times. We used genome-wide sequence information from 15 voles distributed across the European range. We provide a novel approach to integrate heterozygous information in existing phylogenetic programs by repeated random haplotype sampling from sequences with multiple unphased heterozygous sites. We evaluated the impact of the use of full, partial, or no heterozygous information for tree reconstructions on divergence time estimates. All results consistently showed four deep and strongly supported evolutionary lineages in the vole data. These lineages undergoing divergence processes split only at the end or after the last glacial maximum based on calibration with radiocarbon-dated paleontological material. However, the incorporation of information from heterozygous sites had a significant impact on absolute and relative branch length estimations. Ignoring heterozygous information led to an overestimation of divergence times between the evolutionary lineages of M. arvalis. We conclude that the exclusion of heterozygous sites from evolutionary analyses may cause biased and misleading divergence time estimates in closely related taxa.
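The repeated-random-haplotype-sampling idea above is simple to sketch: at each unphased heterozygous site, encoded as an IUPAC ambiguity code, pick one of the two alleles at random, yielding a pseudo-haploid sequence per draw; repeating the draw and rerunning the tree estimation propagates heterozygosity into the analysis. The IUPAC mapping is standard; the example sequence and everything else below is illustrative, not the authors' pipeline.

```python
import random

# Two-allele IUPAC ambiguity codes for heterozygous sites.
IUPAC = {"R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC"}

def sample_haplotype(seq, rng):
    # Resolve each heterozygous site by a random allele draw;
    # homozygous sites pass through unchanged.
    return "".join(rng.choice(IUPAC[b]) if b in IUPAC else b for b in seq)

rng = random.Random(7)
diploid = "ACRGTYAW"  # R, Y, W mark unphased heterozygous sites (illustrative)
draws = [sample_haplotype(diploid, rng) for _ in range(4)]
for h in draws:
    print(h)  # one random resolution of the heterozygous sites per draw
```

Each draw would then be fed to an ordinary phylogenetic program, and the resulting trees or divergence times summarized across draws.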
Spatial Searching for Solar Physics Data
NASA Astrophysics Data System (ADS)
Hourcle, Joseph; Spencer, J. L.; The VSO Team
2013-07-01
The Virtual Solar Observatory allows searching across many collections of solar physics data, but does not yet allow a researcher to search based on the location and extent of the observation, other than by selecting general categories such as full disk or off limb. High resolution instruments that observe only a portion of the solar disk require greater specificity than is currently available. We believe that finer-grained spatial searching will allow for improved access to data from existing instruments such as TRACE, XRT and SOT, as well as from upcoming missions such as ATST and IRIS. Our proposed solution should also help scientists to search on the field of view of full-disk images that are out of the Sun-Earth line, such as STEREO/EUVI and observations from the upcoming Solar Orbiter and Solar Probe Plus missions. We present our current work on cataloging sub-field images for spatial searching so that researchers can more easily search for observations of a given feature of interest, with the intent of soliciting information about researchers' requirements and recommendations for further improvements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... TRAINING ADMINISTRATION, DEPARTMENT OF LABOR PROVISIONS GOVERNING THE SENIOR COMMUNITY SERVICE EMPLOYMENT... are conducted in a manner to provide, to the maximum extent practicable, full and open competition in...
Inflight source noise of an advanced full-scale single-rotation propeller
NASA Technical Reports Server (NTRS)
Woodward, Richard P.; Loeffler, Irvin J.
1991-01-01
Flight tests to define the far-field tone source at cruise conditions were completed on the full-scale SR-7L advanced turboprop, which was installed on the left wing of a Gulfstream II aircraft. This program, designated Propfan Test Assessment (PTA), involved aeroacoustic testing of the propeller over a range of test conditions. These measurements defined source levels for input into long-distance propagation models to predict en route noise. Inflight data were taken for 7 test cases. The sideline directivities measured by the Learjet showed expected maximum levels near 105 degrees from the propeller upstream axis. However, azimuthal directivities based on the maximum observed sideline tone levels showed the highest levels below the aircraft. An investigation of the effect of propeller tip speed showed that the tone level reduction associated with reductions in propeller tip speed is more significant in the horizontal plane than below the aircraft.
NASA Astrophysics Data System (ADS)
Singleton, V. L.; Gantzer, P.; Little, J. C.
2007-02-01
An existing linear bubble plume model was improved, and data collected from a full-scale diffuser installed in Spring Hollow Reservoir, Virginia, were used to validate the model. The depth of maximum plume rise was simulated well for two of the three diffuser tests. Temperature predictions deviated from measured profiles near the maximum plume rise height, but predicted dissolved oxygen profiles compared very well with observations. A sensitivity analysis was performed. The gas flow rate had the greatest effect on predicted plume rise height and induced water flow rate, both of which were directly proportional to gas flow rate. Oxygen transfer within the hypolimnion was independent of all parameters except initial bubble radius and was inversely proportional for radii greater than approximately 1 mm. The results of this work suggest that plume dynamics and oxygen transfer can successfully be predicted for linear bubble plumes using the discrete-bubble approach.
Compact pulse generators with soft ferromagnetic cores driven by gunpowder and explosive.
Ben, Chi; He, Yong; Pan, Xuchao; Chen, Hong; He, Yuan
2015-12-01
Compact pulse generators which utilize soft ferromagnets as an initial energy carrier inside a multi-turn coil and hard ferromagnets to provide the initial magnetic field outside the coil have been studied. Two methods of reducing the magnetic flux in the generators have been studied: (1) igniting gunpowder to launch the core out of the generator, and (2) detonating explosives that demagnetize the core. Several types of compact generators were explored to verify the feasibility. The generators with an 80-turn coil that utilize gunpowder were capable of producing pulses with amplitude 78.6 V and a full width at half maximum of 0.41 ms. The generators with a 37-turn coil that utilize explosive were capable of producing pulses with amplitude 1.41 kV and a full width at half maximum of 11.68 μs. These two methods were both successful, but produced voltage waveforms with significantly different characteristics.
A Low-Noise Germanium Ionization Spectrometer for Low-Background Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalseth, Craig E.; Colaresi, Jim; Collar, Juan I.
2016-12-01
Recent progress on the development of very low energy threshold high purity germanium ionization spectrometers has produced an instrument of 1.2 kg mass and excellent noise performance. The detector was installed in a low-background cryostat intended for use in a low mass, WIMP dark matter direct detection search. The integrated detector and low background cryostat achieved noise performance of 98 eV full-width half-maximum of an input electronic pulse generator peak and gamma-ray energy resolution of 1.9 keV full-width half-maximum at the 60Co gamma-ray energy of 1332 keV. This Transaction reports the thermal characterization of the low-background cryostat, specifications of the newly prepared 1.2 kg p-type point contact germanium detector, and the ionization spectroscopy – energy resolution and energy threshold – performance of the integrated system.
Eby, Joshua; Leembruggen, Madelyn; Suranyi, Peter; ...
2016-12-15
Axion stars, gravitationally bound states of low-energy axion particles, have a maximum mass allowed by gravitational stability. Weakly bound states obtaining this maximum mass have sufficiently large radii such that they are dilute, and as a result, they are well described by a leading-order expansion of the axion potential. Here, heavier states are susceptible to gravitational collapse. Inclusion of higher-order interactions, present in the full potential, can give qualitatively different results in the analysis of collapsing heavy states, as compared to the leading-order expansion. In this work, we find that collapsing axion stars are stabilized by repulsive interactions present in the full potential, providing evidence that such objects do not form black holes. In the last moments of collapse, the binding energy of the axion star grows rapidly, and we provide evidence that a large amount of its energy is lost through rapid emission of relativistic axions.
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
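The maximum-likelihood event estimation described above can be sketched with a toy one-dimensional detector: an event at position x0 produces Poisson counts on an array of sensors whose mean response falls off with distance, and the position is recovered by maximizing the Poisson log-likelihood over a grid. The Gaussian response model and every parameter below are invented for illustration; they are not the Center's calibrated detector response.

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = np.linspace(0.0, 10.0, 8)  # sensor positions along the detector (cm)

def mean_response(x, amplitude=200.0, width=1.5):
    # Assumed mean-count model: Gaussian light spread plus a small background.
    return amplitude * np.exp(-0.5 * ((sensors - x) / width) ** 2) + 1.0

x_true = 4.3
counts = rng.poisson(mean_response(x_true))  # one simulated event

# Poisson log-likelihood up to a constant: sum(k * log(mu) - mu),
# maximized by grid search over candidate positions.
grid = np.linspace(0.0, 10.0, 1001)
ll = [np.sum(counts * np.log(mean_response(x)) - mean_response(x)) for x in grid]
x_hat = grid[int(np.argmax(ll))]
print(f"true {x_true:.2f} cm, ML estimate {x_hat:.2f} cm")
```

The curvature of this same log-likelihood around its maximum is what the Fisher information matrix quantifies, which is how the paper connects estimation to detector optimization.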
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm, approximated maximum mutual information (AMMI), is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs trained with the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces error rates compared to ML training, increasing accuracy by approximately 3% on average.
Designing-and Redesigning-Information Services for Maximum Impact.
ERIC Educational Resources Information Center
Jones, Rebecca; Dysart, Jane
2002-01-01
Discusses innovative information services, including new services and the redesign of existing services. Describes the development process, including assessing the market and developing a marketing plan; and explains the implementation process, including monitoring client satisfaction and quality control. (LRW)
Automated Network Mapping and Topology Verification
2016-06-01
Collection of information includes amplifying data about the networked devices, such as hardware details, logical addressing schemes, and operating … The current military reliance on computer networks for operational missions and administrative duties makes network …
Band-aid for information loss from black holes
NASA Astrophysics Data System (ADS)
Israel, Werner; Yun, Zinkoo
2010-12-01
We summarize, simplify and extend recent work showing that small deviations from exact thermality in Hawking radiation, first uncovered by Kraus and Wilczek, have the capacity to carry off the maximum information content of a black hole. This goes a considerable way toward resolving a long-standing “information loss paradox.”
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-31
... Proposed Information Collection for Public Comment; Notice of Public Interest for the FY12 Transformation.... This Notice also lists the following information: Title of Proposal: FY12 Transformation Initiative... under the Transformation Initiative (TI) account. The maximum grant performance period is for 24 months...
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
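The weighted combination described above can be sketched in Python. The statistic below, a log10 of a posterior-weighted sum of per-order likelihood ratios, is an illustrative reconstruction; the function name and the exact weighting form are assumptions rather than the authors' published formula.

```python
import math

def weighted_lod(lods, posteriors):
    """Combine per-marker-order lod scores into one weighted statistic.

    lods       -- maximum lod score under each plausible marker order
    posteriors -- posterior probability of each order (must sum to 1)
    Returns log10 of the posterior-weighted sum of likelihood ratios,
    an assumed form of the 'weighted sum of maximum likelihoods' statistic.
    """
    if abs(sum(posteriors) - 1.0) > 1e-9:
        raise ValueError("posterior weights must sum to 1")
    return math.log10(sum(w * 10.0 ** lod for w, lod in zip(posteriors, lods)))

# Orders that are nearly as likely but give different curves still contribute:
combined = weighted_lod([3.2, 1.1, 0.4], [0.6, 0.3, 0.1])
```

With a single certain order the statistic reduces to that order's lod score, which is the behavior one would want from any such weighting.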
Konrad, Christopher P.; Munn, Mark D.
2016-01-01
Benthic chlorophyll a (BChl a) and environmental factors that influence algal biomass were measured monthly from February through October in 22 streams from three agricultural regions of the United States. At-site maximum BChl a ranged from 14 to 406 mg/m2 and generally varied with dissolved inorganic nitrogen (DIN): 8 out of 9 sites with at-site median DIN >0.5 mg/L had maximum BChl a >100 mg/m2. BChl a accrued and persisted at levels within 50% of at-site maximum for only one to three months. No dominant seasonal pattern for algal biomass accrual was observed in any region. A linear model with DIN, water surface gradient, and velocity accounted for most of the cross-site variation in maximum chlorophyll a (adjusted R2 = 0.7), but was no better than a single value of DIN = 0.5 mg/L for distinguishing between low- and high-biomass sites. Studies of nutrient enrichment require multiple samples to estimate algal biomass with sufficient precision given the magnitude of temporal variability of algal biomass. An effective strategy for regional stream assessment of nutrient enrichment could be based on a relation between maximum BChl a and DIN derived from repeat sampling at sites selected to represent a gradient in nutrients, and application of the relation to a larger number of sites with synoptic nutrient information.
1991-07-01
City: Idaho Falls; State: ID; Zip: 83413; Telephone Number: … 4. Facilities Location: Number & Street: Naval Construction Battalion … Pollutants prohibited from discharge into the POTW: (a) pollutants which create a fire or explosion hazard in the POTW; (b) pollutants which will cause corrosive structural damage to … Halon located in the laboratory; (1) 15-lb CO2 located in trailer 482. 4.3.8 Maximum Hypothetical Accident (Explosion): the maximum hypothetical …
Li, Jian; Kirkwood, Robert A; Baker, Luke J; Bosworth, David; Erotokritou, Kleanthis; Banerjee, Archan; Heath, Robert M; Natarajan, Chandra M; Barber, Zoe H; Sorel, Marc; Hadfield, Robert H
2016-06-27
We present low temperature nano-optical characterization of a silicon-on-insulator (SOI) waveguide integrated SNSPD. The SNSPD is fabricated from an amorphous Mo83Si17 thin film chosen to give excellent substrate conformity. At 350 mK, the SNSPD exhibits a uniform photoresponse under perpendicular illumination, corresponding to a maximum system detection efficiency of approximately 5% at 1550 nm wavelength. Under these conditions, a 10 Hz dark count rate and 51 ps full width at half maximum (FWHM) timing jitter are observed.
Thermodynamics of firms' growth
Zambrano, Eduardo; Hernando, Alberto; Hernando, Ricardo; Plastino, Angelo
2015-01-01
The distribution of firms' growth and firms' sizes is a topic under intense scrutiny. In this paper, we show that a thermodynamic model based on the maximum entropy principle, with dynamical prior information, can be constructed that adequately describes the dynamics and distribution of firms' growth. Our theoretical framework is tested against a comprehensive database of Spanish firms, which covers, to a very large extent, Spain's economic activity, with a total of 1 155 142 firms evolving along a full decade. We show that the empirical exponent of Pareto's law, a rule often observed in the rank distribution of large-size firms, is explained by the capacity of the economic system for creating/destroying firms, and can be used to measure the health of a capitalist-based economy. Indeed, our model predicts that when the exponent is larger than 1, creation of firms is favoured; when it is smaller than 1, destruction of firms is favoured instead; and when it equals 1 (matching Zipf's law), the system is in a full macroeconomic equilibrium, entailing ‘free’ creation and/or destruction of firms. For medium and smaller firm sizes, the dynamical regime changes, the whole distribution can no longer be fitted to a single simple analytical form and numerical prediction is required. Our model constitutes the basis for a full predictive framework regarding the economic evolution of an ensemble of firms. Such a structure can be potentially used to develop simulations and test hypothetical scenarios, such as economic crisis or the response to specific policy measures. PMID:26510828
A Procedure for Setting Environmentally Safe Total Maximum Daily Loads (TMDLs) for Selenium
A. Dennis Lemly
2002-01-01
This article presents a seven-step procedure for developing environmentally safe total maximum daily loads (TMDLs) for selenium. The need for this information stems from recent actions taken by the U.S. Environmental Protection Agency (EPA) that may require TMDLs for selenium and other contaminants that are impairing water bodies. However, there is no technical...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the possible set of feasible images, the image which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques like functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122 channel MEG system.
Approximate bandpass and frequency response models of the difference of Gaussian filter
NASA Astrophysics Data System (ADS)
Birch, Philip; Mitra, Bhargav; Bangalore, Nagachetan M.; Rehman, Saad; Young, Rupert; Chatwin, Chris
2010-12-01
The Difference of Gaussian (DOG) filter is widely used in optics and image processing as, among other things, an edge detection and correlation filter. It has important biological applications and appears to be part of the mammalian vision system. In this paper we analyse the filter and provide details of the full width half maximum, bandwidth and frequency response in order to aid the full characterisation of its performance.
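The bandpass behaviour analysed above follows from the DOG transfer function, the difference of the Fourier transforms of two Gaussians. The sketch below numerically locates the peak frequency and FWHM of the response; the function names and the unit-area Gaussian normalisation are illustrative assumptions, not the paper's notation.

```python
import math

def dog_response(f, sigma1, sigma2):
    """Transfer function of a DOG filter built from unit-area Gaussians
    (sigma2 > sigma1), which is bandpass: zero at DC, decaying at high f."""
    return math.exp(-2 * math.pi**2 * sigma1**2 * f**2) - \
           math.exp(-2 * math.pi**2 * sigma2**2 * f**2)

def bandpass_stats(sigma1, sigma2, df=1e-4, fmax=5.0):
    """Numerically locate the peak frequency and the full width at half
    maximum (FWHM) of the bandpass response on a frequency grid."""
    freqs = [i * df for i in range(int(fmax / df))]
    resp = [dog_response(f, sigma1, sigma2) for f in freqs]
    peak = max(resp)
    f_peak = freqs[resp.index(peak)]
    above = [f for f, r in zip(freqs, resp) if r >= peak / 2]
    return f_peak, above[-1] - above[0]

f_peak, fwhm = bandpass_stats(1.0, 1.6)
```

The peak location agrees with the closed form f² = ln(σ2²/σ1²) / (2π²(σ2² − σ1²)) obtained by differentiating the transfer function, which is a useful cross-check on the numerics.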
Integrating Experimentation, Modeling, and Visualization Through Full-Field Methods (Preprint)
2009-04-01
… including fatigue, fatigue-crack growth, tribology, and fracture toughness (Figure 1: full-field, in-plane maximum principal stress in a fully lamellar …); … studied; and 3) the correlation between 1) and 2). The emphases in this paper will be on the first and second areas, but the need for work in the third … problems; concentrating on displacements rather than strains is appropriate for this work. The top figure shows the overall displacements, after rigid-body …
NASA Astrophysics Data System (ADS)
Jin, Zhenyu; Lin, Jing; Liu, Zhong
2008-07-01
By studying the classical testing techniques (such as the Shack-Hartmann wave-front sensor) adopted in testing the aberration of ground-based astronomical optical telescopes, we put forward two testing methods founded on high-resolution image reconstruction technology. One is based on the averaged short-exposure OTF and the other is based on the speckle interferometric OTF of Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat, and J.-Y. Zhang indicated that the SITF statistical results are affected by the telescope optical aberrations, which means the SITF statistical results are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Telescope diffraction-limited information can be obtained through two statistical treatments of abundant speckle images: with the first method, we can extract low-frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; with the second method, we can obtain a more precise description of the telescope PSF with high-frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness, and compare the testing results with those obtained from the Shack-Hartmann wave-front sensor. This part is described in detail in our paper.
2014-01-01
SYMBOLS. SPP: Surface Plasmon Polaritons; RHC: Right-Hand Circular; LHC: Left-Hand Circular; FIB: Focused Ion Beam; RHS: Right-Handed Spiral; CCD: Charge-Coupled Detector; FWHM: Full Width at Half Maximum.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-24
... and number of counts per inmate for a maximum of three convicted offenses per inmate Prior time spent... military service, date and type of last discharge BJS uses the information gathered in NCRP in published...
Jordan, P.R.; Hart, R.J.
1985-01-01
A streamflow routing model was used to calculate the transit losses and traveltimes. Channel and aquifer characteristics, and the model control parameters, were estimated from available data and then verified to the extent possible by comparing model-simulated streamflow to observed streamflow at streamflow gaging stations. Transit losses and traveltimes for varying reservoir release rates and durations then were simulated for two different antecedent-streamflow (drought) conditions. For the severe-drought antecedent-streamflow condition, it was assumed that only the downstream water use requirement would be released from the reservoir. For a less severe drought (LSD) antecedent-streamflow condition, it was assumed that any releases from Marion Lake for water supply use downstream would be in addition to a nominal dry-weather release of 5 cu ft/sec. Water supply release rates of 10 and 25 cu ft/sec for the severe drought condition and 5, 10, and 25 cu ft/sec for the less severe drought condition were simulated for periods of 28 and 183 days commencing on July 1. Transit losses for the severe drought condition for all reservoir release rates and durations ranged from 12% to 78% of the maximum downstream flow rate and from 27% to 91% of the total volume of reservoir storage released. For the LSD condition, transit losses ranged from 7% to 29% of the maximum downstream flow rate and from 10% to 48% of the total volume of release. The 183-day releases had larger total transit losses, but losses on a percentage basis were less than the losses for the 28-day release period for both antecedent streamflow conditions. Traveltimes to full response (80% of the maximum downstream flow rate), however, showed considerable variation. For the release of 5 cu ft/sec during LSD conditions, base flow exceeded 80% of the maximum flow rate near the confluence; the traveltime to full response was undefined for those simulations.
For the releases of 10 and 25 cu ft/sec during the same drought condition, traveltimes to full response ranged from 4.4 to 6.5 days. For releases of 10 and 25 cu ft/sec during severe drought conditions, traveltimes to full response near the confluence with the Neosho River ranged from 8.3 to 93 days. (Lantz-PTT)
Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han Yuecai; Hu Yaozhong; Song Jian, E-mail: jsong2@math.rutgers.edu
2013-04-15
We obtain a maximum principle for the stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (a necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both the fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to a classical system driven by standard Brownian motions where the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.
The 3.5 micron light curves of long period variable stars. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Strecker, D. W.
1973-01-01
Infrared observations at an effective wavelength of 3.5 microns of a selected group of long period variable (LPV) stars are presented. Mira type and semiregular stars of M, S, and C spectral classifications were monitored throughout the full cycle of variability. Although the variable infrared radiation does not exactly repeat in intensity or time, the regularity is sufficient to produce mean 3.5 micron light curves. The 3.5 micron maximum radiation lags the visual maximum by about one-seventh of a cycle, while the minimum 3.5 micron intensity occurs nearly one-half cycle after infrared maximum. In some stars, there are inflections or humps on the ascending portion of the 3.5 micron light curve which may also be seen in the visual variations.
Schneider, Thomas D
2010-10-01
The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions in which molecular machines function, this is kBT ln 2 joules per bit (kB is Boltzmann's constant and T is the absolute temperature). Then an efficiency of binding can be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems, but also visual pigments, are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln 2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal.
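The efficiency calculation described above reduces to simple arithmetic: convert the binding free energy into bits using the second-law minimum of kBT ln 2 joules per bit, then divide the logo information by that figure. A minimal sketch, with illustrative numbers rather than values from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def min_joules_per_bit(temperature_k):
    """Second-law minimum dissipation to gain one bit: kB * T * ln 2."""
    return K_B * temperature_k * math.log(2)

def isothermal_efficiency(info_bits, binding_energy_joules, temperature_k):
    """Information in the sequence logo divided by the binding free energy
    after converting that energy to its equivalent number of bits."""
    energy_in_bits = binding_energy_joules / min_joules_per_bit(temperature_k)
    return info_bits / energy_in_bits
```

At body temperature the per-bit minimum is a few zeptojoules, so a binding site of a dozen bits costs a vanishingly small energy budget compared with, say, ATP hydrolysis.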
Low-Rate Information Transmission (LRIT) - NOAA Satellite Information
bulletins and notices, and an updated area where further explanations can be found. Figures: GOES-East full disk image viewed using LRIT; a zoomed-in portion of the LRIT full disk image. Contact Information: LRIT / EMWIN: Paul Seymour
Zhang, Y M; Huang, G; Lu, H W; He, Li
2015-08-15
A key issue facing integrated water resources management and water pollution control is how to address vague parametric information. A full credibility-based chance-constrained programming (FCCP) method is thus developed by introducing the new concept of credibility into the modeling framework. FCCP can deal with fuzzy parameters appearing concurrently in the objective and both sides of the constraints of the model, and also provides a credibility level indicating how much confidence one can place in the optimal modeling solutions. The method is applied to the Heshui River watershed in south-central China for demonstration. Results from the case study showed that groundwater would make up for the water shortage in terms of the shrinking surface water and rising water demand, and the optimized total pumpage of groundwater from both alluvial and karst aquifers would exceed 90% of its maximum allowable levels when the credibility level is higher than or equal to 0.9. It is also indicated that an increase in credibility level would induce a reduction in cost for surface water acquisition, a rise in cost for groundwater withdrawal, and negligible variation in cost for water pollution control. Copyright © 2015 Elsevier B.V. All rights reserved.
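Credibility-based methods build on a credibility measure for fuzzy parameters. As an illustration, the sketch below implements the standard credibility of a triangular fuzzy variable (the average of possibility and necessity, as in Liu's credibility theory); the exact fuzzy forms used in the watershed model may differ, so treat the function and its name as assumptions.

```python
def credibility_leq(x, a, b, c):
    """Credibility Cr{xi <= x} for a triangular fuzzy variable (a, b, c),
    defined as the average of the possibility and necessity measures."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2.0 * (b - a))        # possibility rises, necessity 0
    if x <= c:
        return 0.5 + (x - b) / (2.0 * (c - b))  # possibility 1, necessity rises
    return 1.0

# A constraint enforced at credibility level 0.9 is satisfied much closer to
# the worst case than one enforced at 0.5 (hypothetical fuzzy parameter):
threshold_09 = credibility_leq(9.0, 0.0, 5.0, 10.0)
```

Raising the credibility level pushes the constraint toward the pessimistic end of the fuzzy support, which is why the case study reports more conservative (costlier) groundwater decisions at levels of 0.9 and above.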
Performance seeking control: Program overview and future directions
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1993-01-01
A flight test evaluation of the performance-seeking control (PSC) algorithm on the NASA F-15 highly integrated digital electronic control research aircraft was conducted for single-engine operation at subsonic and supersonic speeds. The model-based PSC system was developed with three optimization modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, and maximum thrust at maximum dry and full afterburner throttle settings. Subsonic and supersonic flight testing was conducted at the NASA Dryden Flight Research Facility covering the three PSC optimization modes over the full throttle range. Flight results show substantial benefits. In the maximum thrust mode, thrust increased up to 15 percent at subsonic and 10 percent at supersonic flight conditions. The minimum fan turbine inlet temperature mode reduced temperatures by more than 100 °F at high altitudes. The minimum fuel flow mode decreased fuel consumption by up to 2 percent in the subsonic regime and almost 10 percent supersonically. These results demonstrate that PSC technology can benefit the next generation of fighter or transport aircraft. NASA Dryden is developing an adaptive aircraft performance technology system that is measurement based and uses feedback to ensure optimality. This program will address the technical weaknesses identified in the PSC program and will increase performance gains.
Elnashar, Magdy M; Awad, Ghada E; Hassan, Mohamed E; Mohy Eldin, Mohamed S; Haroun, Bakry M; El-Diwany, Ahmed I
2014-01-01
β-Galactosidase (β-gal) was immobilized by covalent binding on novel κ-carrageenan gel beads activated by a two-step method: the gel beads were soaked in polyethyleneimine followed by glutaraldehyde. A 2² full-factorial central composite experimental design was employed to optimize the conditions for maximum enzyme loading efficiency. 11.443 U of enzyme/g gel beads was achieved by soaking 40 units of enzyme with the gel beads for eight hours. Immobilization increased the optimal pH from 4.5 to 5.5 and the operational temperature from 50 to 55 °C compared to the free enzyme. The apparent Km after immobilization was 61.6 mM, compared to 22.9 mM for the free enzyme. The maximum velocity Vmax was 131.2 μmol·min⁻¹, compared with 177.1 μmol·min⁻¹ for the free enzyme. The full conversion experiment showed that the immobilized enzyme is as active as the free enzyme, as both reached their maximum 100% relative hydrolysis at 4 h. The reusability test proved the durability of the κ-carrageenan beads loaded with β-galactosidase over 20 cycles, with retention of 60% of the immobilized enzyme activity, making them convenient for industrial uses.
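The reported kinetic constants can be compared directly through the Michaelis-Menten equation. The sketch below uses the Km and Vmax values from the abstract; the substrate concentration chosen is an illustrative assumption, not a condition from the study.

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten initial rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Kinetic constants reported in the abstract (Km in mM, Vmax in umol/min):
FREE = dict(vmax=177.1, km=22.9)
IMMOBILIZED = dict(vmax=131.2, km=61.6)

# At a moderate substrate concentration the immobilized enzyme is slower,
# reflecting its higher apparent Km (weaker apparent affinity) and lower Vmax:
s = 50.0  # mM, illustrative
v_free = michaelis_menten(s, **FREE)
v_immob = michaelis_menten(s, **IMMOBILIZED)
```

The gap narrows at saturating substrate, where the rates approach the two Vmax values; diffusion limits within the gel beads are the usual explanation for the raised apparent Km.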
Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Conners, Timothy R.; Sims, Robert L.
1998-01-01
Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.
High-power single-stage thulium-doped superfluorescent fiber source
NASA Astrophysics Data System (ADS)
Hu, Z. Y.; Yan, P.; Liu, Q.; Ji, E. C.; Xiao, Q. R.; Gong, M. L.
2015-01-01
In this paper, we report a high-power thulium (Tm)-doped superfluorescent fiber source (SFS) in the 2-μm spectral region. The SFS is based on double angle-cleaved facet operation and uses a simple single-stage geometry. The copropagating amplified spontaneous emission (ASE) yields a maximum output of 20.7 W at a center wavelength of 1,960.7 nm, with a full width at half maximum (FWHM) of ~45 nm. The counterpropagating ASE yields a maximum output of 25.2 W at a center wavelength of 1,948.2 nm, with a FWHM of ~50 nm. The maximum combined output of the SFS is as much as 45.9 W, which corresponds to a slope efficiency of 38.9 %. In addition, a model of the ~2 μm SFS in Tm-doped silica fibers pumped at ~790 nm is developed, and the influence of fiber length and end-facet reflectivity on the ASE output performance and the parasitic lasing threshold are studied numerically.
Clinical Phenotypes and Prognostic Full-Field Electroretinographic Findings in Stargardt Disease
ZAHID, SARWAR; JAYASUNDERA, THIRAN; RHOADES, WILLIAM; BRANHAM, KARI; KHAN, NAHEED; NIZIOL, LESLIE M.; MUSCH, DAVID C.; HECKENLIVELY, JOHN R.
2013-01-01
PURPOSE To investigate the relationships between clinical and full-field electroretinographic (ERG) findings and progressive loss of visual function in Stargardt disease. DESIGN Retrospective cohort study. METHODS We performed a retrospective review of data from 198 patients with Stargardt disease. Measures of visual function over time, including visual acuity, quantified Goldmann visual fields, and full-field ERG data were recorded. Data were analyzed using SAS statistical software. Subgroup analyses were performed on 148 patients with ERG phenotypic data, 46 patients with longitudinal visual field data, and 92 patients with identified ABCA4 mutations (46 with 1 mutation, and 47 with 2 or more mutations). RESULTS Of 46 patients with longitudinal visual field data, 8 patients with faster central scotoma progression rates had significantly worse scotopic B-wave amplitudes at their initial assessment than 20 patients with stable scotomata (P = .014) and were more likely to have atrophy beyond the arcades (P = .047). Overall, 47.3% of patients exhibited abnormal ERG results, with rod–cone dysfunction in 14.2% of patients, cone–rod dysfunction in 17.6% of patients, and isolated cone dysfunction in 15.5% of patients. Abnormal values in certain ERG parameters were associated significantly with (maximum-stimulation A- and B-wave amplitudes) or tended toward (photopic and scotopic B-wave amplitudes) a higher mean rate of central scotoma progression compared with those patients with normal ERG values. Scotoma size and ERG parameters differed significantly between those with a single mutation versus those with multiple mutations. CONCLUSIONS Full-field ERG examination provides clinically relevant information regarding the severity of Stargardt disease, likelihood of central scotoma expansion, and visual acuity deterioration. Patients also may exhibit an isolated cone dystrophy on ERG examination. PMID:23219216
Dick Stanley; Bruce Jackson
1995-01-01
The cost-effectiveness of park operations is often neglected because information is laborious to compile. The information, however, is critical if we are to derive maximum benefit from scarce resources. This paper describes an automated system for calculating cost-effectiveness ratios with minimum effort using data from existing data bases.
48 CFR 39.103 - Modular contracting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... increments to take advantage of any evolution in technology or needs that occur during implementation and use... CATEGORIES OF CONTRACTING ACQUISITION OF INFORMATION TECHNOLOGY General 39.103 Modular contracting. (a) This... technology. Consistent with the agency's information technology architecture, agencies should, to the maximum...
International Towing Tank Conference ITTC Symbols and Terminology List. Final Version 1996
1997-05-13
AWA: area of water-plane aft of midship (m²); AWF: area of water-plane forward of midship (m²); AX: area of maximum transverse section (m²); B: beam or … design water line (m); BWL: maximum moulded breadth at design water line (m); BX: breadth, moulded, of maximum section area at design water line (m); d, T: …
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of a mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed-point iterations. The field structure of the data is also taken into account in developing the maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
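The fixed-point maximum likelihood updates for a normal mixture are the familiar EM equations; a minimal 1-D sketch for two Gaussian components on synthetic data (the paper's dependent-feature-tree approximation and field structure are not modelled here):

```python
import math
import random

# Fixed-point (EM) maximum likelihood for a two-component 1-D normal mixture.
# Synthetic data only; illustrates the iteration, not the paper's full method.
def em_gmm_1d(xs, iters=50):
    mu = [min(xs), max(xs)]          # spread the initial means apart
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []                    # E-step: posterior component responsibilities
        for x in xs:
            p = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                 for k in range(2)]
            s = sum(p) or 1e-300
            resp.append([pk / s for pk in p])
        for k in range(2):           # M-step: fixed-point parameter updates
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
            w[k] = nk / len(xs)
    return mu, var, w

random.seed(0)
xs = ([random.gauss(0.0, 1.0) for _ in range(200)]
      + [random.gauss(5.0, 1.0) for _ in range(200)])
mu, var, w = em_gmm_1d(xs)
```

With well-separated synthetic components the iteration recovers means near 0 and 5.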
33 CFR 183.33 - Maximum weight capacity: Inboard and inboard-outdrive boats.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (iv) Weight of full permanent fuel tanks. (3) “Machinery weight” is the combined weight of installed engines or motors, control equipment, drive units, and batteries. [CGD 72-61R, 37 FR 15782, Aug. 4, 1972...
ERIC Educational Resources Information Center
Tauchert, Wolfgang; And Others
1991-01-01
Describes the PADOK-II project in Germany, which was designed to give information on the effects of linguistic algorithms on retrieval in a full-text database, the German Patent Information System (GPI). Relevance assessments are discussed, statistical evaluations are described, and searches are compared for the full-text section versus the…
Shade response of a full size TESSERA module
NASA Astrophysics Data System (ADS)
Slooff, Lenneke H.; Carr, Anna J.; de Groot, Koen; Jansen, Mark J.; Okel, Lars; Jonkman, Rudi; Bakker, Jan; de Gier, Bart; Harthoorn, Adriaan
2017-08-01
A full-size TESSERA shade-tolerant module has been made and was tested under various shadow conditions. The results show that the dedicated electrical interconnection of cells results in an almost linear response under shading. Furthermore, the voltage at the maximum power point is almost independent of the shadow, which reduces the demand on the voltage range of the inverter. The increased shadow linearity results in a calculated increase in annual yield of about 4% for a typical Dutch house.
ERIC Educational Resources Information Center
Ball, Alice Dulany
The National Commission on Libraries and Information Science's (NCLIS) nationwide information program is based in part on the sharing of resources. The United States Book Exchange (USBE) and its existing services may have a role in this program, since the USBE's major function is the preservation and maximum utilization of publications through…
Evaluation of two methods for using MR information in PET reconstruction
NASA Astrophysics Data System (ADS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-02-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated, and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another widely used method for incorporating MR information relies on boundaries obtained by segmentation; this method has also shown improvements in image quality. In this paper, these two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries, and CV values are 10% lower. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, remains to be evaluated.
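The MLEM baseline mentioned above has a compact multiplicative update, x_j ← x_j / s_j · Σ_i A_ij y_i / (Ax)_i; a toy sketch on a hypothetical 2-pixel, 2-detector system with noiseless data (no MR prior):

```python
# MLEM (no anatomical prior) on a toy system matrix; data are noiseless,
# so the iteration converges to the true activity. All values hypothetical.
def mlem(A, y, n_iter=200):
    n_det = len(A)
    n_pix = len(A[0])
    x = [1.0] * n_pix                                      # uniform start image
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_det)]
        ratio = [y[i] / proj[i] if proj[i] > 0.0 else 0.0 for i in range(n_det)]
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(n_det))
             for j in range(n_pix)]
    return x

A = [[1.0, 0.2],
     [0.2, 1.0]]                                            # toy system matrix
true_x = [4.0, 2.0]
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(2)]
x = mlem(A, y)
```

A MAP variant would insert the prior's gradient into this update; only the plain ML iteration is shown.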
MEAN MAXIMUM TEMPERATURE DATA - U.S. HISTORICAL CLIMATOLOGY NETWORK (HCN)
The Carbon Dioxide Information Analysis Center, which includes the World Data Center-A for Atmospheric Trace Gases, is the primary global-change data and information analysis center of the U.S. Department of Energy (DOE). CDIAC's scope includes potentially anything and everything...
Estimating Mutual Information for High-to-Low Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaud, Isaac James; Williams, Brian J.; Weaver, Brian Phillip
Presentation shows that KSG 2 is superior to KSG 1 because it scales locally automatically; KSG estimators are limited to a maximum MI due to sample size; LNC extends the capability of KSG without onerous assumptions; iLNC allows LNC to estimate information gain.
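The KSG estimators themselves require nearest-neighbour searches and digamma functions; as a much simpler point of comparison, a histogram plug-in estimate of mutual information can be sketched as follows (this is a baseline illustration, not the KSG, LNC, or iLNC estimators from the presentation):

```python
import math
from collections import Counter

# Histogram plug-in estimate of mutual information I(X;Y) in nats.
# Deliberately simple baseline -- NOT a nearest-neighbour (KSG-style) estimator.
def plugin_mi(pairs, bins=4):
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    def binned(v, lo, hi):
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
    bx = [binned(x, min(xs), max(xs)) for x in xs]
    by = [binned(y, min(ys), max(ys)) for y in ys]
    n = len(pairs)
    pxy = Counter(zip(bx, by))
    px = Counter(bx)
    py = Counter(by)
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

data_dep = [(i % 4, i % 4) for i in range(400)]          # fully dependent
data_ind = [(i % 4, (i // 4) % 4) for i in range(400)]   # independent
```

For the fully dependent pairs the estimate equals the entropy of X (log 4 nats); for the independent pairs it is zero, illustrating the two extremes any MI estimator must separate.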
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation-maximization (EM) algorithm, and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight into the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
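The pull of an accurate informative prior can be illustrated in the conjugate normal-normal case, a far simpler setting than GMM; all numbers below are hypothetical:

```python
# Conjugate normal-normal updating: a sketch of how an informative prior
# shrinks an estimate toward the prior mean, most strongly when data are weak.
# All values are hypothetical; this is not the article's GMM setup.
def posterior_mean(data_mean, n, data_var, prior_mean, prior_var):
    """Posterior mean for a normal mean with known data variance."""
    precision = n / data_var + 1.0 / prior_var
    return (n / data_var * data_mean + prior_mean / prior_var) / precision

# weak data (n = 5) with a noisy sample mean of 6.0; the "true" mean is taken
# to be 5.0 and an "accurate" informative prior is centred there
post = posterior_mean(6.0, 5, 4.0, 5.0, 0.5)
```

The accurate prior moves the estimate from 6.0 toward the assumed true value of 5.0, the same qualitative effect the article reports for accurate informative priors in GMM.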
Farzandipour, Mehrdad; Meidani, Zahra; Riazi, Hossein; Sadeqi Jabali, Monireh
2016-12-01
Considering the integral role of understanding users' requirements in information system success, this research aimed to determine the functional requirements of nursing information systems through a national survey. The Delphi technique was applied in three phases: a focus group, a modified Delphi round, and a classic Delphi round. A cross-sectional study was then conducted to evaluate the proposed requirements within 15 general hospitals in Iran. Forty-three of the 76 approved requirements were clinical, and 33 were administrative. Nurses' mean agreement with clinical requirements was higher than with administrative requirements; the minimum and maximum means for clinical requirements were 3.3 and 3.88, respectively, and for administrative requirements 3.1 and 3.47. The findings indicated that information system requirements supporting nurses' tasks, including direct care, medicine prescription, patient treatment management, and patient safety, received special attention. As nurses' requirements deal directly with patient outcomes and patient safety, nursing information system requirements should address not only automation but also nurses' tasks and work processes based on work analysis.
Competition between Homophily and Information Entropy Maximization in Social Networks
Zhao, Jichang; Liang, Xiao; Xu, Ke
2015-01-01
In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which can be stated as homophily breeding new connections. More recently, the hypothesis of maximum information entropy has been proposed as a possible origin of effective navigation in small-world networks. Through both theoretical and experimental analysis, we find that a competition exists between information entropy maximization and homophily in local structure. This competition suggests that a newly built relationship between two individuals with more common friends would lead to less information entropy gain for them. We demonstrate that both assumptions coexist in the evolution of the social network: the rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally and provides individuals with strong, trusted ties. A toy model is also presented to demonstrate the competition and to evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994
Weight loss and jaundice in healthy term newborns in partial and full rooming-in.
Zuppa, Antonio Alberto; Sindico, Paola; Antichi, Eleonora; Carducci, Chiara; Alighieri, Giovanni; Cardiello, Valentina; Cota, Francesco; Romagnoli, Costantino
2009-09-01
An inadequate start of breastfeeding has been associated with reduced caloric intake, excessive weight loss, and high serum bilirubin levels in the first days of life. Rooming-in has been proposed as an optimal model for the promotion of breastfeeding. The aim of this study was to compare two feeding models (partial and full rooming-in) with respect to weight loss, hyperbilirubinemia, and the prevalence of exclusive breastfeeding at discharge. A total of 903 healthy term newborns were evaluated; all were adequate for gestational age, with birth weight ≥2800 g and gestational age ≥37 weeks. The maximum weight loss (mean ± SD), expressed as a percentage of birth weight, did not differ between the two models (partial vs. full rooming-in: 5.8% ± 1.7% vs. 6% ± 1.7%). A weight loss ≥10% occurred in less than 1% of both groups. There were no statistical differences in mean total serum bilirubin (partial vs. full rooming-in: 10.5 ± 3.3 vs. 10.1 ± 2.9 mg/dl) or in the prevalence of hyperbilirubinemia (total serum bilirubin ≥12 mg/dl). The prevalence of severe hyperbilirubinemia (total serum bilirubin ≥18 mg/dl) and the use of phototherapy were also not statistically different. Maximum weight loss was similar in the two models, even when stratified by total serum bilirubin level. At discharge, 81% of newborns in full rooming-in were exclusively breastfed versus 42.9% in partial rooming-in. In conclusion, the two care models are comparable with respect to severe hyperbilirubinemia and pathological weight loss in healthy term newborns, although full rooming-in is associated with a higher prevalence of exclusive breastfeeding at discharge.
Variability of Arctic Sea Ice as Viewed from Space
NASA Technical Reports Server (NTRS)
Parkinson, Claire L.
1998-01-01
Over the past 20 years, satellite passive-microwave radiometry has provided a marvelous means for obtaining information about the variability of the Arctic sea ice cover, particularly about sea ice concentrations (% areal coverages) and, from them, ice extents and the lengths of the sea ice season. This ability derives from the sharp contrast between the microwave emissions of sea ice and liquid water and allows routine monitoring of the vast Arctic sea ice cover, which typically varies in extent from a minimum of about 8,000,000 sq km in September to a maximum of about 15,000,000 sq km in March, the latter value being over 1.5 times the area of either the United States or Canada. The vast Arctic ice cover has many impacts, including hindering heat, mass, and momentum exchanges between the oceans and the atmosphere, reducing the amount of solar radiation absorbed at the Earth's surface, affecting freshwater transports and ocean circulation, and serving as a vital surface for many species of polar animals. These direct impacts also lead to indirect impacts, including effects on local and perhaps global atmospheric temperatures, effects that are being examined in general circulation modeling studies, where preliminary results indicate that changes on the order of a few percent sea ice concentration can lead to temperature changes of 1 K or greater even in local areas outside of the sea ice region. Satellite passive-microwave data for November 1978 through December 1996 reveal marked regional and interannual variabilities in both the ice extents and the lengths of the sea ice season, as well as some statistically significant trends. For the north polar ice cover as a whole, maximum ice extents varied over a range of 14,700,000 - 15,900,000 km(2), while individual regions showed much greater percentage variations, e.g., with the Greenland Sea experiencing a range of 740,000 - 1,110,000 km(2) in its yearly maximum ice coverage.
Although variations from year to year and region to region are large, overall the Arctic ice extents showed a statistically significant negative trend of 2.8%/decade over the 18.2-year period. Ice season lengths, which vary from only a few weeks near the ice margins to the full year in the large region of perennial ice coverage, also experienced interannual variability, and mapping their trends provides detailed geographic information on exactly where the ice season lengthened and where it shortened. Over the 18 years, ice season lengthening occurred predominantly in the western hemisphere and was strongest in the western Labrador Sea, while ice season shortening occurred predominantly in the eastern hemisphere and was strongest in the eastern Barents Sea. Much information about other important Arctic sea ice variables has also been obtained from satellite data, including information about melt ponding, temperature, snow cover, and ice velocities. For instance, maps of ice velocities have now been made from satellite scatterometry data.
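A percent-per-decade trend of the kind quoted above is an ordinary least-squares slope normalized by the mean extent; a sketch on synthetic yearly maxima (not the actual satellite values):

```python
# Least-squares trend expressed as percent of the mean per decade.
# The yearly maximum extents below are synthetic, not satellite data.
def trend_per_decade(years, extents):
    n = len(years)
    my = sum(years) / n
    me = sum(extents) / n
    slope = (sum((y - my) * (e - me) for y, e in zip(years, extents))
             / sum((y - my) ** 2 for y in years))   # km^2 per year
    return 100.0 * slope * 10.0 / me                # percent per decade

years = list(range(1979, 1997))                     # 18 hypothetical seasons
extents = [15.0e6 - 42_000.0 * (y - 1979) for y in years]
t = trend_per_decade(years, extents)
```

The synthetic series loses 42,000 km² per year, which works out to roughly -2.9% of the mean extent per decade.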
Predicting protein β-sheet contacts using a maximum entropy-based correlated mutation measure.
Burkoff, Nikolas S; Várnai, Csilla; Wild, David L
2013-03-01
The problem of ab initio protein folding is one of the most difficult in modern computational biology. The prediction of residue contacts within a protein provides a more tractable immediate step. Recently introduced maximum entropy-based correlated mutation measures (CMMs), such as direct information, have been successful in predicting residue contacts. However, most correlated mutation studies focus on proteins that have large, good-quality multiple sequence alignments (MSAs), because the power of correlated mutation analysis falls as the size of the MSA decreases. Even with small autogenerated MSAs, however, maximum entropy-based CMMs contain information. To make use of this information, in this article we focus not on general residue contacts but on contacts between residues in β-sheets. The strong constraints and prior knowledge associated with β-contacts are ideally suited for prediction using a method that incorporates an often noisy CMM. Using contrastive divergence, a statistical machine learning technique, we have calculated a maximum entropy-based CMM. We have integrated this measure with a new probabilistic model for β-contact prediction, which is used to predict both residue- and strand-level contacts. Using our model on a standard non-redundant dataset, we significantly outperform a 2D recurrent neural network architecture, achieving a 5% improvement in true positives at the 5% false-positive rate at the residue level. At the strand level, our approach is competitive with the state-of-the-art single methods, achieving a precision of 61.0% and a recall of 55.4%, while not requiring residue solvent accessibility as an input. http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/
Code of Federal Regulations, 2010 CFR
2010-10-01
... HUMANITIES INSTITUTE OF MUSEUM AND LIBRARY SERVICES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... in a manner providing full and open competition consistent with the standards of § 1183.36. Some of... ensure maximum open and free competition. Also, grantees and subgrantees will not preclude potential...
Code of Federal Regulations, 2011 CFR
2011-10-01
... HUMANITIES INSTITUTE OF MUSEUM AND LIBRARY SERVICES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... in a manner providing full and open competition consistent with the standards of § 1183.36. Some of... ensure maximum open and free competition. Also, grantees and subgrantees will not preclude potential...
The maximum efficiency of nano heat engines depends on more than temperature
NASA Astrophysics Data System (ADS)
Woods, Mischa; Ng, Nelly; Wehner, Stephanie
Sadi Carnot's theorem regarding the maximum efficiency of heat engines is considered to be of fundamental importance in the theory of heat engines and thermodynamics. Here, we show that at the nano and quantum scale this law needs to be revised, in the sense that more information about the bath than its temperature is required to decide whether maximum efficiency can be achieved. In particular, we derive new fundamental limitations on the efficiency of heat engines at the nano and quantum scale, showing that the Carnot efficiency can only be achieved under special circumstances, and we derive a new maximum efficiency for the others. A preprint is available at arXiv:1506.02322 [quant-ph]. Funding: Singapore's MOE Tier 3A Grant and STW, Netherlands.
NASA Astrophysics Data System (ADS)
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-01
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.
Yeadon, Maurice R; King, Mark A; Wilson, Cassie
2006-01-01
The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions, muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 degrees s(-1). The theoretical tetanic torque/angular velocity relationship was modelled using a four-parameter function comprising two rectangular hyperbolas, while the activation/angular velocity relationship was modelled using a three-parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven-parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data at low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and the low-velocity concentric phase.
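The product structure of the model can be sketched as follows; the functional forms and every parameter value here are illustrative placeholders, not the fitted four- and three-parameter functions from the study:

```python
import math

# Illustrative product model: voluntary torque = tetanic torque * activation.
# All functional forms and parameter values are hypothetical placeholders.
def activation(w, a_min=0.7, w0=0.0, k=0.02):
    """Rises from a_min (eccentric, w < 0) toward full activation (fast concentric)."""
    return a_min + (1.0 - a_min) / (1.0 + math.exp(-k * (w - w0)))

def tetanic_torque(w, T0=300.0, wmax=1000.0, c=0.25):
    """Hill-type concentric hyperbola falling to zero at wmax; eccentric
    plateau rising above the isometric torque T0 (continuous at w = 0)."""
    if w >= 0.0:
        return T0 * c * (wmax - w) / (wmax * c + w)
    return T0 * (1.0 + 0.5 * (1.0 - math.exp(w / 100.0)))

def voluntary_torque(w):
    """Product of the two functions, as in the study's seven-parameter fit."""
    return tetanic_torque(w) * activation(w)
```

Because activation stays below 1 for eccentric velocities, the voluntary torque sits below the tetanic curve there, which is the "differential activation" effect the model captures.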
Maximum Temperature Detection System for Integrated Circuits
NASA Astrophysics Data System (ADS)
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system that detects the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
MEM application to IRAS CPC images
NASA Technical Reports Server (NTRS)
Marston, A. P.
1994-01-01
A method for applying the Maximum Entropy Method (MEM) to Chopped Photometric Channel (CPC) IRAS additional observations is illustrated. The original CPC data suffered from problems with repeatability, which MEM is able to cope with by using a noise image produced from the results of separate data scans of objects. The process produces images of small areas of sky with circular Gaussian beams of approximately 30 in. full width at half maximum resolution at 50 and 100 microns. Comparison is made to previous reconstructions made in the far-infrared, as well as to the morphologies of objects at other wavelengths. Some projects with this dataset are discussed.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Privacy Program § 701.120 Processing requests that cite or imply PA, Freedom of Information (FOIA), or PA... maximum release of information allowed under the Acts. (d) Processing time limits. DON activities shall... 32 National Defense 5 2010-07-01 2010-07-01 false Processing requests that cite or imply PA...
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value both to existing data sets and to those produced by future astrophysics missions consisting of two or more detectors: it allows instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectral information when the multiple data sets are used in concert, as compared with the errors when the data sets are considered separately. Biases resulting from poor statistics in one or more of the individual data sets may also be reduced when the data sets are combined.
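The variance-reduction benefit of combining data sets appears already in the simplest case: a common Poisson source rate observed by two instruments with different exposures. A hedged sketch with hypothetical counts (not the paper's multi-instrument spectral formalism):

```python
import math

# Joint Poisson maximum likelihood for a common source rate observed by
# several instruments with different exposures. Counts and exposures are
# hypothetical; this is the simplest instance of combining data sets.
def ml_rate(counts, exposures):
    """ML rate estimate and its Fisher-information standard error."""
    lam = sum(counts) / sum(exposures)
    se = math.sqrt(lam / sum(exposures))
    return lam, se

lam_a, se_a = ml_rate([48], [10.0])                      # instrument A alone
lam_b, se_b = ml_rate([105], [20.0])                     # instrument B alone
lam_joint, se_joint = ml_rate([48, 105], [10.0, 20.0])   # combined ML estimate
```

The combined standard error is smaller than either single-instrument error, which is exactly the kind of statistical-error reduction the paper quantifies for spectral parameters.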
Sander, Edward A; Lynch, Kaari A; Boyce, Steven T
2014-05-01
Engineered skin substitutes (ESSs) have been reported to close full-thickness burn wounds but are subject to loss from mechanical shear due to their deficiencies in tensile strength and elasticity. Hypothetically, if the mechanical properties of ESS matched those of native skin, losses due to shear or fracture could be reduced. To consider modifications of the composition of ESS to improve homology with native skin, biomechanical analyses of the current composition of ESS were performed. ESSs consist of a degradable biopolymer scaffold of type I collagen and chondroitin-sulfate (CGS) that is populated sequentially with cultured human dermal fibroblasts (hF) and epidermal keratinocytes (hK). In the current study, the hydrated biopolymer scaffold (CGS), the scaffold populated with hF dermal skin substitute (DSS), or the complete ESS were evaluated mechanically for linear stiffness (N/mm), ultimate tensile load at failure (N), maximum extension at failure (mm), and energy absorbed up to the point of failure (N-mm). These biomechanical end points were also used to evaluate ESS at six weeks after grafting to full-thickness skin wounds in athymic mice and compared to murine autograft or excised murine skin. The data showed statistically significant differences (p <0.05) between ESS in vitro and after grafting for all four structural properties. Grafted ESS differed statistically from murine autograft with respect to maximum extension at failure, and from intact murine skin with respect to linear stiffness and maximum extension. These results demonstrate rapid changes in mechanical properties of ESS after grafting that are comparable to murine autograft. These values provide instruction for improvement of the biomechanical properties of ESS in vitro that may reduce clinical morbidity from graft loss.
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is built on a probabilistic Bayesian framework that naturally accommodates prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate the basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent with maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
NASA Technical Reports Server (NTRS)
Robinson, Jeffrey S.; Wurster, Kathryn E.
2006-01-01
Recently, NASA's Exploration Systems Research and Technology Project funded several tasks that endeavored to develop and evaluate various thermal protection systems and high temperature material concepts for potential use on the crew exploration vehicle. In support of these tasks, NASA Langley's Vehicle Analysis Branch generated trajectory information and associated aeroheating environments for more than 60 unique entry cases. Using the Apollo Command Module as the baseline entry system because of its relevance to the favored crew exploration vehicle design, trajectories for a range of lunar and Mars return, direct and aerocapture Earth-entry scenarios were developed. For direct entry, a matrix of cases was created that reflects reasonably expected minimum and maximum values of vehicle ballistic coefficient, inertial velocity at entry interface, and inertial flight path angle at entry interface. For aerocapture, trajectories were generated for a range of values of initial velocity and ballistic coefficient that, when combined with proper initial flight path angles, resulted in achieving a low Earth orbit either by employing a full lift vector up or full lift vector down attitude. For each trajectory generated, aeroheating environments were generated which were intended to bound the thermal protection system requirements for likely crew exploration vehicle concepts. The trades examined clearly pointed to a range of missions / concepts that will require ablative systems as well as a range for which reusable systems may be feasible. In addition, the results clearly indicated those entry conditions and modes suitable for manned flight, considering vehicle deceleration levels experienced during entry. This paper presents an overview of the analysis performed, including the assumptions, methods, and general approach used, as well as a summary of the trajectory and aerothermal environment information that was generated.
78 FR 78985 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
...) will publish periodic summaries of proposed projects. To request more information on the proposed... the effects and accomplishments of SAMHSA programs. The following table is an estimated annual... transaction \\1\\ This table represents the maximum additional burden if adult respondents for ATR provide...
12 CFR 1282.15 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
...). (3) Missing data or information. When an Enterprise lacks sufficient data or information to determine.... Mortgage purchases with missing data in excess of the maximum will be included in the denominator and... missing borrower incomes (as determined by FHFA based on the most recent Home Mortgage Disclosure Act data...
NASA Astrophysics Data System (ADS)
Ambekar Ramachandra Rao, Raghu; Mehta, Monal R.; Toussaint, Kimani C., Jr.
2010-02-01
We demonstrate the use of Fourier transform-second-harmonic generation (FT-SHG) imaging of collagen fibers as a means of performing quantitative analysis of images of selected spatial regions in porcine trachea, ear, and cornea. Two quantitative markers, preferred orientation and maximum spatial frequency, are proposed for differentiating structural information between various spatial regions of interest in the specimens. The ear shows consistent maximum spatial frequency and orientation, as also observed in its real-space image. However, there are observable changes in the orientation and minimum feature size of fibers in the trachea, indicating a more random organization. Finally, the analysis is applied to a 3D image stack of the cornea. It is shown that the standard deviation of the orientation is sensitive to the randomness in fiber orientation. Regions with variations in the maximum spatial frequency, but with relatively constant orientation, suggest that maximum spatial frequency is useful as an independent quantitative marker. We emphasize that FT-SHG is a simple, yet powerful, tool for extracting information from images that is not obvious in real space. This technique can be used as a quantitative biomarker to assess the structure of collagen fibers that may change due to damage from disease or physical injury.
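The two markers can be illustrated with a short numpy sketch (a hypothetical implementation; the paper's exact windowing and filtering steps are not reproduced here): preferred orientation is taken as the power-weighted doubled-angle mean of the 2D Fourier spectrum, and maximum spatial frequency as the radius enclosing a chosen fraction of the spectral power.

```python
import numpy as np

def fourier_orientation_metrics(img, power_frac=0.99):
    """Preferred orientation (degrees) and maximum spatial frequency
    (cycles/pixel) from the 2D Fourier power spectrum of a grayscale image."""
    spec = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(spec) ** 2
    ny, nx = img.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    # Doubled-angle averaging handles the 180-degree ambiguity of orientations.
    theta = np.arctan2(fy, fx)
    c = np.sum(power * np.cos(2 * theta))
    s = np.sum(power * np.sin(2 * theta))
    pref_deg = 0.5 * np.degrees(np.arctan2(s, c))
    # Maximum spatial frequency: radius enclosing power_frac of total power.
    r = np.hypot(fx, fy).ravel()
    order = np.argsort(r)
    cum = np.cumsum(power.ravel()[order])
    f_max = r[order][np.searchsorted(cum, power_frac * cum[-1])]
    return pref_deg, f_max

# Synthetic "fibers": a sinusoid oriented at 45 degrees, 10 cycles per 64 px axis
y, x = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * (10 * x + 10 * y) / 64)
ang, fmax = fourier_orientation_metrics(img)  # ang near 45, fmax near 0.22
```

The standard deviation of orientation over sub-regions, used in the paper as a randomness measure, would follow by applying this per tile.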
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorne, N; Kassaee, A
Purpose: To develop an algorithm which can calculate the Full Width at Half Maximum (FWHM) of a proton pencil beam from a 2D ion chamber array (IBA Matrixx) with limited spatial resolution (7.6 mm inter-chamber distance). The algorithm would allow beam FWHM measurements to be taken during daily QA without an appreciable time increase. Methods: Combinations of 147 MeV single-spot beams were delivered onto an IBA Matrixx and concurrently onto EBT3 films as a standard. Data were collected around the Bragg peak region and evaluated by a custom MATLAB script based on our algorithm using a least-squares analysis. A set of artificial data, modified with random noise, was also processed to test for robustness. Results: The MATLAB-processed Matrixx data show acceptable agreement (within 5%) with film measurements, with no single measurement differing by more than 1.8 mm. In cases where the spots show some degree of asymmetry, the algorithm is able to resolve the differences. The algorithm was able to process artificial data with noise up to 15% of the maximum value. Time assays of each measurement took less than 3 minutes, indicating that such measurements may be efficiently added to daily QA. Conclusion: The developed algorithm can be implemented in a daily QA program for proton pencil beam scanning (PBS) with the Matrixx to extract spot size and position information, and may be extended to small field sizes in the photon clinic.
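The core idea of recovering a sub-resolution FWHM from coarse samples can be sketched as follows (a simplified stand-in for the authors' least-squares algorithm, whose details are not given in the abstract): for a Gaussian spot, log-intensity is quadratic in position, so a least-squares parabola fit to the log of the sparse samples yields sigma and hence FWHM = 2√(2 ln 2)·σ ≈ 2.355σ.

```python
import numpy as np

def gaussian_fwhm_from_coarse(x, y):
    """Estimate the FWHM of a Gaussian profile from a few coarse samples by
    least-squares fitting a parabola to log(y) (exact for noise-free data)."""
    mask = y > 0
    a, b, c = np.polyfit(x[mask], np.log(y[mask]), 2)  # a*x^2 + b*x + c
    sigma = np.sqrt(-1.0 / (2.0 * a))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# Hypothetical spot (sigma = 5 mm) sampled every 7.6 mm, as on the Matrixx grid
x = np.arange(-3, 4) * 7.6
y = np.exp(-x**2 / (2 * 5.0**2))
fwhm = gaussian_fwhm_from_coarse(x, y)  # about 11.77 mm = 2.355 * 5 mm
```

A real implementation would weight the fit and handle asymmetric spots, e.g., by fitting each side separately.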
Earthquake triggering, Earth's rotation variations, Meton's cycle and torques acting on the Earth.
NASA Astrophysics Data System (ADS)
Ostrihansky, L.
2012-04-01
In contrast to the unsuccessful search (lasting over 150 years) for a correlation of earthquakes with biweekly tides, the author found a correlation of earthquakes with the sidereal 13.66-day Earth rotation variations expressed as length of day (LOD), measured daily by the International Earth Rotation Service. After brief mention of the Denali Fault, Alaska earthquake of 3rd November 2002, M 7.9, triggered on a LOD maximum, and the Great Sumatra earthquake of 26th December 2004, triggered on a LOD minimum and the full Moon, the main subjects of this paper are the earthquakes of the period 2010-VI.2011: Haiti M 7.0 Jan. 12, 2010 on a LOD minimum; Maule, Chile M 8.8 Feb. 12, 2010 on a LOD maximum; six of seven earthquakes in the Sumatra and Andaman Sea region on LOD minima; New Zealand, Christchurch M 7.1 Sep. 9, 2010 on a LOD minimum and Christchurch M 6.3 Feb. 21, 2011 on a LOD maximum; and Japan, near the coast of Honshu, M 9.1 March 11, 2011 on a LOD minimum. I found that LOD minima coincide with the full or new Moon only twice a year, at the solstices, and LOD maxima likewise twice a year, at the equinoxes. To show that the coincidences of earthquakes and LOD extremes stated above are not accidental, histograms were constructed of earthquake occurrence and position on the LOD graph deep into the past, in some cases back to 1962, when the IERS started to measure the Earth's rotation variations. Evaluation of the histograms and Schuster's test showed that earthquake maxima occur during both Earth rotation deceleration and acceleration. A backward review of past earthquakes revealed that the Great Sumatra earthquake of Dec. 26, 2004 had an equivalent, in the shape of the LOD graph, full Moon position, and character of aftershocks, 19 years earlier (differing by only one day): the Dec. 27, 1985 M 6.6 event, indicating that not only the sidereal 13.66-day variations but also the 19-year Meton's cycle is a period of earthquake occurrence.
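The Schuster test invoked above evaluates whether event phases (here, earthquake times mapped onto the 13.66-day LOD cycle) cluster nonrandomly: for N uniform phases, the probability that the observed resultant R arises by chance is approximately exp(-R²/N). A minimal sketch with synthetic phases:

```python
import numpy as np

def schuster_test(phases):
    """Schuster's test for clustering of event phases (radians).
    Returns the probability that the observed resultant arises by chance."""
    n = len(phases)
    r2 = np.sum(np.cos(phases)) ** 2 + np.sum(np.sin(phases)) ** 2
    return np.exp(-r2 / n)

rng = np.random.default_rng(0)
# Uniform phases: no clustering, p should not be small.
p_uniform = schuster_test(rng.uniform(0, 2 * np.pi, 500))
# Phases clustered near zero (e.g., events locked to LOD extremes): tiny p.
p_clustered = schuster_test(rng.normal(0, 0.3, 500) % (2 * np.pi))
```

Small p rejects the null hypothesis of phase-independent earthquake occurrence.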
NASA Astrophysics Data System (ADS)
Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz
2018-04-01
Probabilistic properties of dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. The circular measures of location and dispersion were used in the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the maximum likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed for seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments on the basis of similarity between circular distribution functions and the linkage between dates of maximum precipitation and maximum flow.
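The basic circular measures of location and dispersion used in such studies can be illustrated with a minimal numpy sketch (the von Mises mixture fitting and AICc selection are beyond this fragment): dates are mapped to angles on the annual circle and summarized by the circular mean date and the mean resultant length R, where R near 1 indicates tightly clustered dates of maxima.

```python
import numpy as np

def circular_stats(day_of_year, period=365.25):
    """Circular mean date and mean resultant length R for dates of maxima."""
    theta = 2 * np.pi * np.asarray(day_of_year) / period
    C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    R = np.hypot(C, S)  # mean resultant length, 0 (uniform) .. 1 (identical)
    mean_day = (np.degrees(np.arctan2(S, C)) % 360) / 360 * period
    return mean_day, R

# Hypothetical spring-flood dates clustered around day 100
days = [92, 98, 101, 105, 110, 95, 103]
mean_day, r_len = circular_stats(days)
```

Unlike the arithmetic mean, the circular mean handles maxima that straddle the year boundary (late December / early January) correctly.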
Variability of maximum systolic amplitude of ΔZ/Δt curve in pregnancy. Perennial observations
NASA Astrophysics Data System (ADS)
Ilyin, I.; Karpov, A.; Korotkova, M.
2010-04-01
Maximum systolic amplitude is an important component of the impedance cardiogram ΔZ/Δt curve; its values make it possible to calculate many hemodynamic indices. It is therefore necessary to track the monthly, annual and perennial trends of the maximum systolic amplitude. We present measurements of the maximum systolic amplitude over a fifteen-year period (1994 to 2009). The impedance cardiograms were obtained with an electric impedance analyzer "RA-5" (1 mA, 70 kHz) with disk ECG electrodes. The data analyzed were taken from pregnant women with non-complicated pregnancies (n=5709). Analysis of the average monthly and annual changes of the maximum systolic amplitude of the ΔZ/Δt curve revealed a six-year periodicity of the amplitude changes, with statistically significant differences between peak values (p<0.001). The data obtained should be taken into consideration when using impedance cardiography in clinical practice. The article is supplied with tables and diagrams.
A likelihood method for measuring the ultrahigh energy cosmic ray composition
NASA Astrophysics Data System (ADS)
High Resolution Fly'S Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Belov, K.; Blake, S. A.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2006-08-01
Air fluorescence detectors traditionally determine the dominant chemical composition of the ultrahigh energy cosmic ray flux by comparing the averaged slant depth of the shower maximum, Xmax, as a function of energy to the slant depths expected for various hypothesized primaries. In this paper, we present a method to make a direct measurement of the expected mean number of protons and iron by comparing the shapes of the expected Xmax distributions to the distribution for data. The advantages of this method include the use of the information in the full distribution and the ability to calculate a flux for various cosmic ray compositions. The same method can be expanded to marginalize uncertainties due to the choice of spectra, hadronic models and atmospheric parameters. We demonstrate the technique with independent simulated data samples from a parent sample of protons and iron. We accurately predict the number of protons and iron in the parent sample and show that the uncertainties are meaningful.
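The idea of fitting the full Xmax distribution shape, rather than only its mean, can be sketched with a toy two-template likelihood scan (Gaussian stand-ins for the simulated proton and iron Xmax templates; the real analysis uses air-shower simulations and marginalizes systematic uncertainties):

```python
import numpy as np

def fit_proton_fraction(data, p_template, fe_template, bins):
    """Scan the proton fraction f, maximizing the multinomial log-likelihood
    of the observed Xmax histogram against a mixture of two template shapes."""
    counts, _ = np.histogram(data, bins=bins)
    hp, _ = np.histogram(p_template, bins=bins, density=True)
    hf, _ = np.histogram(fe_template, bins=bins, density=True)
    widths = np.diff(bins)
    fs = np.linspace(0.01, 0.99, 99)
    def loglike(f):
        prob = np.clip((f * hp + (1 - f) * hf) * widths, 1e-12, None)
        return np.sum(counts * np.log(prob))
    return fs[np.argmax([loglike(f) for f in fs])]

rng = np.random.default_rng(1)
p_mc = rng.normal(750, 60, 20000)    # toy proton Xmax template (g/cm^2)
fe_mc = rng.normal(650, 40, 20000)   # toy iron Xmax template
data = np.concatenate([rng.normal(750, 60, 700), rng.normal(650, 40, 300)])
bins = np.linspace(450, 1000, 56)
f_hat = fit_proton_fraction(data, p_mc, fe_mc, bins)  # near the true 0.7
```

The curvature of the log-likelihood around the maximum would give the meaningful uncertainty the paper refers to.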
NASA Astrophysics Data System (ADS)
Negahdar, Mohammadreza; Zacarias, Albert; Milam, Rebecca A.; Dunlap, Neal; Woo, Shiao Y.; Amini, Amir A.
2012-03-01
Treatment plan evaluation for lung cancer patients involves pre-treatment and post-treatment volume CT imaging of the lung. However, irradiation of the tumor volume causes structural changes to the lung during the course of treatment. To register the pre-treatment volume to the post-treatment volume, robust, homologous features unaffected by the radiation treatment are needed, along with a smooth deformation field. Since airways are well distributed throughout the lung, in this paper we propose the use of airway tree bifurcations for registration of the pre-treatment volume to the post-treatment volume. A dedicated, automated algorithm was developed that finds corresponding airway bifurcations in both images. To derive the 3-D deformation field, a B-spline transformation model guided by a mutual information similarity metric was used to guarantee the smoothness of the transformation while incorporating global information from the bifurcation points. The approach thus combines global statistical intensity information with local image feature information. Since the lung undergoes large nonlinear deformations during normal breathing, the proposed method should also be applicable to large-deformation registration between maximum-inhale and maximum-exhale images of the same subject. The method was evaluated by registering 3-D CT volumes at maximum exhale to all the other temporal volumes in the POPI-model data.
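The mutual information similarity metric that guides such B-spline registration can be computed from a joint intensity histogram. A generic sketch (the paper's exact binning, interpolation, and optimizer are not specified here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) between two images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
mi_self = mutual_information(img, img)    # high: intensities fully predictive
mi_rand = mutual_information(img, noise)  # near zero: unrelated images
```

A registration loop would adjust the B-spline control points to maximize this quantity, optionally adding a penalty term on bifurcation-point distances.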
Electro-optical full-adder/full-subtractor based on graphene-silicon switches
NASA Astrophysics Data System (ADS)
Zivarian, Hossein; Zarifkar, Abbas; Miri, Mehdi
2018-01-01
A compact-footprint, low-power-consumption, and high-speed electro-optical full-adder/full-subtractor based on graphene-silicon electro-optical switches is demonstrated. Each switch consists of a Mach-Zehnder interferometer in which few-layer graphene is embedded in a silicon slot waveguide to construct phase shifters. The presented structure can be used as a full-adder and full-subtractor simultaneously. An analysis of various factors such as extinction ratio, power consumption, and operation speed is presented. As will be shown, the proposed electro-optical switch has a minimum extinction ratio of 36.21 dB, a maximum insertion loss of about 0.18 dB, a high operation speed of 180 GHz, and is able to work with a low applied voltage of about 1.4 V. Also, the extinction ratio and insertion loss of the full-adder/full-subtractor are about 30 and 1.5 dB, respectively, for transverse electric modes at the telecommunication wavelength of 1.55 μm.
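For reference, the Boolean functions that any full-adder/full-subtractor must realize (optically here, electronically in general) are standard; the sketch below states them and verifies both against integer arithmetic:

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def full_subtractor(a, b, bin_):
    """One-bit full subtractor for a - b - bin_: returns (difference, borrow_out)."""
    d = a ^ b ^ bin_
    bout = ((1 - a) & b) | ((1 - (a ^ b)) & bin_)
    return d, bout

# Exhaustive check of both truth tables against integer arithmetic
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert a + b + c == s + 2 * cout
            d, bout = full_subtractor(a, b, c)
            assert a - b - c == d - 2 * bout
```

The demonstrated device computes both sets of outputs simultaneously because sum and difference share the same XOR term, differing only in the carry/borrow logic.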
Griebeler, Eva Maria; Klein, Nicole; Sander, P. Martin
2013-01-01
Information on aging, maturation, and growth is important for understanding life histories of organisms. In extinct dinosaurs, such information can be derived from the histological growth record preserved in the mid-shaft cortex of long bones. Here, we construct growth models to estimate ages at death, ages at sexual maturity, ages at which individuals were fully-grown, and maximum growth rates from the growth record preserved in long bones of six sauropod dinosaur individuals (one indeterminate mamenchisaurid, two Apatosaurus sp., two indeterminate diplodocids, and one Camarasaurus sp.) and one basal sauropodomorph dinosaur individual (Plateosaurus engelhardti). Using these estimates, we establish allometries between body mass and each of these traits and compare these to extant taxa. Growth models considered for each dinosaur individual were the von Bertalanffy model, the Gompertz model, and the logistic model (LGM), all of which have inherently fixed inflection points, and the Chapman-Richards model in which the point is not fixed. We use the arithmetic mean of the age at the inflection point and of the age at which 90% of asymptotic mass is reached to assess respectively the age at sexual maturity or the age at onset of reproduction, because unambiguous indicators of maturity in Sauropodomorpha are lacking. According to an AIC-based model selection process, the LGM was the best model for our sauropodomorph sample. Allometries established are consistent with literature data on other Sauropodomorpha. All Sauropodomorpha reached full size within a time span similar to scaled-up modern mammalian megaherbivores and had similar maximum growth rates to scaled-up modern megaherbivores and ratites, but growth rates of Sauropodomorpha were lower than of an average mammal. Sauropodomorph ages at death probably were lower than that of average scaled-up ratites and megaherbivores. 
Sauropodomorpha were older at maturation than scaled-up ratites and average mammals, but younger than scaled-up megaherbivores. PMID:23840575
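The fixed-inflection growth models compared in the study and the AIC-based selection can be sketched with a toy example (synthetic mass-at-age data and a coarse grid-search fit; the study fits real histological growth records and also considers the von Bertalanffy and Chapman-Richards models, omitted here for brevity):

```python
import numpy as np

def logistic(t, A, k, t0):   # LGM: inflection fixed at A/2
    return A / (1 + np.exp(-k * (t - t0)))

def gompertz(t, A, k, t0):   # inflection fixed at A/e
    return A * np.exp(-np.exp(-k * (t - t0)))

def fit_aic(model, t, m, A):
    """Coarse grid-search least-squares fit (k, t0 free; asymptote A fixed).
    Returns (AIC, best (k, t0))."""
    best = (np.inf, None)
    for k in np.linspace(0.05, 1.0, 96):
        for t0 in np.linspace(t.min(), t.max(), 93):
            rss = np.sum((m - model(t, A, k, t0)) ** 2)
            if rss < best[0]:
                best = (rss, (k, t0))
    rss, params = best
    n, p = len(t), 2
    return n * np.log(rss / n) + 2 * p, params

t = np.arange(2.0, 26.0)                      # ages from growth marks (years)
rng = np.random.default_rng(3)
m = logistic(t, 15000.0, 0.45, 12.0) + rng.normal(0, 50.0, t.size)
aic_lgm, (k_hat, t0_hat) = fit_aic(logistic, t, m, A=15000.0)
aic_gom, _ = fit_aic(gompertz, t, m, A=15000.0)  # higher AIC on logistic data
```

With logistic-generated data the LGM attains the lower AIC, mirroring how the study selected the LGM for its sauropodomorph sample; the inflection age t0 is the quantity averaged with the 90%-of-asymptote age to bracket sexual maturity.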
NASA Astrophysics Data System (ADS)
Adewole, E. O.; Healy, D.
2017-03-01
Accurate information on fault networks, the full stress tensor, and pore fluid pressures is required for quantifying the stability of structure-bound hydrocarbon prospects, carbon dioxide sequestration, and the drilling of prolific and safe wells, particularly fluid-injection wells. Such information also provides essential data for a proper understanding of induced seismicity in areas of intensive hydrocarbon exploration and solid-mineral mining activities. Pressure and stress data constrained from wells and seismic data in the Northern Niger Delta Basin (NNDB), Nigeria, have been analysed in the framework of fault stability indices by varying the maximum horizontal stress direction from 0° to 90°, evaluated at depths of 2 km, 3.5 km and 4 km. We have used fault dips and azimuths interpreted from high-resolution 3D seismic data to calculate the predisposition of faults to failure in three faulting regimes (normal, pseudo-strike-slip and pseudo-thrust). The marked decrease in fault stability at 3.5 km depth, from 1.2 MPa to 0.55 MPa, demonstrates a reduction of fault strength by high-magnitude overpressures. Pore fluid pressures > 50 MPa tend to increase the risk of fault failure in the study area. Statistical analysis of stability indices (SI) indicates that faults dipping 50°-60° and 80°-90°, with azimuths ranging from 100° to 110°, are most favourably oriented for failure, and are thus likely to favour migration of fluids given appropriate pressure and stress conditions in the dominant normal faulting regime of the NNDB. A few of the locally assessed fault stabilities show varying results across faulting regimes. However, the near similarity of some model-based results across the faulting regimes indicates that the stability of subsurface structures is greatly influenced by the maximum horizontal stress (SHmax) direction and the magnitude of pore fluid pressures.
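The destabilizing effect of pore-fluid overpressure described above can be illustrated with a minimal two-dimensional Mohr-circle sketch (hypothetical stress magnitudes, not the paper's NNDB values, which come from the full 3D stress tensor and mapped fault geometries): the effective slip tendency Ts = τ / (σn − Pf) rises as overpressure erodes the effective normal stress.

```python
import numpy as np

def slip_tendency(s1, s3, pf, theta_deg):
    """2D Mohr-circle resolution: slip tendency Ts = tau / (sn - Pf) for a
    plane whose normal makes angle theta with the sigma1 axis (MPa inputs)."""
    t = np.radians(theta_deg)
    sn = 0.5 * (s1 + s3) + 0.5 * (s1 - s3) * np.cos(2 * t)   # normal stress
    tau = 0.5 * (s1 - s3) * np.abs(np.sin(2 * t))            # shear stress
    return tau / (sn - pf)

# Hypothetical normal-faulting state near 3.5 km: s1 = Sv = 80, s3 = Shmin = 60
ts_hydro = slip_tendency(80.0, 60.0, pf=35.0, theta_deg=30.0)  # ~hydrostatic Pf
ts_overp = slip_tendency(80.0, 60.0, pf=55.0, theta_deg=30.0)  # overpressured
```

Doubling of Ts under the overpressured case mirrors the reported weakening of faults as pore pressure approaches the least principal stress.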
43 CFR 418.13 - Maximum allowable limits.
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF THE INTERIOR OPERATING CRITERIA AND PROCEDURES FOR THE NEWLANDS RECLAMATION PROJECT, NEVADA... 308,319 acre-feet for the 1995 Example. The sample MAD corresponds to a system efficiency for full... Expected Project Distribution System Efficiency shows the target efficiencies which will be used over the...
40 CFR 35.408 - Award limitations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... management of a substantial portion of the State's construction grants program. The maximum amount of permit... Regional Administrator allows for full funding of the management of the construction grant program under... Award limitations. The Regional Administrator will not award section 205(g) funds: (a) For construction...
40 CFR 35.408 - Award limitations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... management of a substantial portion of the State's construction grants program. The maximum amount of permit... Regional Administrator allows for full funding of the management of the construction grant program under... Award limitations. The Regional Administrator will not award section 205(g) funds: (a) For construction...
40 CFR 35.408 - Award limitations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... management of a substantial portion of the State's construction grants program. The maximum amount of permit... Regional Administrator allows for full funding of the management of the construction grant program under... Award limitations. The Regional Administrator will not award section 205(g) funds: (a) For construction...
Survey of Radiographic Requirements and Techniques.
ERIC Educational Resources Information Center
Farman, Allan G.; Shawkat, Abdul H.
1981-01-01
A survey of dental schools revealed little standardization of student requirements for dental radiography in the United States. There was a high degree of variability as to what constituted a full radiographic survey, which has implications concerning the maximum limits to patient exposure to radiation. (Author/MLW)
43 CFR 418.13 - Maximum allowable limits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... OF THE INTERIOR OPERATING CRITERIA AND PROCEDURES FOR THE NEWLANDS RECLAMATION PROJECT, NEVADA... 308,319 acre-feet for the 1995 Example. The sample MAD corresponds to a system efficiency for full... Expected Project Distribution System Efficiency shows the target efficiencies which will be used over the...
Italian Lifelong Learning in Europe: Notes to the Second Millennium
ERIC Educational Resources Information Center
Savelli, Simona
2014-01-01
In Italy the "Education and Training System" is undergoing a period of complex reorganization. This occurs within a "European Integration Process" aimed at achieving a full "Citizenship right" and the maximum mobility among "Member States". Starting from the latest "European Directives", which make…
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding utilizing maximum-likelihood techniques was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
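Maximum-likelihood convolutional decoding is realized by the Viterbi algorithm. The sketch below encodes and decodes a toy rate-1/2, constraint-length-3 code (generators 7, 5 octal); this is illustrative only, since the abstract does not specify the MCD's code parameters:

```python
G = (0b111, 0b101)  # generator polynomials (7, 5 octal), constraint length 3

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Rate-1/2 convolutional encoder; state holds the two previous inputs."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Maximum-likelihood (minimum Hamming distance) decoding over the
    4-state trellis via the Viterbi algorithm."""
    INF = 10 ** 9
    metric = {0: 0, 1: INF, 2: INF, 3: INF}   # encoder starts in state 0
    paths = {s: [] for s in metric}
    for i in range(n_bits):
        r = received[2 * i:2 * i + 2]
        new_metric, new_paths = {}, {}
        for state in metric:
            for b in (0, 1):
                reg = (b << 2) | state
                branch = [parity(reg & g) for g in G]
                nxt = reg >> 1
                m = metric[state] + (branch[0] != r[0]) + (branch[1] != r[1])
                if nxt not in new_metric or m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = encode(msg)
noisy = list(codeword)
noisy[5] ^= 1                              # single channel bit error
decoded = viterbi_decode(noisy, len(msg))  # recovers msg
```

A hardware MCD implements the same add-compare-select recursion in parallel across trellis states, which is what makes a single-chip CMOS realization attractive.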
X-ray observations of the burst source MXB 1728 - 34
NASA Technical Reports Server (NTRS)
Basinska, E. M.; Lewin, W. H. G.; Sztajno, M.; Cominsky, L. R.; Marshall, F. J.
1984-01-01
Where sufficient information has been obtained, attention is given to the maximum burst flux, integrated burst flux, spectral hardness, rise time, etc., of 96 X-ray bursts observed from March 1976 to March 1979. The integrated burst flux and the burst frequency appear to be correlated; the longer the burst interval, the larger the integrated burst flux, as expected on the basis of simple thermonuclear flash models. The maximum burst flux and the integrated burst flux are strongly correlated; for low flux levels their dependence is approximately linear, while for increasing values of the integrated burst flux, the flux at burst maximum saturates and reaches a plateau.
Application and performance of an ML-EM algorithm in NEXT
NASA Astrophysics Data System (ADS)
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
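The ML-EM update itself is compact: each iteration forward-projects the current estimate, compares it with the measured (Poisson-distributed) signals, and backprojects a multiplicative correction through the system matrix. A generic one-dimensional sketch (a toy system matrix, not the NEXT detector response):

```python
import numpy as np

def ml_em(A, y, n_iter=200):
    """ML-EM iterations for a Poisson linear model y ~ Poisson(A @ x):
    x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity of each pixel
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        x *= (A.T @ (y / np.clip(proj, 1e-12, None))) / sens
    return x

# Toy problem: two point-like deposits blurred onto 30 overlapping sensors
rng = np.random.default_rng(0)
idx = np.arange(30)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
x_true = np.zeros(30)
x_true[[8, 20]] = [50.0, 80.0]
y = rng.poisson(A @ x_true)
x_hat = ml_em(A, y)  # nonnegative estimate concentrating at the deposits
```

The update preserves total measured counts and enforces nonnegativity automatically, which is why ML-EM is popular for emission-tomography-like inverse problems such as this one.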
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-10-14
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale error is better than 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
40 CFR 57.402 - Elements of the supplementary control system.
Code of Federal Regulations, 2013 CFR
2013-07-01
... capable of routine real time measurement of maximum expected SO2 concentrations for the averaging times of... emission curtailment decisions based on the use of real time information from the air quality monitoring... meteorological information necessary to operate the system; (iv) The ambient concentrations and meteorological...
40 CFR 57.402 - Elements of the supplementary control system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... capable of routine real time measurement of maximum expected SO2 concentrations for the averaging times of... emission curtailment decisions based on the use of real time information from the air quality monitoring... meteorological information necessary to operate the system; (iv) The ambient concentrations and meteorological...
40 CFR 57.402 - Elements of the supplementary control system.
Code of Federal Regulations, 2012 CFR
2012-07-01
... capable of routine real time measurement of maximum expected SO2 concentrations for the averaging times of... emission curtailment decisions based on the use of real time information from the air quality monitoring... meteorological information necessary to operate the system; (iv) The ambient concentrations and meteorological...
40 CFR 57.402 - Elements of the supplementary control system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... capable of routine real time measurement of maximum expected SO2 concentrations for the averaging times of... emission curtailment decisions based on the use of real time information from the air quality monitoring... meteorological information necessary to operate the system; (iv) The ambient concentrations and meteorological...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-27
... . Please refer to ``OMB Control No. 2900- 0386.'' SUPPLEMENTARY INFORMATION: Title: Interest Rate Reduction... guaranty on all interest rate reduction refinancing loan and provide a receipt as proof that the funding... ensure lenders computed the funding fee and the maximum permissible loan amount for interest rate...
76 FR 66169 - Office of Advocacy and Outreach Federal Financial Assistance Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
...) Information about how to obtain proposal forms and the instructions for completing such forms. (14...) Regulatory information. (8) Definitions. (9) Minimum and maximum budget requests and whether proposals... Content of a proposal. The RFP provides instructions on how to access a funding opportunity. The funding...
76 FR 12980 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-09
...) will publish periodic summaries of proposed projects. To request more information on the proposed... allowing SAMHSA to quantify the effects and accomplishments of SAMHSA programs. The following table is an...,000 .03 2,400 $18.40 $44,160 \\1\\ This table represents the maximum additional burden if adult...
Communicating research results
Jan Fryk
1999-01-01
A research finding is of little value until it is known and applied. Hence, communication of results should be regarded as a natural, integrated part of research, and thus addressed in the research plans from the very beginning. A clearly defined information strategy and operational goals for information activities are needed for successful communication. For maximum...
77 FR 27777 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... include the bundling of separately billed drugs, clinical laboratory tests, and other items ``to maximum... the estimated burden; (3) ways to enhance the quality, utility, and clarity of the information to be... Quality Incentive Program (QIP); Use: The Medicare Prescription Drug, Improvement, and Modernization Act of...
Marketing and Distributive Education. Wholesaling Curriculum Guide.
ERIC Educational Resources Information Center
Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.
This document is one of four curriculum guides designed to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing as well as to provide marketing and distributive education teachers with maximum flexibility. Introductory information provides directions for using the guide and information on…
Marketing and Distributive Education. Food Marketing Curriculum Guide
ERIC Educational Resources Information Center
Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.
This document is one of four curriculum guides designed to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing as well as to provide marketing and distributive education teachers with maximum flexibility. Introductory information provides directions for using the guide and information on…
ERIC Educational Resources Information Center
Van Rooy, Wilhelmina S.
2012-01-01
Background: The ubiquity, availability and exponential growth of digital information and communication technology (ICT) creates unique opportunities for learning and teaching in the senior secondary school biology curriculum. Digital technologies make it possible for emerging disciplinary knowledge and understanding of biological processes…
76 FR 23326 - Intent To Request Renewal From OMB of One Current Public Collection of Information...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-26
... DEPARTMENT OF HOMELAND SECURITY Transportation Security Administration [Docket No. TSA-2006-24191... Notice. SUMMARY: The Transportation Security Administration (TSA) invites public comment on one currently... approved the collection of information for six months and TSA now seeks the maximum three-year approval...
40 CFR 57.402 - Elements of the supplementary control system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... capable of routine real time measurement of maximum expected SO2 concentrations for the averaging times of... emission curtailment decisions based on the use of real time information from the air quality monitoring... meteorological information necessary to operate the system; (iv) The ambient concentrations and meteorological...
Turillazzi, Emanuela; Neri, Margherita
2014-07-15
The Italian code of medical deontology recently approved stipulates that physicians have the duty to inform the patient of each unwanted event and its causes, and to identify, report and evaluate adverse events and errors. Thus the obligation to supply information continues to widen, in some way extending beyond the doctor-patient relationship to become an essential tool for improving the quality of professional services. The new deontological precepts intersect two areas in which the figure of the physician is paramount. On the one hand is the need for maximum integrity towards the patient, in the name of the doctor's own, and the other's (the patient's) dignity and liberty; on the other is the physician's developing role in the strategies of the health system to achieve efficacy, quality, reliability and efficiency, to reduce errors and adverse events and to manage clinical risk. In Italy, due to guidelines issued by the Ministry of Health and to the new code of medical deontology, the role of physicians becomes a part of a complex strategy of risk management based on a system focused approach in which increasing transparency regarding adverse outcomes and full disclosure of health- related negative events represent a key factor.
Procedures to develop a computerized adaptive test to assess patient-reported physical functioning.
McCabe, Erin; Gross, Douglas P; Bulut, Okan
2018-06-07
The purpose of this paper is to demonstrate the procedures to develop and implement a computerized adaptive patient-reported outcome (PRO) measure using secondary analysis of a dataset and items from fixed-format legacy measures. We conducted secondary analysis of a dataset of responses from 1429 persons with work-related lower extremity impairment. We calibrated three measures of physical functioning on the same metric, based on item response theory (IRT). We evaluated the efficiency and measurement precision of various computerized adaptive test (CAT) designs using computer simulations. IRT and confirmatory factor analyses support combining the items from the three scales into a CAT item bank of 31 items. The item parameters for IRT were calculated using the generalized partial credit model. CAT simulations show that reducing the test length from the full 31 items to a maximum of 8 or 20 items is possible without a significant loss of information (95% and 99% correlation with legacy measure scores, respectively). We demonstrated the feasibility and efficiency of using CAT for PRO measurement of physical functioning. The procedures we outlined are straightforward and can be applied to other PRO measures. Additionally, we have included all the information necessary to implement the CAT of physical functioning in the electronic supplementary material of this paper.
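The item-selection loop at the heart of a CAT can be sketched briefly. The paper calibrates its bank with the generalized partial credit model; the sketch below instead uses the simpler two-parameter logistic model and a hypothetical five-item bank (both are assumptions, not the authors' implementation), selecting the unadministered item with maximum Fisher information at the current ability estimate.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(bank):
        if idx in administered:
            continue
        info = item_information(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

# Hypothetical 5-item bank: (discrimination a, difficulty b)
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5), (1.3, -0.5)]
next_item = select_next_item(theta=0.4, bank=bank, administered={0})
```

A real CAT alternates this selection with an ability update (e.g. maximum likelihood or EAP) until a stopping rule, such as a standard-error threshold or the 8- or 20-item maximum studied in the paper, is met.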
Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers
Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.
2012-01-01
Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, or comparable estimation in the presence of low capture heterogeneity with a large sample size, when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559
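The approach rests on maximizing a capture-recapture likelihood over multiple surveys. As a minimal sketch (the classical homogeneous-capture M0 model, not the authors' heterogeneity-aware models, with hypothetical genotyping counts), the abundance N can be found by grid search over the profile likelihood:

```python
import math

def log_lik(N, p, D, C, T):
    """M0 capture-recapture log-likelihood (up to a constant): N animals,
    T surveys, homogeneous capture probability p; D distinct animals
    observed, C total captures across all surveys."""
    if N < D or not (0 < p < 1):
        return float("-inf")
    return (math.lgamma(N + 1) - math.lgamma(N - D + 1)
            + C * math.log(p) + (N * T - C) * math.log(1 - p))

def mle_abundance(D, C, T, n_max=5000):
    """Grid-search the MLE of N, profiling out p as p_hat = C/(N*T)."""
    best_N, best_ll = D, float("-inf")
    for N in range(D, n_max + 1):
        p_hat = C / (N * T)
        ll = log_lik(N, p_hat, D, C, T)
        if ll > best_ll:
            best_N, best_ll = N, ll
    return best_N

# Hypothetical data: 3 surveys, 120 distinct genotypes, 150 total captures
N_hat = mle_abundance(D=120, C=150, T=3)
```

With these illustrative counts the estimate lands near 230 animals; the paper's models additionally accommodate heterogeneous capture probabilities, which matter for ground-beef batches.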
NASA Technical Reports Server (NTRS)
Lovell, J Calvin; Wilson, Herbert A JR
1947-01-01
An investigation of the DM-1 Glider, which had approximately triangular plan form, an aspect ratio of 1.8 and a 60 degree sweptback leading edge, has been conducted in the Langley full-scale tunnel. The investigation consisted of the determination of the separate effects of the following modifications made to the glider on its maximum lift and stability characteristics: (a) installation of sharp leading edges over the inboard semispan of the wing, (b) removal of the vertical fin, (c) sealing of the elevon control-balance slots, (d) installation of redesigned thin vertical surfaces, (e) installation of faired sharp leading edges, and (f) installation of canopy. The maximum lift coefficient of the DM-1 glider was increased from 0.61 to 1.01 by the installation of semispan sharp leading edges, and from 1.01 to 1.24 by the removal of the vertical fin and sealing of the elevon control-balance slots. The highest maximum lift coefficient (1.32) was obtained when the faired sharp leading edges and the thin vertical surfaces were attached to the glider. The original DM-1 glider was longitudinally stable. The semispan sharp leading edges shifted the neutral point forward approximately 3 percent of the root chord at moderate lift coefficients, and the glider configuration with these sharp leading edges attached was longitudinally unstable, for the assumed center-of-gravity location, at lift coefficients above 0.73. Sealing the elevon control-balance slots and installing the faired sharp leading edges, the thin vertical surfaces, and the canopy shifted the neutral point forward approximately 8 percent of the root chord.
González-Suárez, Ana; Pérez, Juan J; Berjano, Enrique
2018-04-20
Although accurate modeling of the thermal performance of irrigated-tip electrodes in radiofrequency cardiac ablation requires the solution of a triple coupled problem involving simultaneous electrical conduction, heat transfer, and fluid dynamics, in certain cases it is difficult to combine the software with the expertise necessary to solve these coupled problems, so that reduced models have to be considered. We here focus on a reduced model which avoids the fluid dynamics problem by setting a constant temperature at the electrode tip. Our aim was to compare the reduced and full models in terms of predicting lesion dimensions and the temperatures reached in tissue and blood. The results showed that the reduced model overestimates the lesion surface width by up to 5 mm (i.e. 70%) for any electrode insertion depth and blood flow rate. Likewise, it drastically overestimates the maximum blood temperature by more than 15 °C in all cases. However, the reduced model is able to predict lesion depth reasonably well (within 0.1 mm of the full model), and also the maximum tissue temperature (difference always less than 3 °C). These results were valid throughout the entire ablation time (60 s) and regardless of blood flow rate and electrode insertion depth (ranging from 0.5 to 1.5 mm). The findings suggest that the reduced model is not able to predict either the lesion surface width or the maximum temperature reached in the blood, and so would not be suitable for the study of issues related to blood temperature, such as the incidence of thrombus formation during ablation. However, it could be used to study issues related to maximum tissue temperature, such as the steam pop phenomenon.
NASA Astrophysics Data System (ADS)
Langowski, M. P.; von Savigny, C.; Burrows, J. P.; Rozanov, V. V.; Dunker, T.; Hoppe, U.-P.; Sinnhuber, M.; Aikin, A. C.
2015-07-01
An algorithm has been developed for the retrieval of sodium atom (Na) number density on a latitude and altitude grid from SCIAMACHY limb measurements of the Na resonance fluorescence. The results are obtained between 50 and 150 km altitude and the resulting global seasonal variations of Na are analysed. The retrieval approach is adapted from that used for the retrieval of magnesium atom (Mg) and magnesium ion (Mg+) number density profiles recently reported by Langowski et al. (2014). Monthly mean values of Na are presented as a function of altitude and latitude. This data set was retrieved from the 4 years of spectroscopic limb data of the SCIAMACHY mesosphere and lower thermosphere (MLT) measurement mode. The Na layer has a nearly constant altitude of 90-93 km for all latitudes and seasons, and has a full width at half maximum of 5-15 km. Small but substantial seasonal variations in Na are identified for latitudes less than 40°, where the maximum Na number densities are 3000-4000 atoms cm-3. At mid to high latitudes a clear seasonal variation with a winter maximum of up to 6000 atoms cm-3 is observed. The high latitudes, which are only measured in the Summer Hemisphere, have lower number densities with peak densities being approximately 1000 Na atoms cm-3. The full width at half maximum of the peak varies strongly at high latitudes and is 5 km near the polar summer mesopause, while it exceeds 10 km at lower latitudes. In summer the Na atom concentration at high latitudes and at altitudes below 88 km is significantly smaller than that at mid latitudes. The results are compared with other observations and models and there is overall a good agreement with these.
Badoer, S; Miana, P; Della Sala, S; Marchiori, G; Tandoi, V; Di Pippo, F
2015-12-01
In this study, monthly variations in biomass of ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB) were analysed over a 1-year period by fluorescence in situ hybridization (FISH) at the full-scale Fusina WWTP. The nitrification capacity of the plant was also monitored using periodic respirometric batch tests and by an automated on-line titrimetric instrument (TITrimetric Automated ANalyser). The percentage of nitrifying bacteria in the plant was the highest in summer and was in the range of 10-15 % of the active biomass. The maximum nitrosation rate varied in the range 2.0-4.0 mg NH4 g(-1) VSS h(-1) (0.048-0.096 kg TKN kg(-1) VSS day(-1)): values obtained by laboratory measurements and the on-line instrument were similar and significantly correlated. The activity measurements provided a valuable tool for estimating the maximum total Kjeldahl nitrogen (TKN) loading possible at the plant and provided an early warning of whether the TKN was approaching its limiting value. The FISH analysis permitted determination of the nitrifying biomass present. The main operational parameter affecting both the population dynamics and the maximum nitrosation activity was mixed liquor volatile suspended solids (MLVSS) concentration, which was negatively correlated with AOB (p = 0.029) and NOB (p = 0.01) abundances and positively correlated with maximum nitrosation rates (p = 0.035). Increases in MLVSS concentration led to decreases in nitrifying bacteria abundance, but their nitrosation activity was higher. These results demonstrate the importance of MLVSS concentration as a key factor in the development and activity of nitrifying communities in wastewater treatment plants (WWTPs). Operational data on VSS and sludge volume index (SVI) values are also presented, based on 11 years of observations.
Human Tolerance to Rapidly Applied Accelerations: A Summary of the Literature
NASA Technical Reports Server (NTRS)
Eiband, A. Martin
1959-01-01
The literature is surveyed to determine human tolerance to rapidly applied accelerations. Pertinent human and animal experiments applicable to space flight and to crash impact forces are analyzed and discussed. These data are compared and presented on the basis of a trapezoidal pulse. The effects of body restraint and of acceleration direction, onset rate, and plateau duration on the maximum tolerable and survivable rapidly applied accelerations are shown. Results of the survey indicate that adequate torso and extremity restraint is the primary variable in tolerance to rapidly applied accelerations. The harness, or restraint system, must be arranged to transmit the major portion of the accelerating force directly to the pelvic structure and not via the vertebral column. When the conditions of adequate restraint have been met, then the other variables, direction, magnitude, and onset rate of rapidly applied accelerations, govern maximum tolerance and injury limits. The results also indicate that adequately stressed aft-faced passenger seats offer maximum complete body support with minimum objectionable harnessing. Such a seat, whether designed for 20-, 30-, or 40-G dynamic loading, would include lap strap, chest (axillary) strap, and winged-back seat to increase headward and lateral G protection, full-height integral head rest, arm rests (load-bearing) with recessed hand-holds and provisions to prevent arms from slipping either laterally or beyond the seat back, and leg support to keep the legs from being wedged under the seat. For crew members and others whose duties require forward-facing seats, maximum complete body support requires lap, shoulder, and thigh straps, lap-belt tie-down strap, and full-height seat back with integral head support.
Propagation of Statistical Noise Through a Two-Qubit Maximum Likelihood Tomography
2018-04-01
Jones, Daniel E.; Kirby, Brian T.; Brodsky, Michael (Computational and Information Sciences Directorate, ARL)
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
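The CAI step described above, a local regression of the sparse variable on a predictor grid with similarity-weighted stations, can be sketched as follows. The station data and the plain distance-decay weight are illustrative assumptions only; PRISM's actual weighting also incorporates elevation, coastal proximity, vertical atmospheric layer, and topographic position.

```python
import math

def cai_estimate(target_pred, stations, decay=50.0):
    """Climatologically-aided interpolation sketch: fit a weighted linear
    regression of the sparse variable (e.g. dew point) on a densely mapped
    predictor (e.g. mean temperature), then evaluate at the target grid
    cell's predictor value.

    stations: list of (distance_km, predictor_value, observed_value).
    """
    sw = swx = swy = swxx = swxy = 0.0
    for dist, x, y in stations:
        w = math.exp(-dist / decay)  # simple distance-decay weight
        sw += w; swx += w * x; swy += w * y
        swxx += w * x * x; swxy += w * x * y
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    intercept = (swy - slope * swx) / sw
    return intercept + slope * target_pred

# Hypothetical stations: (distance km, mean temp degC, dew point degC)
stations = [(5, 10.0, 4.0), (12, 12.0, 5.5), (30, 8.0, 2.5), (45, 15.0, 7.0)]
dewpoint = cai_estimate(target_pred=11.0, stations=stations)
```

The key design point is that the regression is refit locally for every grid cell, so the dew-point-versus-temperature relationship is allowed to vary across terrain rather than being fixed nationwide.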
The video fluorescent device for diagnostics of cancer of human reproductive system
NASA Astrophysics Data System (ADS)
Brysin, Nickolay N.; Linkov, Kirill G.; Stratonnikov, Alexander A.; Savelieva, Tatiana A.; Loschenov, Victor B.
2008-06-01
Photodynamic therapy (PDT) is one of the advanced methods for treating cancers of the skin and of the surfaces of internal organs. The basic advantages of PDT are high efficiency and low cost of treatment. The PDT technique needs to be supported by fluorescent diagnostics. Laser-based systems are widely used for fluorescence excitation in diagnostics because of their narrow excitation spectrum and high radiation density. However, laser excitation produces tumor images distorted by speckles, which prevents full information about the shape of a tumor from being obtained quickly. These laser excitation systems are also structurally complicated and expensive. A commercially produced colposcope was chosen as the base for the development of the video fluorescent device. This reduces the cost of the device and also makes it possible to upgrade colposcopes already in use. An LED-based light source is used for fluorescence excitation in this work. The emission maximum of the LEDs corresponds to the main spectral maximum of protoporphyrin IX (PPIX) absorption. Irradiance in the center of the light spot is 31 mW/cm2. The receiving optics of the fluorescence channel are tuned to 635 nm, where the main spectral maximum of PPIX fluorescence is located. The device also contains an RGB video channel, a white light source, and a USB spectrometer (LESA-01-BIOSPEC) for measuring fluorescence and diffuse reflectance spectra in the treatment area. Software was developed to operate the device. Studies on laboratory animals were performed; areas with increased PPIX concentration were correctly detected. At present, the device is used for diagnostics of cancer of the female reproductive system at the Research Centre for Obstetrics, Gynecology and Perinatology of the Russian Academy of Medical Sciences (Moscow, Russia).
Undisclosed conflicts of interest among biomedical textbook authors.
Piper, Brian J; Lambert, Drew A; Keefe, Ryan C; Smukler, Phoebe U; Selemon, Nicolas A; Duperry, Zachary R
2018-02-05
Textbooks are a formative resource for health care providers during their education and are also an enduring reference for pathophysiology and treatment. Unlike in the primary literature and clinical guidelines, biomedical textbook authors do not typically disclose potential financial conflicts of interest (pCoIs). The objective of this study was to evaluate whether the authors of textbooks used in the training of physicians, pharmacists, and dentists had appreciable undisclosed pCoIs in the form of patents or compensation received from pharmaceutical or biotechnology companies. The most recent editions of six medical textbooks, Harrison's Principles of Internal Medicine (Har PIM), Katzung and Trevor's Basic and Clinical Pharmacology (Kat BCP), the American Osteopathic Association's Foundations of Osteopathic Medicine (AOA FOM), Remington: The Science and Practice of Pharmacy (Rem SPP), Koda-Kimble and Young's Applied Therapeutics (KKY AT), and Yagiela's Pharmacology and Therapeutics for Dentistry (Yag PTD), were selected for evaluation after consulting biomedical educators. Author names (N = 1,152, 29.2% female) were submitted to databases to examine patents (Google Scholar) and compensation (ProPublica's Dollars for Docs [PDD]). Authors were listed as inventors on 677 patents (maximum/author = 23), with three-quarters (74.9%) held by Har PIM authors. Females were significantly underrepresented among patent holders. The PDD 2009-2013 database revealed receipt of US$13.2 million, the majority (83.9%) to Har PIM. The maximum compensation per author was $869,413. The PDD 2014 database identified receipt of $6.8 million, with 50.4% of eligible authors receiving compensation. The maximum compensation received by a single author was $560,021. Cardiovascular authors were most likely to have a PDD entry and neurologic disorders authors were least likely.
An appreciable subset of biomedical authors have patents and have received remuneration from medical product companies and this information is not disclosed to readers. These findings indicate that full transparency of financial pCoI should become a standard practice among the authors of biomedical educational materials.
14 CFR 27.87 - Height-speed envelope.
Code of Federal Regulations, 2012 CFR
2012-01-01
... selected by the applicant for each altitude covered by paragraph (a)(1) of this section. For helicopters...— (1) For single-engine helicopters, full autorotation; (2) For multiengine helicopters, OEI (where... altitude or the maximum altitude capability of the helicopter, whichever is less, and (3) For other...
14 CFR 27.87 - Height-speed envelope.
Code of Federal Regulations, 2014 CFR
2014-01-01
... selected by the applicant for each altitude covered by paragraph (a)(1) of this section. For helicopters...— (1) For single-engine helicopters, full autorotation; (2) For multiengine helicopters, OEI (where... altitude or the maximum altitude capability of the helicopter, whichever is less, and (3) For other...
14 CFR 27.87 - Height-speed envelope.
Code of Federal Regulations, 2013 CFR
2013-01-01
... selected by the applicant for each altitude covered by paragraph (a)(1) of this section. For helicopters...— (1) For single-engine helicopters, full autorotation; (2) For multiengine helicopters, OEI (where... altitude or the maximum altitude capability of the helicopter, whichever is less, and (3) For other...
15 CFR 930.32 - Consistent to the maximum extent practicable.
Code of Federal Regulations, 2010 CFR
2010-01-01
... COASTAL RESOURCE MANAGEMENT FEDERAL CONSISTENCY WITH APPROVED COASTAL MANAGEMENT PROGRAMS Consistency for... management programs unless full consistency is prohibited by existing law applicable to the Federal agency. (2) Section 307(e) of the Act does not relieve Federal agencies of the consistency requirements under...
Continuous depth-of-interaction encoding using phosphor-coated scintillators.
Du, Huini; Yang, Yongfeng; Glodo, Jarek; Wu, Yibao; Shah, Kanai; Cherry, Simon R
2009-03-21
We investigate a novel detector using a lutetium oxyorthosilicate (LSO) scintillator and YGG (yttrium-aluminum-gallium oxide:cerium, Y(3)(Al,Ga)(5)O(12):Ce) phosphor to construct a detector with continuous depth-of-interaction (DOI) information. The far end of the LSO scintillator is coated with a thin layer of YGG phosphor powder which absorbs some fraction of the LSO scintillation light and emits wavelength-shifted photons with a characteristic decay time of approximately 50 ns. The near end of the LSO scintillator is directly coupled to a photodetector. The photodetector detects a mixture of the LSO light and the light emitted by YGG. With appropriate placement of the coating, the ratio of the light converted from the YGG coating with respect to the unconverted LSO light can be made to depend on the interaction depth. DOI information can then be estimated by inspecting the overall light pulse decay time. Experiments were conducted to optimize the coating method. 19 ns decay time differences across the length of the detector were achieved experimentally when reading out a 1.5 x 1.5 x 20 mm(3) LSO crystal with unpolished surfaces and half-coated with YGG phosphor. The same coating scheme was applied to a 4 x 4 LSO array. Pulse shape discrimination (PSD) methods were studied to extract DOI information from the pulse shape changes. The DOI full-width-half-maximum (FWHM) resolution was found to be approximately 8 mm for this 2 cm thick array.
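The depth encoding rests on the pulse tail growing with the fraction of wavelength-shifted light. A toy sketch with illustrative decay constants (the slow component is modeled as a single exponential, ignoring the LSO-to-phosphor cascade, and the specific numbers are assumptions) shows the tail-to-total charge ratio rising monotonically with the slow-light fraction, so a calibration curve can map ratio to interaction depth:

```python
import math

TAU_FAST, TAU_SLOW = 40.0, 90.0  # ns; illustrative decay constants

def charge(tau, t0, t1):
    """Integral of a unit-amplitude exponential exp(-t/tau) from t0 to t1."""
    return tau * (math.exp(-t0 / tau) - math.exp(-t1 / tau))

def tail_to_total(f_slow, gate=60.0, window=400.0):
    """Tail-to-total charge ratio of a pulse mixing a fast (LSO) and a
    slow (phosphor-shifted) component; f_slow is the slow-light fraction,
    which in the detector depends on the interaction depth."""
    total = ((1 - f_slow) * charge(TAU_FAST, 0, window)
             + f_slow * charge(TAU_SLOW, 0, window))
    tail = ((1 - f_slow) * charge(TAU_FAST, gate, window)
            + f_slow * charge(TAU_SLOW, gate, window))
    return tail / total

# The ratio grows with the slow fraction, enabling continuous DOI readout:
r_near, r_far = tail_to_total(0.1), tail_to_total(0.6)
```

In practice the pulse-shape discrimination in the paper works on digitized photodetector waveforms with noise, so the achievable DOI resolution (about 8 mm FWHM here) is set by how well the decay time can be estimated per event.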
A white organic light emitting diode based on anthracene-triphenylamine derivatives
NASA Astrophysics Data System (ADS)
Jiang, Quan; Qu, Jianjun; Yu, Junsheng; Tao, Silu; Gan, Yuanyuan; Jiang, Yadong
2010-10-01
White organic light-emitting diodes (WOLEDs) can be used as flat light sources, backlights for liquid crystal displays, and full color displays. A current mainstream of WOLED research is to develop novel materials and to optimize device structures. In this work a WOLED with the structure ITO/NPB/PAA/Alq3: x% rubrene/Alq3/Mg: Ag was fabricated. The device has two light-emitting layers. NPB is used as a hole transport layer, PAA as a blue emitting layer, the Alq3: rubrene host-guest system as a yellow emitting layer, and Alq3 close to the cathode as an electron transport layer. In the experiment, the doping concentration of rubrene was optimized. WOLED 1 with 4% rubrene achieved a maximum luminous efficiency of 1.80 lm/W, a maximum luminance of 3926 cd/m2 and CIE coordinates of (0.374, 0.341). WOLED 2 with 2% rubrene achieved a maximum luminous efficiency of 0.65 lm/W, a maximum luminance of 7495 cd/m2 and CIE coordinates of (0.365, 0.365).
Information on the Lake Tahoe watershed, EPA's protection efforts, water quality issues, effects of climate change, Lake Tahoe Total Maximum Daily Load (TMDL), EPA-sponsored projects, list of partner agencies.
ERIC Educational Resources Information Center
Kibirige, Harry M.
Information has become an increasingly valuable commodity. Access to full-text information, containing text, images, and in some cases, sound, is becoming vital to decision-making for organizations as well as individuals. The book covers the following topics: (1) the information marketplace in a cyberculture; (2) the telecommunications foundation…
Optimization of Wireless Transceivers under Processing Energy Constraints
NASA Astrophysics Data System (ADS)
Wang, Gaojian; Ascheid, Gerd; Wang, Yanlu; Hanay, Oner; Negra, Renato; Herrmann, Matthias; Wehn, Norbert
2017-09-01
The focus of this article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power which can be dissipated without forced cooling. To achieve a maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, suitable processing-efficient transmission schemes together with energy-efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short-range wireless transmitters working at carrier frequencies around 60 GHz with bandwidths between 1 GHz and 10 GHz.
Sevanto, Sanna [Los Alamos National Laboratory; Dickman, Turin L. [Los Alamos National Laboratory; Collins, Adam [Los Alamos National Laboratory; Grossiord, Charlotte [Swiss Federal Institute for Forest Snow and Landscape Research; Adams, Henry [Oklahoma State University; Borrego, Isaac [USGS Southwest Biological Science Center; McDowell, Nate [Pacific Northwest National Laboratory (PNNL); Powers, Heath [Los Alamos National Laboratory; Stockton, Elizabeth [University of New Mexico; Ryan, Max [Los Alamos National Laboratory; Slentz, Matthew [Mohle Adams; Briggs, Sam [Fossil Creek Nursery; McBranch, Natalie [Los Alamos National Laboratory; Morgan, Bryn [Los Alamos National Laboratory
2018-01-01
The Los Alamos Survival–Mortality experiment (SUMO) is located on Frijoles Mesa near Los Alamos, New Mexico, USA, at an elevation of 2150 m. This tree manipulation study investigated the relative impacts of drought and warming on plant function, revealing how trees adapt to drought and heat in semi-arid regions. The study factored in the role of tree hydraulic acclimation to both precipitation and temperature and separated their effects. The experiment is located in a piñon-juniper woodland near the ponderosa pine (Pinus ponderosa) forest ecotone. Maximum assimilation rate was measured monthly for each target tree. See the SUMO Target Tree Information data package (doi:10.15485/1440544) for additional information. Data released by Los Alamos National Lab for public use under LA-UR-18-23656.
Kinetics of the head-neck complex in low-speed rear impact.
Stemper, Brian D; Yoganandan, Naryan; Pintar, Frank A
2003-01-01
A comprehensive characterization of the biomechanics of the cervical spine in rear impact will lead to an understanding of the mechanisms of whiplash injury. Cervical kinematics have been experimentally described using human volunteers, full-body cadaver specimens, and isolated and intact head-neck specimens. However, forces and moments at the cervico-thoracic junction have not been clearly delineated. An experimental investigation was performed using ten intact head-neck complexes to delineate the loading at the base of the cervical spine and angular acceleration of the head in whiplash. A pendulum-minisled apparatus was used to simulate whiplash acceleration of the thorax at four impact severities. Lower neck loads were measured using a six-axis load cell attached between the minisled and head-neck specimens, and head angular motion was measured with an angular rate sensor attached to the lateral side of the head. Shear and axial force, extension moment, and head angular acceleration increased with impact severity. Shear force was significantly larger than axial force (p < 0.0001). Shear force reached its maximum value at 46 msec. Maximum extension moment occurred between 7 and 22 msec after maximum shear force. Maximum angular acceleration of the head occurred 2 to 18 msec later. Maximum axial force occurred last (106 msec). All four kinetic components reached maximum values during cervical S-curvature, with maximum shear force and extension moment occurring before the attainment of maximum S-curvature. Results of the present investigation indicate that shear force and extension moment at the cervico-thoracic junction drive the non-physiologic cervical S-curvature responsible for whiplash injury and underscore the importance of understanding cervical kinematics and the underlying kinetics.
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation, and constant voltage tracking is put forward in this paper. It first searches for the maximum power point with the P&O algorithm and quadratic interpolation, then forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with using only the P&O algorithm and only the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm and the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the method copes better with the voltage fluctuation seen with the P&O algorithm alone, and resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when operating conditions change.
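A minimal sketch of the hybrid tracking idea: a coarse P&O search, followed by a single quadratic-interpolation refinement whose vertex is then held as the constant-voltage setpoint. The P-V curve, step size, and iteration count below are illustrative, not the paper's AETEG parameters:

```python
def hybrid_mppt(power_at, v0, step=0.2, iters=30):
    """Hybrid MPPT sketch: P&O coarse search, then one quadratic-
    interpolation refinement. `power_at(v)` stands in for measuring the
    generator's (assumed unimodal) P-V curve at voltage v."""
    v, direction = v0, 1.0
    history = [(v, power_at(v))]
    for _ in range(iters):                       # P&O coarse search
        v_new = v + direction * step
        p_new = power_at(v_new)
        if p_new < history[-1][1]:
            direction = -direction               # power dropped: reverse
        v = v_new
        history.append((v, p_new))
    best = {}                                    # deduplicate revisited voltages
    for v_s, p_s in history:
        best[round(v_s, 6)] = p_s
    # Parabola through the three best distinct samples; its vertex is the
    # refined maximum power point, held as the constant-voltage setpoint.
    (v1, p1), (v2, p2), (v3, p3) = sorted(best.items(), key=lambda s: s[1])[-3:]
    num = (v2**2 - v3**2) * p1 + (v3**2 - v1**2) * p2 + (v1**2 - v2**2) * p3
    den = (v2 - v3) * p1 + (v3 - v1) * p2 + (v1 - v2) * p3
    return 0.5 * num / den

# Toy unimodal P-V curve peaking at 12 V (illustrative, not AETEG data).
vmpp = hybrid_mppt(lambda v: 100.0 - (v - 12.0) ** 2, v0=8.0)
```

On the real converter, the P&O step perturbs the buck converter duty cycle and the "measurement" is the sensed electrical power, but the control flow is the same.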
Christopher, Heike; Kovalchuk, Evgeny V; Wenzel, Hans; Bugge, Frank; Weyers, Markus; Wicht, Andreas; Peters, Achim; Tränkle, Günther
2017-07-01
We present a compact, mode-locked diode laser system designed to emit a frequency comb in the wavelength range around 780 nm. We compare the mode-locking performance of symmetric and asymmetric double quantum well ridge-waveguide diode laser chips in an extended-cavity diode laser configuration. By reverse biasing a short section of the diode laser chip, passive mode-locking at 3.4 GHz is achieved. Employing an asymmetric double quantum well allows for generation of a mode-locked optical spectrum spanning more than 15 nm (full width at -20 dB) while the symmetric double quantum well device only provides a bandwidth of ∼2.7 nm (full width at -20 dB). Analysis of the RF noise characteristics of the pulse repetition rate shows an RF linewidth of about 7 kHz (full width at half-maximum) and of at most 530 Hz (full width at half-maximum) for the asymmetric and symmetric double quantum well devices, respectively. Investigation of the frequency noise power spectral density at the pulse repetition rate shows a white noise floor of approximately 2100 Hz²/Hz and of at most 170 Hz²/Hz for the diode laser employing the asymmetric and symmetric double quantum well structures, respectively. The pulse width is less than 10 ps for both devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
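The estimator in question is the standard maximum-likelihood (minimum-variance) combination of correlated estimates of a common quantity under a multivariate normal model. A small numerical sketch with made-up covariances, not the paper's reactor data:

```python
import numpy as np

def ml_combine(x, S):
    """Maximum-likelihood (minimum-variance) combination of correlated
    estimates x of a common quantity, given their covariance matrix S."""
    Sinv = np.linalg.inv(S)
    ones = np.ones(len(x))
    denom = ones @ Sinv @ ones
    w = Sinv @ ones / denom          # optimal weights (they sum to 1)
    return w @ x, 1.0 / denom        # combined estimate and its variance

# Two correlated eigenvalue estimates with equal variance 1e-4 and
# correlation 0.4 (illustrative numbers).
x = np.array([1.002, 0.998])
S = np.array([[1.0e-4, 4.0e-5],
              [4.0e-5, 1.0e-4]])
mu, var = ml_combine(x, S)
```

The combined variance (here 7e-5) is below either individual variance, which is the kind of variance reduction the abstract reports; in practice S is replaced by sample variances and correlation coefficients, as the abstract notes.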
Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method
NASA Astrophysics Data System (ADS)
Huang, Feng; Li, Jing
2017-12-01
The conservative operation method, which takes the unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cable and is not the optimal operating state for a cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the temperature of every cable kept below its maximum permissible temperature as the constraint condition. The interior point method, which is very effective for nonlinear problems, is put forward to solve this extreme-value problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%, greatly improving the economic performance of the cable cluster.
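The optimization the abstract sets up (maximize total load current subject to per-cable temperature limits) can be sketched with an off-the-shelf interior-point-style solver. The thermal model below is a made-up quadratic mutual-heating approximation, not the paper's cable model:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Illustrative mutual-heating matrix: T_i = T_amb + sum_j R_ij * I_j^2.
# The middle cable (row 1) is heated most by its neighbours.
R = np.array([[0.020, 0.006, 0.002],
              [0.006, 0.020, 0.006],
              [0.002, 0.006, 0.020]])
T_amb, T_max = 25.0, 90.0

def temperatures(I):
    return T_amb + R @ (I ** 2)

cons = NonlinearConstraint(temperatures, -np.inf, T_max)
res = minimize(lambda I: -I.sum(),            # maximize total load current
               x0=np.full(3, 10.0),
               bounds=[(0.0, None)] * 3,
               constraints=[cons],
               method="trust-constr")         # interior-point style solver
I_opt = res.x
```

With these made-up numbers a uniform ("unified capacity") current is limited by the hottest middle cable to a total of about 135 A, while the per-cable optimum shifts load to the cooler outer cables and reaches about 141 A, a gain of roughly the 5% order reported in the abstract.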
Design of experiments with four-factors for a PEM fuel cell optimization
NASA Astrophysics Data System (ADS)
Olteanu, V.; Pǎtularu, L.; Popescu, C. L.; Popescu, M. O.; Crǎciunescu, A.
2017-07-01
Nowadays, many research efforts are allocated to the development of fuel cells, since they constitute a carbon-free electrical energy generator which can be used for stationary, mobile, and portable applications. The maximum value of the delivered power of a fuel cell depends on many factors, such as the height of the plates' channels, the stoichiometry level of the air flow, the air pressure at the cathode, and the actual operating electric current density. In this paper, a two-level, full four-factor factorial experiment was designed in order to obtain the response surface which approximates the dependence of the maximum delivered power on the above-mentioned factors. The optimum set of fuel-cell factors which determines the maximum value of the delivered power was found, and a comparison between simulated and measured optimal power versus current density characteristics is given.
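A two-level, four-factor full factorial design enumerates all 2^4 = 16 factor combinations and fits a response model by least squares. A sketch with an invented response function (the factor names follow the abstract, but all numbers are illustrative):

```python
import itertools
import numpy as np

# Coded factor levels (-1 = low, +1 = high) for the 16 runs.
levels = [-1, +1]
design = np.array(list(itertools.product(levels, repeat=4)))

def measured_power(run):
    # Toy response in watts: h = channel height, s = air stoichiometry,
    # p = cathode air pressure, j = operating current density.
    h, s, p, j = run
    return 50 + 4*h + 6*s + 3*p + 2*j + 1.5*s*p

y = np.array([measured_power(r) for r in design])

# Fit intercept + main effects by least squares; with a full factorial
# the interaction column is orthogonal, so main effects come out exactly.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Fitting such a model over all 16 runs is what yields the response surface from which the optimum factor setting is read off.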
Hydrogeologic Modeling for Monitoring, Reporting and Verification of Geologic Sequestration
NASA Astrophysics Data System (ADS)
Kolian, M.; De Figueiredo, M.; Lisa, B.
2011-12-01
In December 2010, EPA finalized Subpart RR of the Greenhouse Gas (GHG) Reporting Program, which requires facilities that conduct geologic sequestration (GS) of carbon dioxide (CO2) to report GHG data to EPA annually. The GHG Reporting Program requires reporting of GHGs and other relevant information from certain source categories in the United States, and information obtained through Subpart RR will inform Agency decisions under the Clean Air Act related to the use of carbon dioxide capture and sequestration for mitigating GHGs. This paper examines hydrogeologic modeling necessities and opportunities in the context of Subpart RR. Under Subpart RR, facilities that conduct GS by injecting CO2 for long-term containment in subsurface geologic formations are required to develop and implement an EPA-approved site-specific monitoring, reporting, and verification (MRV) plan, and to report basic information on CO2 received for injection, annual monitoring activities, and the amount of CO2 geologically sequestered using a mass balance approach. The major components of the MRV plan include: identification of potential surface leakage pathways for CO2 and the likelihood, magnitude, and timing of surface leakage of CO2 through these pathways; delineation of the monitoring areas; a strategy for detecting and quantifying any surface leakage of CO2; and a strategy for establishing the expected baselines for monitoring CO2 surface leakage. Hydrogeologic modeling is an integral aspect of the design of an MRV plan. In order to prepare an adequate monitoring program that addresses site-specific risks over the full life of the project, the MRV plan must reflect the full spatial extent of the free-phase CO2 plume over time. Facilities delineate the maximum area that the CO2 plume is predicted to cover and how monitoring can be phased in over this area.
The Maximum Monitoring Area (MMA) includes the extent of the free phase CO2 plume over the lifetime of the project plus a buffer zone of one-half mile. The Active Monitoring Area (AMA) is the area that will be monitored over a specified time interval chosen by the reporter, which must be greater than one year. All of the area in the MMA will eventually be covered by one or more AMAs. This allows operators to phase in monitoring so that during any given time interval, only that part of the MMA in which surface leakage might occur needs to be monitored. EPA designed the MRV plan approach to be site-specific, flexible, and adaptive to future technology developments. This approach allows the reporter to leverage the site characterization, modeling, and monitoring approaches (e.g. monitoring of injection pressures, injection well integrity, groundwater quality and geochemistry, and CO2 plume location, etc.) developed for their Underground Injection Control (UIC) permit. UIC requirements provide the foundation for the safe sequestration of CO2 by helping to ensure that injected fluids remain isolated in the subsurface and away from underground sources of drinking water, thereby serving to reduce the risk of CO2 leakage to the atmosphere.
A bi-prism interferometer for hard x-ray photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isakovic, A.F.; Siddons, D.; Stein, A.
2010-04-06
Micro-fabricated bi-prisms have been used to create an interference pattern from an incident hard X-ray beam, and the intensity of the pattern was probed with fluorescence from a 30 nm-thick metal film. Maximum fringe visibility exceeded 0.9 owing to the nano-sized probe and the choice of single-crystal prism material. A full near-field analysis is necessary to describe the fringe field intensities, and the transverse coherence lengths were extracted at APS beamline 8-ID-I. It is also shown that the maximum number of fringes depends only on the complex refractive index of the prism material.
1981-01-01
quadrangle. a. Drainage Area: 0.99 square mile. b. Discharge at Dam Site (cfs): Maximum known flood at dam site: Unknown. Outlet conduit at maximum pool...located near the left abutment. The capacity of the spillway was determined to be 35 cfs, based on the available 1.1-foot freeboard relative to the lov...peak flows of 3014 and 1507 cfs for full and 50 percent of PMF, respectively. Computer input and summary of computer output are also included in
Higashiguchi, Takeshi; Hamada, Masaya; Kubodera, Shoichi
2007-03-01
A regenerative tin liquid microjet target was developed for a high average power extreme ultraviolet (EUV) source. The diameter of the target was smaller than 160 μm and a good vacuum lower than 0.5 Pa was maintained during the operation. A maximum EUV conversion efficiency of 1.8% at an Nd:yttrium-aluminum-garnet laser intensity of around 2×10^11 W/cm^2 with a spot diameter of 175 μm (full width at half maximum) was observed. The angular distribution of the EUV emission remained almost isotropic, whereas suprathermal ions mainly emerged toward the target normal.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
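The Gini criterion described here can be sketched in simplified form: for each candidate maximum reported cluster size, build a Lorenz-type curve from the reported clusters and prefer the size giving the larger Gini coefficient. The cluster counts below are invented for illustration:

```python
def gini_from_clusters(observed, expected, total_obs, total_exp):
    """Simplified Gini criterion sketch: sort reported clusters by
    relative risk, build a Lorenz-type curve of cumulative observed cases
    versus cumulative expected counts, and return twice the area between
    the curve and the diagonal."""
    pairs = sorted(zip(observed, expected), key=lambda c: c[0] / c[1], reverse=True)
    pairs.append((total_obs - sum(observed), total_exp - sum(expected)))  # rest of region
    x = y = area = 0.0
    for obs, exp in pairs:                      # trapezoid rule under the curve
        x1, y1 = x + exp / total_exp, y + obs / total_obs
        area += (x1 - x) * (y + y1) / 2.0
        x, y = x1, y1
    return 2.0 * area - 1.0

# Two candidate maximum reported cluster sizes for the same data:
# a tight high-risk cluster versus a diluted, oversized one.
tight = gini_from_clusters([60], [30], total_obs=200, total_exp=200)
loose = gini_from_clusters([90], [80], total_obs=200, total_exp=200)
```

The tighter reporting concentrates excess cases in less of the study region, so it scores a larger Gini coefficient and would be the preferred maximum reported cluster size under this criterion.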
Weighing conservation objectives: maximum expected coverage versus endangered species protection
Jeffrey L. Arthur; Jeffrey D. Camm; Robert G. Haight; Claire A. Montgomery; Stephen Polasky
2004-01-01
Decision makers involved in land acquisition and protection often have multiple conservation objectives and are uncertain about the occurrence of species or other features in candidate sites. Models informing decisions on the selection of sites for reserves need to provide information about cost-efficient trade-offs between objectives and account for incidence uncertainty...
Do advertisements help in the appointment of a new partner?
Higson, N
1985-01-01
Seventy-five advertisements placed in five consecutive issues of the BMJ by general practices advertising vacancies for doctors were analysed for the amount of information given. Fifteen pieces of information were sought and scored. The maximum score was 20, and one advertisement had no indication of how to reply. PMID:3917326
Skills and Knowledge Needed to Serve as Mobile Technology Consultants for Information Organizations
ERIC Educational Resources Information Center
Potnis, Devendra; Regenstreif-Harms, Reynard; Deosthali, Kanchan; Cortez, Ed; Allard, Suzie
2016-01-01
Libraries often lack the in-house information technology (IT) expertise required to (1) implement mobile applications and related technologies (MAT); (2) attain maximum return on investment including patron satisfaction for using MAT; and (3) reduce reliance on expensive IT consultants. Based on secondary analysis of the experiences and advice…
40 CFR 370.42 - What is Tier II inventory information?
Code of Federal Regulations, 2010 CFR
2010-07-01
... phone number where such emergency information will be available 24 hours a day, every day. (h) An... the hazardous chemical present at your facility on any single day during the preceding calendar year... range codes are in § 370.43. (7) The maximum number of days that the hazardous chemical was present at...
Marketing and Distributive Education. General Retail Merchandising Curriculum Guide.
ERIC Educational Resources Information Center
Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.
This document is one of four curriculum guides designed to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing as well as to provide marketing and distributive education teachers with maximum flexibility. Introductory information provides directions for using the guide and information on…
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2011 CFR
2011-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2011 CFR
2011-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2012 CFR
2012-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2953 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2010 CFR
2010-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2012 CFR
2012-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
40 CFR 60.2195 - What information must I submit prior to initial startup?
Code of Federal Regulations, 2010 CFR
2010-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
77 FR 32884 - Airworthiness Directives; Eurocopter Deutschland GMBH Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-04
... than the engine fuel flow demand needed to achieve the OEI rating at high altitude. They state that... above 10,000 feet. This condition could result in high altitude operations when full OEI engine power is... installing a placard that corresponds to the maximum permissible flight altitude, amending the Rotorcraft...
30 CFR 75.825 - Power centers.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., and be designed and installed as follows: (1) Rated for the maximum phase-to-phase voltage of the circuit; (2) Rated for the full-load current of the circuit that is supplied power through the device. (3... current of the circuit or causes the current to be interrupted automatically before the disconnecting...
43 CFR 418.38 - Maximum allowable diversion.
Code of Federal Regulations, 2010 CFR
2010-10-01
... water right holder the full water entitlement for irrigable eligible acres and includes distribution... deliveries at farm headgates have been approximately 90 percent of entitlements. This practice is expected to... efficiency target for the examples shown in the Newlands Project Water Budget table would be: 285,243 AF and...
43 CFR 418.38 - Maximum allowable diversion.
Code of Federal Regulations, 2011 CFR
2011-10-01
... water right holder the full water entitlement for irrigable eligible acres and includes distribution... deliveries at farm headgates have been approximately 90 percent of entitlements. This practice is expected to... efficiency target for the examples shown in the Newlands Project Water Budget table would be: 285,243 AF and...
Oxygen requirements of separated hybrid catfish female Ictalurus punctatus male I. furcatus eggs
USDA-ARS?s Scientific Manuscript database
Channel catfish Ictalurus punctatus egg masses require ambient water with over 95% air saturation to maintain maximum oxygen consumption as they near hatch. Since hybrid catfish eggs (channel catfish ' X blue catfish I. furcatus ') are often kept separated after fertilization by the addition of full...
Electrical resistance determination of actual contact area of cold welded metal joints
NASA Technical Reports Server (NTRS)
Hordon, M. J.
1970-01-01
Method measures the area of the bonded zone of a compression weld by observing the electrical resistance of the weld zone while the load changes from full compression until the joint ruptures under tension. The ratio of bonding force to maximum tensile load varies considerably.
45 CFR 2551.25 - What are a sponsor's administrative responsibilities?
Code of Federal Regulations, 2014 CFR
2014-10-01
... project and carry out its project management responsibilities. (c) Employ a full-time project director to... the sponsor organization and/or project service area. (f) Establish risk management policies and... responsibility for securing maximum and continuing community financial and in-kind support to operate the project...
45 CFR 2551.25 - What are a sponsor's administrative responsibilities?
Code of Federal Regulations, 2012 CFR
2012-10-01
... project and carry out its project management responsibilities. (c) Employ a full-time project director to... the sponsor organization and/or project service area. (f) Establish risk management policies and... responsibility for securing maximum and continuing community financial and in-kind support to operate the project...
45 CFR 2551.25 - What are a sponsor's administrative responsibilities?
Code of Federal Regulations, 2013 CFR
2013-10-01
... project and carry out its project management responsibilities. (c) Employ a full-time project director to... the sponsor organization and/or project service area. (f) Establish risk management policies and... responsibility for securing maximum and continuing community financial and in-kind support to operate the project...
45 CFR 2551.25 - What are a sponsor's administrative responsibilities?
Code of Federal Regulations, 2011 CFR
2011-10-01
... project and carry out its project management responsibilities. (c) Employ a full-time project director to... the sponsor organization and/or project service area. (f) Establish risk management policies and... responsibility for securing maximum and continuing community financial and in-kind support to operate the project...
Carbonaceous Aerosols Emitted from Light-Duty Vehicles Operating on Ethanol Fuel Blends
Air pollution is among the many environmental and public health concerns associated with increased ethanol use in vehicles. Jacobson [2007] showed for the U.S. market that full conversion to E85 (85% ethanol, 15% gasoline), the maximum standard blend used in modern dual fuel veh...
Preschool Teachers' Exposure to Classroom Noise
ERIC Educational Resources Information Center
Grebennikov, Leonid
2006-01-01
This research examined exposure to classroom noise of 25 full-time teaching staff in 14 preschool settings located across Western Sydney. The results indicated that one teacher exceeded the maximum permissible level of daily noise exposure for employees under the health and safety legislation. Three staff approached this level and 92% of teachers…
High-resolution spectrometer/interferometer
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.; Norton, R. H.; Schindler, R. A.
1980-01-01
Modified double-pass interferometer has several features that maximize its resolution. Proposed for rocket-borne probes of upper atmosphere, it includes cat's-eye retroreflectors in both arms, wedge-shaped beam splitter, and wedged optical-path compensator. Advantages are full tilt compensation, minimal spectrum "channeling," easy tunability, maximum fringe contrast, and even two-sided interferograms.
ARSENIC REMOVAL FROM DRINKING WATER BY ACTIVATED ALUMINA AND ANION EXCHANGE TREATMENT
In preparation of the U.S. Environmental Protection Agency (USEPA) revising the arsenic maximum contaminant level (MCL) in the year 2001, a project was initiated to evaluate the performance of nine, full-scale drinking water treatment plants for arsenic removal. Four of these sy...
Feasibility of Helicopter Support Seek Frost.
1980-05-01
the allowable maximum weight can be used as the payload. The payload is a variable. Small helicopters with full fuel and auxiliary tanks can fly...equipment, that the program to obtain icing approval on the S-76 will be finalized for management evaluation, and a decision can be made at that time to
ERIC Educational Resources Information Center
DeMars, Christine E.
2012-01-01
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.
2016-01-01
Information from various public and private data sources of extremely large sample sizes is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323
Maximum power point tracker for photovoltaic power plants
NASA Astrophysics Data System (ADS)
Arcidiacono, V.; Corsi, S.; Lambri, L.
The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
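The abstract does not spell out the two closed-loop criteria, so as a minimal illustration of what any maximum power point tracker does, the sketch below implements the generic perturb-and-observe loop on an assumed toy power-voltage curve; the curve shape, step size, and 17 V optimum are illustrative assumptions, not the paper's controllers.

```python
# Hedged sketch: a generic perturb-and-observe MPPT loop, NOT the two
# closed-loop criteria described in the paper. The PV curve model and
# step size are illustrative assumptions.

def pv_power(v):
    """Toy photovoltaic power curve with a single maximum near v = 17.0 V."""
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)

def perturb_and_observe(v0=10.0, step=0.1, iterations=500):
    """Climb the P-V curve: keep stepping in the direction that raised power."""
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))  # oscillates near the 17.0 V maximum power point
```

In steady state the operating point hunts around the maximum within one step size, which is why real trackers (including the converters the paper describes) trade step size against tracking speed.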
20 CFR 653.503 - Field checks.
Code of Federal Regulations, 2014 CFR
2014-04-01
... to appropriate enforcement agencies in writing. (b) State agencies, to the maximum extent possible... enforcement agencies. State agencies shall report difficulties in making such formal or informal arrangements...
20 CFR 653.503 - Field checks.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to appropriate enforcement agencies in writing. (b) State agencies, to the maximum extent possible... enforcement agencies. State agencies shall report difficulties in making such formal or informal arrangements...
20 CFR 653.503 - Field checks.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to appropriate enforcement agencies in writing. (b) State agencies, to the maximum extent possible... enforcement agencies. State agencies shall report difficulties in making such formal or informal arrangements...
20 CFR 653.503 - Field checks.
Code of Federal Regulations, 2013 CFR
2013-04-01
... to appropriate enforcement agencies in writing. (b) State agencies, to the maximum extent possible... enforcement agencies. State agencies shall report difficulties in making such formal or informal arrangements...
48 CFR 42.1106 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... that information essential to Government needs and shall take maximum advantage of data output... contractor's report-preparation system or by individual review of each report. (c) The contract...
Dry Cleaning Facilities: National Perchloroethylene Air Emission Standards
Learn about the Maximum Achievable Control Technology (MACT) standards for dry cleaning facilities. Find the rule history information, federal register citations, legal authority, and additional resources.
Development of a full-text information retrieval system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keizo Oyama; Akira Miyazawa; Atsuhiro Takasu; Kouji Shibano
The authors have carried out a project to realize a full-text information retrieval system. The system is designed to handle a document database comprising the full text of a large number of documents, such as academic papers. Document structures are utilized in searching and in extracting appropriate information. The concept of structure handling and the configuration of the system are described in this paper.
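The core of any full-text retrieval system is an inverted index mapping terms to documents; the structure-aware search the paper describes is far richer, but a minimal sketch of that basic lookup, with an illustrative toy corpus, looks like this:

```python
# Minimal sketch of the heart of a full-text retrieval system: an inverted
# index mapping terms to document ids. The toy documents are illustrative;
# the paper's structure handling is not modeled here.
from collections import defaultdict

def build_index(docs):
    """Map each lowercase term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Documents containing ALL query terms (conjunctive full-text search)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

docs = {1: "full text retrieval of academic papers",
        2: "image retrieval by content",
        3: "full text of structured documents"}
index = build_index(docs)
print(sorted(search(index, "full", "text")))  # → [1, 3]
```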
A revolution in Distributed Virtual Globes creation with e-CORCE space program
NASA Astrophysics Data System (ADS)
Antikidis, Jean-Pierre
2010-05-01
Space applications today take part in our everyday life in a continuous and, most of the time, invisible way. Meteorology, telecommunications and, more recently, GPS-driven applications are fully part of our modern and comfortable way of life. A new revolution is now underway through which space remote sensing technology will make the full Earth available in digital form. Present requirements for creating a high-resolution digital Earth are pushing space technology to a new frontier that could be called the "1 day to 1 week, 1 meter, 1 Earth" challenge. The e-CORCE vision (e-Constellation d'Observation Recurrente Cellulaire) relies on a completely new avenue: creating a full virtual Earth with the help of a small-satellite constellation operated as sensors connected to a powerful Internet-based ground network. To handle this extremely large quantity of information (10,000 billion metric pixels), maximum use of psycho-visual compression, over-simplified platforms treated as space IP nodes, and a massive worldwide grid-based system composed of more than 40 receiving and processing nodes is contemplated. The presentation will introduce the technological hurdles and the way modern upcoming cyber-infrastructure technologies called WAG (Wide Area Grid) may open a practical and economically sound solution to this never-attempted challenge.
Lovell, A. E.; Nunes, F. M.; Thompson, I. J.
2017-03-10
While diproton emission was first theorized in 1960 and first measured in 2002, it was first observed only in 2012. The measurement of 14Be in coincidence with two neutrons suggests that 16Be does decay through the simultaneous emission of two strongly correlated neutrons. In this study, we construct a full three-body model of 16Be (as 14Be + n + n) in order to investigate its configuration in the continuum and, in particular, the structure of its ground state. Here, in order to describe the three-body system, effective n–14Be potentials were constructed, constrained by the experimental information on 15Be. The hyperspherical R-matrix method was used to solve the three-body scattering problem, and the resonance energy of 16Be was extracted from a phase-shift analysis. As a result, in order to reproduce the experimental resonance energy of 16Be within this three-body model, a three-body interaction was needed. For extracting the width of the ground state of 16Be, we use the full width at half maximum of the derivative of the three-body eigenphase shifts and the width of the three-body elastic scattering cross section. In conclusion, our results confirm a dineutron structure for 16Be, dependent on the internal structure of the subsystem 15Be.
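The width-extraction idea mentioned in the abstract (FWHM of the derivative of the phase shift) can be illustrated on a single-channel Breit-Wigner resonance, where dδ/dE is a Lorentzian whose full width at half maximum equals the resonance width Γ; the resonance parameters below are illustrative assumptions, not the 16Be eigenphases.

```python
# Hedged sketch: for a Breit-Wigner phase shift, d(delta)/dE is a Lorentzian
# whose FWHM equals the resonance width Gamma. E_r and Gamma are assumed
# values for illustration only.
import math

E_r, Gamma = 1.3, 0.2       # resonance energy and width (MeV), assumed

def ddelta_dE(E):
    """Analytic derivative of delta(E) = atan((Gamma/2) / (E_r - E))."""
    half = Gamma / 2.0
    return half / ((E - E_r) ** 2 + half ** 2)

peak = ddelta_dE(E_r)
# scan for the energies where the derivative falls to half its peak value
grid = [E_r - 1.0 + i * 1e-4 for i in range(20001)]
above = [E for E in grid if ddelta_dE(E) >= peak / 2.0]
fwhm = above[-1] - above[0]
print(round(fwhm, 3))  # recovers Gamma = 0.2
```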
Zalesny, Jill A; Zalesny, Ronald S
2009-07-01
There is a need for information about the response of Populus genotypes to repeated application of high-salinity water and nutrient sources throughout an entire rotation. We have combined establishment biomass and uptake data with mid- and full-rotation growth data to project potential chloride (Cl-) and sodium (Na+) uptake for 2- to 11-year-old Populus in the north central United States. Our objectives were to identify potential levels of uptake as the trees developed and stages of plantation development that are conducive to variable application rates of high-salinity irrigation. The projected cumulative uptake of Cl- and Na+ during mid-rotation plantation development was stable 2 to 3 years after planting but increased steadily from year 3 to 6. Year six cumulative uptake ranged from 22 to 175 kg Cl- ha(-1) and 8 to 74 kg Na+ ha(-1), while annual uptake ranged from 8 to 54 kg Cl- ha(-1) yr(-1) and 3 to 23 kg Na+ ha(-1) yr(-1). Full-rotation uptake was greatest from 4 to 9 years (Cl-) and 4 to 8 years (Na+), with maximum levels of Cl- (32 kg ha(-1) yr(-1)) and Na+ (13 kg ha(-1) yr(-1)) occurring in year six. The relative uptake potential of Cl- and Na+ at peak accumulation (year six) was 2.7 times greater than at the end of the rotation.
Yuan, Fu-song; Sun, Yu-chun; Xie, Xiao-yan; Wang, Yong; Lv, Pei-jun
2013-12-18
To quantitatively evaluate the appearance of artifacts produced by eight kinds of common dental restorative materials, such as zirconia. For a full-crown preparation of the mandibular first molar, eight kinds of full crowns were fabricated: zirconia all-ceramic crown, glass ceramic crown, ceramage crown, Au-Pt based porcelain-fused-to-metal (PFM) crown, pure titanium PFM crown, Co-Cr PFM crown, Ni-Cr PFM crown, and Au-Pd metal crown. Natural teeth in vitro were used as controls. The full crowns and natural teeth were mounted on an ultraviolet-curable resin fixed plate. High-resolution cone beam computed tomography (CBCT) was used to scan all of the crowns and natural teeth, and the DICOM data were imported into the software MIMICS 10.0. The number of stripes and the maximum diameters of artifacts around the full crowns were then evaluated quantitatively in two-dimensional tomography images. In these images, no artifacts appeared around the natural teeth, the glass ceramic crown, or the ceramage crown, but artifacts did appear around the zirconia all-ceramic and metal crowns. The number of artifact stripes was five to nine per crown, and the maximum diameters of the artifacts were 2.4 to 2.6 cm and 2.2 to 2.7 cm. In the two-dimensional CBCT images, stripe-like and radial artifacts arose around the zirconia all-ceramic crown and the metal-based PFM crowns; these artifacts could greatly lower the imaging quality of the full-crown shape. No artifacts arose around the natural teeth, the glass ceramic crown, or the ceramage crown.
Full-thickness tears of the supraspinatus tendon: A three-dimensional finite element analysis.
Quental, C; Folgado, J; Monteiro, J; Sarmento, M
2016-12-08
Knowledge regarding the likelihood of propagation of supraspinatus tears is important to allow an early identification of patients for whom a conservative treatment is more likely to fail, and consequently, to improve their clinical outcome. The aim of this study was to investigate the potential for propagation of posterior, central, and anterior full-thickness tears of different sizes using the finite element method. A three-dimensional finite element model of the supraspinatus tendon was generated from the Visible Human Project data. The mechanical behaviour of the tendon was fitted from experimental data using a transversely isotropic hyperelastic constitutive model. The full-thickness tears were simulated at the supraspinatus tendon insertion by decreasing the interface area. Tear sizes from 10% to 90%, in 10% increments, of the anteroposterior length of the supraspinatus footprint were considered in the posterior, central, and anterior regions of the tendon. For each tear, three finite element analyses were performed for a supraspinatus force of 100N, 200N, and 400N. Considering a correlation between tendon strain and the risk of tear propagation, the simulated tears were compared qualitatively and quantitatively by evaluating the volume of tendon for which a maximum strain criterion was not satisfied. The finite element analyses showed a significant impact of tear size and location not only on the magnitude, but also on the patterns of the maximum principal strains. The mechanical outcome of the anterior full-thickness tears was consistently, and significantly, more severe than that of the central or posterior full-thickness tears, which suggests that the anterior tears are at greater risk of propagating than the central or posterior tears. Copyright © 2016 Elsevier Ltd. All rights reserved.
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Lossless Brownian Information Engine
NASA Astrophysics Data System (ADS)
Paneru, Govind; Lee, Dong Yun; Tlusty, Tsvi; Pak, Hyuk Kyu
2018-01-01
We report on a lossless information engine that converts nearly all available information from an error-free feedback protocol into mechanical work. Combining high-precision detection at a resolution of 1 nm with ultrafast feedback control, the engine is tuned to extract the maximum work from information on the position of a Brownian particle. We show that the work produced by the engine achieves a bound set by a generalized second law of thermodynamics, demonstrating for the first time the sharpness of this bound. We validate a generalized Jarzynski equality for error-free feedback-controlled information engines.
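The "bound set by a generalized second law" referenced above limits the work extractable through feedback to k_B·T times the mutual information gained; as a worked illustration under assumed room-temperature conditions, one error-free bit of positional information bounds the work at about 2.87 zJ.

```python
# Hedged illustration of the generalized second-law bound: extractable work
# from feedback is at most k_B * T * I, with I the information gained in
# nats. Temperature and the one-bit measurement are illustrative assumptions.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K
I_bits = 1.0                # one error-free bit of positional information
I_nats = I_bits * math.log(2.0)
w_max = k_B * T * I_nats    # the k_B T ln 2 bound per measurement
print(w_max)                # ≈ 2.87e-21 J
```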
Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data
2007-04-01
(10) where B_s(t) · B(t) = B_sx(t)B_x(t) + B_sy(t)B_y(t) + B_sz(t)B_z(t). (11) The operation in (10) may be viewed as a vector matched filter to estimate B_CPAR(t). In summary... choosing features to maximize the classification information in Y is described in Section 3.2. 3.2. Maximum mutual information (MMI) features. We begin with a review of several desirable properties of features that maximize a mutual information (MMI) criterion. Then we review a particular algorithm [2
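The MMI criterion sketched above ranks candidate features by their mutual information with the class label and keeps the maximizer; the toy discrete dataset below is an illustrative assumption, not the report's magnetic/acoustic features.

```python
# Hedged sketch of maximum mutual information (MMI) feature selection:
# rank discrete features by I(feature; label) and keep the best. The toy
# data are illustrative assumptions.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), written with counts
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

labels = [0, 0, 0, 0, 1, 1, 1, 1]
feat_a = [0, 0, 0, 0, 1, 1, 1, 1]   # perfectly informative: I = 1 bit
feat_b = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label: I = 0
best = max([("a", feat_a), ("b", feat_b)],
           key=lambda f: mutual_information(f[1], labels))[0]
print(best)  # → a
```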
Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23
NASA Technical Reports Server (NTRS)
Willson, Robert M.; Hathaway, David H.
2004-01-01
On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 months following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
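The slope arithmetic stated in the abstract checks out directly: the rise slope is the amplitude range divided by the ascent duration.

```python
# Worked check of the rise-slope arithmetic from the abstract:
# slope = (maximum amplitude - minimum amplitude) / ascent duration.
max_amp = 120.8     # conventional maximum amplitude of cycle 23
min_amp = 8.0       # conventional minimum amplitude
ascent_months = 47  # ascent duration in months
slope = (max_amp - min_amp) / ascent_months
print(round(slope, 1))  # → 2.4
```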
Gamma-ray spectroscopy and pulse shape discrimination with a plastic scintillator
NASA Astrophysics Data System (ADS)
van Loef, E.; Markosyan, G.; Shirwadkar, U.; McClish, M.; Shah, K.
2015-07-01
The scintillation properties of a novel plastic scintillator loaded with an organolead compound are presented. Under X-ray and gamma-ray excitation, emission is observed peaking at 435 nm. The scintillation light output is 9000 ph/MeV. An energy resolution (full width at half maximum over the peak position) of about 16% was observed for the 662 keV full absorption peak. Excellent pulse shape discrimination between neutrons and gamma-rays with a Figure of Merit of 2.6 at 1 MeVee was observed.
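The quoted figure of merit, "energy resolution = FWHM over peak position," is easy to make concrete for a Gaussian photopeak; the 662 keV peak and 16% resolution come from the abstract, while the Gaussian shape (FWHM ≈ 2.355 σ) is an illustrative assumption.

```python
# Hedged sketch: relating the reported 16% resolution at the 662 keV full
# absorption peak to a Gaussian peak's width. The Gaussian shape is an
# illustrative assumption.
import math

def fwhm_from_sigma(sigma):
    """Full width at half maximum of a Gaussian: 2*sqrt(2 ln 2)*sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma  # ≈ 2.355 * sigma

peak_kev = 662.0
resolution = 0.16                      # 16% at 662 keV, as reported
fwhm_kev = resolution * peak_kev       # width of the photopeak in keV
sigma_kev = fwhm_kev / (2.0 * math.sqrt(2.0 * math.log(2.0)))
print(round(fwhm_kev, 1), round(sigma_kev, 1))  # ≈ 105.9 keV, ≈ 45.0 keV
```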
Dogan, Soner; Kastelein, Johannes Jacob Pieter; Grobbee, Diederick Egbertus; Bots, Michiel Leonardus
2011-01-01
Carotid intima-media thickness (CIMT) measurements are used as a disease outcome in randomized controlled trials that assess the effects of lipid-modifying treatment. It is unclear whether common CIMT or mean maximum CIMT should be used as the primary outcome. We directly compared both measurements using aspects that are of great importance in deciding which is most favorable for use in clinical trials. A literature search was performed (PUBMED, up to March 31, 2008). Fifteen trials with lipid-modifying treatment were identified that had information on both outcome measures. Common CIMT and mean maximum CIMT were compared on reproducibility, strength of relation with LDL and HDL cholesterol, and congruency of their results (harm/neutral/beneficial) with data from event trials. Findings showed that the reported reproducibility was high for both measurements, although a direct comparison was not possible. The relationship between the achieved LDL-C and HDL-C levels and CIMT progression was modest and showed no difference in magnitude between CIMT measurements. CIMT progression rates differed across carotid segments, with the highest progression rates observed in the bifurcation segment. Treatment effects differed across carotid segments without a clear preference pattern. Trials using mean maximum CIMT progression more often (12 out of 15 studies) paralleled the findings of event trials than trials using mean common CIMT (11 out of 15 studies), a difference not reaching statistical significance. Based on the literature, with equal results for reproducibility (assumed), lipid relationship, and congruency with event findings, but with treatment effects that differ across carotid segments and cannot be predicted, mean maximum CIMT may be preferred as the primary outcome in trials on the impact of lipid-modifying interventions. One advantage is that information on mean common CIMT can generally be obtained easily in protocols assessing mean maximum CIMT, but not the other way around.
$250 million and the maximum grant funding is 50% of project costs. For more information, including current funding application deadlines, see the Biorefinery Assistance Program website. (Reference Public
Proportion estimation using prior cluster purities
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
The prior distribution of CLASSY component purities is studied, and this information is incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.
NASA Astrophysics Data System (ADS)
Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian
2018-01-01
Noise is a problem that communication channels cannot avoid. It is, thus, beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information-theoretic methods are used to analyze the security of the protocol. The maximum amount of information that Eve can obtain by eavesdropping is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. Therefore, the MDI-QKD protocol is secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results is 16% when eavesdropping is present. Moreover, the probability that the relay outputs successful results is higher with eavesdropping than without it. The paper validates that the MDI-QKD protocol has good robustness.
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a discrete-time approach. The procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
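The core numerical idea, accumulating a Fisher information matrix from output sensitivities over a simulated experiment and checking that it is non-singular (i.e., the parameters are identifiable), can be sketched compactly; the simple exponential-growth model, noise level, and parameter values below are illustrative assumptions, not the ASM No. 1 model used in the paper.

```python
# Hedged sketch: build a Fisher information matrix numerically from output
# sensitivities (central differences) during a simulated experiment, then
# test identifiability via non-singularity. The toy model is an assumption.
import math

def simulate(mu, x0, times):
    """Toy biomass model: x(t) = x0 * exp(mu * t)."""
    return [x0 * math.exp(mu * t) for t in times]

def fisher_information(theta, times, sigma=0.05, h=1e-6):
    """FIM = sum over samples of J^T J / sigma^2, J by central differences."""
    n = len(theta)
    sens = []
    for i in range(n):
        up = list(theta); up[i] += h
        dn = list(theta); dn[i] -= h
        y_up = simulate(*up, times)
        y_dn = simulate(*dn, times)
        sens.append([(a - b) / (2 * h) for a, b in zip(y_up, y_dn)])
    fim = [[0.0] * n for _ in range(n)]
    for k in range(len(times)):
        for i in range(n):
            for j in range(n):
                fim[i][j] += sens[i][k] * sens[j][k] / sigma ** 2
    return fim

fim = fisher_information([0.2, 1.0], times=[0.0, 1.0, 2.0, 3.0])
# identifiability check: the 2x2 FIM must be non-singular
det = fim[0][0] * fim[1][1] - fim[0][1] * fim[1][0]
print(det > 0)  # → True: both parameters are identifiable from this design
```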
Designing an operator interface? Consider user's 'psychology'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toffer, D.E.
The modern operator interface is a channel of communication between operators and the plant that, ideally, provides them with the information necessary to keep the plant running at maximum efficiency. Advances in automation technology have increased information flow from the field to the screen. New and improved Supervisory Control and Data Acquisition (SCADA) packages provide designers with powerful and open design options. All too often, however, systems go to the field designed for the software rather than the operator. Plant operators' jobs have changed fundamentally, from controlling their plants from out in the field to doing so from within control rooms. Control room-based operation does not denote idleness. Trained operators should be engaged in examination of plant status and cognitive evaluation of plant efficiencies. Designers, who are often extremely computer literate, frequently do not consider the demographics of field operators. Many field operators have little knowledge of modern computer systems. As a result, they do not take full advantage of the interface's capabilities. Designers often fail to understand the true nature of how operators run their plants. To aid field operators, designers must provide familiar controls and intuitive choices. To achieve success in interface design, it is necessary to understand the ways in which humans think conceptually, and to understand how they process this information physically. The physical and the conceptual are closely related when working with any type of interface. Designers should ask themselves: "What type of information is useful to the field operator?" Let's explore an integration model that contains the following key elements: (1) easily navigated menus; (2) reduced chances for misunderstanding; (3) accurate representations of the plant or operation; (4) consistent and predictable operation; (5) a pleasant and engaging interface that conforms to the operator's expectations. 4 figs.
Zhao, Liming; Ouyang, Qi; Chen, Dengfu; Udupa, Jayaram K; Wang, Huiqian; Zeng, Yuebin
2014-11-01
To provide an accurate surface-defect inspection system and make automated, robust image segmentation a reality in a routine production line, a general approach is presented for continuous casting slab (CC-slab) surface defect extraction and delineation. The applicability of the system is not tied to CC-slabs exclusively. We combined line-array CCD (charge-coupled device) traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging) strategies in designing the system, with the aim of suppressing each imaging system's limitations. In the system, the images acquired from the two CCD sensors are carefully aligned in space and in time by a maximum mutual information-based registration scheme. Subsequently, the image information from the two subsystems is fused, such as the unbroken 2D information in LS-imaging and the 3D depression information in AL-imaging. Finally, on the basis of the established dual scanning imaging system, region-of-interest (ROI) localization by seed specification was designed, and delineation of the ROI by the iterative relative fuzzy connectedness (IRFC) algorithm was utilized to obtain a precise inspection result. Our method takes into account the complementary advantages of the two common machine vision (MV) systems, and it performs competitively with the state of the art, as seen from the comparison of experimental results. For the first time, a joint imaging scanning strategy is proposed for CC-slab surface defect inspection that allows powerful ROI delineation strategies to be applied to the MV inspection field. Multi-ROI delineation using IRFC in this research field may further improve the results.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low-excitation regime, where photon trajectories can be modeled as realizations of Markov-modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
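The BIC comparison described above penalizes each extra parameter by ln(n); as a minimal sketch of how that trade-off plays out, the example below scores a one-rate versus a two-rate Poisson model on synthetic counts. The data, rates, and known segmentation are illustrative assumptions, much simpler than the paper's Markov-modulated analysis.

```python
# Hedged sketch of BIC model selection: BIC = -2 ln L + k ln n, lower is
# better. The synthetic two-level counts and fixed segmentation are
# illustrative assumptions.
import math

def poisson_loglik(counts, rate):
    """Log-likelihood of i.i.d. Poisson counts at a single rate."""
    return sum(k * math.log(rate) - rate - math.lgamma(k + 1) for k in counts)

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: penalize parameters by ln(n)."""
    return -2.0 * loglik + n_params * math.log(n_obs)

# synthetic counts from two intensity levels (a crude two-state trajectory)
counts = [2] * 50 + [10] * 50

# one-state model: single rate fitted by its MLE (the sample mean)
rate = sum(counts) / len(counts)
bic1 = bic(poisson_loglik(counts, rate), 1, len(counts))

# two-state model: one rate per segment (segmentation assumed known here)
ll2 = poisson_loglik(counts[:50], 2.0) + poisson_loglik(counts[50:], 10.0)
bic2 = bic(ll2, 2, len(counts))

print(bic2 < bic1)  # the two-state model wins despite its extra parameter
```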
Streambed stresses and flow around bridge piers
Parola, A.C.; Ruhl, K.J.; Hagerty, D.J.; Brown, B.M.; Ford, D.L.; Korves, A.A.
1996-01-01
Scour of streambed material around bridge foundations by floodwaters is the leading cause of catastrophic bridge failure in the United States. The potential for scour and the stability of riprap used to protect the streambed from scour during extreme flood events must be known to evaluate the likelihood of bridge failure. A parameter used in estimating the potential for scour and removal of riprap protection is the time-averaged shear stress on the streambed often referred to as boundary stress. Bridge components, such as bridge piers and abutments, obstruct flow and induce strong vortex systems that create streambed or boundary stresses significantly higher than those in unobstructed flow. These locally high stresses can erode the streambed around pier and abutment foundations to the extent that the foundation is undermined, resulting in settlement or collapse of bridge spans. The purpose of this study was to estimate streambed stresses at a bridge pier under full-scale flow conditions and to compare these stresses with those obtained previously in small-scale model studies. Two-dimensional velocity data were collected for three flow conditions around a bridge pier at the Kentucky State Highway 417 bridge over the Green River at Greensburg in Green County, Ky. Velocity vector plots and the horizontal component of streambed stress contour plots were developed from the velocity data. The streambed stress contours were developed using both a near-bed velocity and velocity gradient method. Maximum near-bed velocities measured at the pier for the three flow conditions were 1.5, 1.6, and 2.0 times the average near-bed velocities measured in the upstream approach flow. Maximum streambed stresses for the three flow conditions were determined to be 10, 15, and 36 times the streambed stresses of the upstream approach flow. 
Both the near-bed velocity measurements and approximate maximum streambed stresses at the full-scale pier were consistent with those observed in experiments using small-scale models in which similar data were collected, except for a single observation of the near-bed velocity data and the corresponding streambed stress determination. The location of the maximum streambed stress was immediately downstream of a 90 degree radial of the upstream cylinder (with the center of the upstream cylinder being the origin) for the three flow conditions. This location was close to the flow wake separation point at the upstream cylinder. Other researchers have observed the maximum streambed stress around circular cylinders at this location or at a location immediately upstream of the wake separation point. Although the magnitudes of the estimated streambed stresses measured at the full-scale pier were consistent with those measured in small-scale model studies, the stress distributions were significantly different than those measured in small-scale models. The most significant discrepancies between stress contours developed in this study and those developed in the small-scale studies for flow around cylindrical piers on a flat streambed were associated with the shape of the stress contours. The extent of the high stress region of the streambed around the full-scale pier was substantially larger than the diameter of the upstream cylinder, while small-scale models had small regions compared to the diameter of the model cylinders. In addition, considerable asymmetry in the stress contours was observed. The large region of high stress and asymmetry was attributed to several factors including (1) the geometry of the full-scale pier, (2) the non-planar topography of the streambed, (3) the 20 degree skew of the pier to the approaching flow, and (4) the non-uniformity of the approach flow. 
The extent of the pier's effect on streambed stresses was found to be larger at the full-scale site than in the model studies. The results from the model studies indicated that the streambed stresses created by the obstruction of flow by the 3-foot wide pi
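The two stress-estimation approaches named in the abstract can be sketched as follows. This is an illustrative sketch only, not the report's actual procedure: the water density, von Karman constant, drag coefficient, and point velocities below are assumed values, and the report's calibration details are not reproduced.

```python
import math

RHO = 1000.0   # water density, kg/m^3 (assumed)
KAPPA = 0.41   # von Karman constant

def tau_near_bed(u_b, c_d=0.003):
    """Near-bed velocity method: quadratic stress law tau = rho*Cd*u_b^2.
    The drag coefficient value is a placeholder, not from the report."""
    return RHO * c_d * u_b ** 2

def tau_velocity_gradient(u1, z1, u2, z2):
    """Velocity-gradient method: fit the shear velocity u* to the log-law
    u(z) = (u*/kappa)*ln(z/z0) from two point velocities at heights
    z1 < z2 above the bed, then tau = rho*u*^2."""
    u_star = KAPPA * (u2 - u1) / math.log(z2 / z1)
    return RHO * u_star ** 2

# Hypothetical point velocities: 0.6 m/s at 0.05 m and 0.9 m/s at 0.50 m
print(tau_near_bed(0.6))                          # boundary stress, Pa
print(tau_velocity_gradient(0.6, 0.05, 0.9, 0.50))
```

With these assumed inputs, the two methods return boundary stresses of the same order (a few pascals), which is why either can serve as the basis for the stress contours described above.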
ERIC Educational Resources Information Center
Sahin, Alper; Ozbasi, Durmus
2017-01-01
Purpose: This study aims to reveal effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
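Fisher's maximum information (FMI) selection, one of the two methods compared in this study, can be sketched for the two-parameter logistic (2PL) model: at each step the item with the largest Fisher information at the current ability estimate is administered. The mini item bank and parameter values below are hypothetical.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P),
    where P is the probability of a correct response."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_fmi(theta_hat, bank):
    """FMI: administer the item whose information is largest at the
    current ability estimate theta_hat."""
    return max(bank, key=lambda item: item_information(theta_hat, *item))

# Hypothetical mini-bank of (a, b) = (discrimination, difficulty) pairs
bank = [(1.0, -1.0), (1.5, 0.2), (0.8, 0.0), (2.0, 1.5)]
print(select_fmi(0.0, bank))  # the item most informative at theta = 0
```

Likelihood weighted information (LWI) differs by integrating each item's information over the likelihood of theta rather than evaluating it at a point estimate; that weighting step is omitted here.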
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... responses per Total annual Average burden Total hours respondents respondent responses per response Pretest... and maintenance costs associated with this collection of information. ERG will conduct a pretest of... complete the pretest, for a total of a maximum of 7.5 hours. We estimate that up to 135...
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... demo video showing your application in action. Post videos to video-sharing sites like YouTube. (3... maximum of 10 slides. We strongly recommend you explain how you addressed the evaluation criteria and the... Use--20% Are you able to search for information easily? Is the requested information, both text and...
Adaptive Statistical Language Modeling: A Maximum Entropy Approach
1994-04-19
models exploit the immediate past only. To extract information from further back in the document’s history, I use trigger pairs as the basic information… [table-of-contents fragments: 2.2 Context-Free Estimation (Unigram); 2.3 Short-Term History (Conventional N-gram); 2.4 Short-Term Class History (Class-Based N-gram); 2.5 Intermediate Distance]
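A trigger pair (A -> B) is a pair of words such that seeing A earlier in a document raises the probability of B appearing later. Candidate pairs are commonly screened by a mutual-information criterion; the sketch below uses simple document-level pointwise mutual information on a toy corpus, which is an illustration of the idea rather than the thesis's actual procedure.

```python
import math

# Toy corpus: each "document" is a list of words (hypothetical data)
docs = [
    "the stock market fell and the stock price dropped".split(),
    "the bank raised rates and the stock price rose".split(),
    "dogs and cats are pets and dogs bark".split(),
]

def pmi(a, b, docs):
    """Document-level pointwise mutual information for a candidate
    trigger pair (a -> b): positive when seeing `a` in a document
    makes `b` more likely than its base rate."""
    n = len(docs)
    p_a = sum(a in d for d in docs) / n
    p_b = sum(b in d for d in docs) / n
    p_ab = sum((a in d) and (b in d) for d in docs) / n
    if p_ab == 0.0:
        return float("-inf")
    return math.log(p_ab / (p_a * p_b))

print(pmi("stock", "market", docs))  # positive: the words co-occur
print(pmi("stock", "dogs", docs))    # -inf: they never co-occur
```

High-scoring pairs become features in the maximum entropy model, which combines them with the short-term n-gram evidence listed in the table of contents.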
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-06
..., with aggregate size and last sale information, subscribers to ToM will also receive: Opening imbalance... information. \\3\\ Where there is an imbalance at the price at which the maximum number of contracts can trade... Timer or Imbalance Timer expires if material conditions of the market (imbalance size, ABBO price or...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-08
..., Operation Sail, Inc., is planning to publish information on the event in local newspapers, internet sites... areas for viewing the ``Parade of Sail'' have been established to allow for maximum use of the waterways... sponsoring organization, Operation Sail, Inc., is planning to publish information of the event in local...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.
ERIC Educational Resources Information Center
Samejima, Fumiko
The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…
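The contrast drawn above, between maximum likelihood estimation and Bayesian estimation whose prior is the density of the population the individual belongs to, can be illustrated with a grid-based sketch under the Rasch model. The item difficulties, response pattern, and standard normal prior below are assumptions for illustration, not Samejima's actual design.

```python
import math

def log_lik(theta, responses):
    """Rasch log-likelihood of responses [(difficulty, 0-or-1), ...]."""
    ll = 0.0
    for b, u in responses:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        ll += math.log(p) if u else math.log(1.0 - p)
    return ll

GRID = [-4.0 + 8.0 * i / 800 for i in range(801)]  # theta grid, step 0.01

def mle(responses):
    """Maximum likelihood ability estimate by grid search."""
    return max(GRID, key=lambda t: log_lik(t, responses))

def eap(responses):
    """Bayesian (EAP) estimate: posterior mean under a standard normal
    prior, playing the role of the population density."""
    w = [math.exp(log_lik(t, responses) - t * t / 2.0) for t in GRID]
    return sum(t * wi for t, wi in zip(GRID, w)) / sum(w)

# Hypothetical responses: correct on difficulties -1, 0, 1; wrong on 0.5
resp = [(-1.0, 1), (0.0, 1), (1.0, 1), (0.5, 0)]
print(mle(resp), eap(resp))  # the EAP is pulled toward the prior mean 0
```

The shrinkage of the Bayesian estimate toward the prior mean is exactly the effect of prior information that the report examines.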
30 CFR 250.1712 - What information must I submit before I permanently plug a well or zone?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Recent well test data and pressure data, if available; (c) Maximum possible surface pressure, and how it... permanently plug a well or zone? 250.1712 Section 250.1712 Mineral Resources MINERALS MANAGEMENT SERVICE... Decommissioning Activities Permanently Plugging Wells § 250.1712 What information must I submit before I...
Guaranteed convergence of the Hough transform
NASA Astrophysics Data System (ADS)
Soffer, Menashe; Kiryati, Nahum
1995-01-01
The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially the question of how fine the parameter space quantization must be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
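The voting-and-exhaustive-search scheme described above can be sketched as follows. The grid resolutions and test points are arbitrary, and the discrete bin counts stand in for the paper's continuous voting kernel.

```python
import math

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Straight-line Hough transform with the normal parameterization
    rho = x*cos(theta) + y*sin(theta). Each point votes for every
    quantized (theta, rho) cell it is consistent with; the global
    maximum of the accumulator is found by exhaustive grid search."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = round((rho + rho_max) * (n_rho - 1) / (2 * rho_max))
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    best_ti, best_ri = max(
        ((t, r) for t in range(n_theta) for r in range(n_rho)),
        key=lambda cell: acc[cell[0]][cell[1]],
    )
    theta = math.pi * best_ti / n_theta
    rho = 2 * rho_max * best_ri / (n_rho - 1) - rho_max
    return theta, rho, acc[best_ti][best_ri]

# Ten collinear points on the line y = x (theta = 3*pi/4, rho = 0)
pts = [(10 * i, 10 * i) for i in range(10)]
theta, rho, votes = hough_lines(pts)
print(theta, rho, votes)
```

The paper's question is visible in this sketch: if `n_theta` or `n_rho` is too coarse, votes from the true line smear across neighboring cells and the global maximum can be missed, which is why quantization intervals guaranteeing convergence matter.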