ERIC Educational Resources Information Center
McCaffrey, Daniel F.
2012-01-01
Value-added models have caught the interest of policymakers because, unlike using student test scores for other means of accountability, they purport to "level the playing field." That is, they supposedly reflect only a teacher's effectiveness, not whether she teaches high- or low-income students, for instance, or students in accelerated or…
ERIC Educational Resources Information Center
Harris, Douglas N.; Anderson, Andrew
2013-01-01
There is a growing body of research on the validity and reliability of value-added measures, but most of this research has focused on elementary grades. Driven by several federal initiatives such as Race to the Top, Teacher Incentive Fund, and ESEA waivers, however, many states have incorporated value-added measures into the evaluations not only…
ERIC Educational Resources Information Center
Loeb, Susanna
2013-01-01
The question for this brief is whether education leaders can use value-added measures as tools for improving schooling and, if so, how to do this. Districts, states, and schools can, at least in theory, generate gains in educational outcomes for students using value-added measures in three ways: creating information on effective programs, making…
Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2
ERIC Educational Resources Information Center
Croft, Michelle; Buddin, Richard
2014-01-01
As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…
ERIC Educational Resources Information Center
Raudenbush, Stephen
2013-01-01
This brief considers the problem of using value-added scores to compare teachers who work in different schools. The author focuses on whether such comparisons can be regarded as fair, or, in statistical language, "unbiased." An unbiased measure does not systematically favor teachers because of the backgrounds of the students they are…
ERIC Educational Resources Information Center
Harris, Douglas N.
2012-01-01
In the recent drive to revamp teacher evaluation and accountability, measures of a teacher's value added have played the starring role. But the star of the show is not always the best actor, nor can the star succeed without a strong supporting cast. In assessing teacher performance, observations of classroom practice, portfolios of teachers' work,…
ERIC Educational Resources Information Center
Goldhaber, Dan; Theobald, Roddy
2012-01-01
There are good reasons for re-thinking teacher evaluation. Evaluation systems in most school districts appear to be far from rigorous. A recent study showed that more than 99 percent of teachers in a number of districts were rated "satisfactory," which does not comport with empirical evidence that teachers differ substantially from each…
ERIC Educational Resources Information Center
Goldhaber, Dan
2013-01-01
Teacher training programs are increasingly being held under the microscope. Perhaps the most notable of recent calls to reform was the 2009 declaration by U.S. Education Secretary Arne Duncan that "by almost any standard, many if not most of the nation's 1,450 schools, colleges, and departments of education are doing a mediocre job of…
ERIC Educational Resources Information Center
Corcoran, Sean P.
2010-01-01
Value-added measures of teacher effectiveness are the centerpiece of a national movement to evaluate, promote, compensate, and dismiss teachers based in part on their students' test results. Federal, state, and local policy-makers have adopted these methods en masse in recent years in an attempt to objectively quantify teaching effectiveness and…
Dollar$ & $en$e. Part V: What is your added value?
Wilkinson, I
2001-01-01
In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts--the information world equivalent of genes. The goal of this series of articles is to infect you with memes, so that you will assimilate, translate, and express them. No matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. In the previous papers in this series, I showed that when we convert raw data into wisdom we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge that can be applied to create the ultimate product in the value chain: wisdom. In Part II of this series, I introduced a set of memes for measuring the cost of adding value (2). In Part III of this series, I presented a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In Part IV of this series, I discussed practical knowledge management tools for measuring the value of people, structural, and customer capital (4). In Part V of this series, I will apply intellectual capital and knowledge management concepts at the individual level, to help answer a fundamental question: What is my added value?
NASA Astrophysics Data System (ADS)
Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri
2018-03-01
Visualizing big time series data has become highly significant. In this paper we discuss a new analysis method called "statistical shape analysis" or "geometry driven statistics" for time series statistical data in economics. We analyse the changes in agriculture value added and industry value added (as percentages of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image and examine the deformation using principal components. The key point of the method is that the principal components of a given landmark configuration are the eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-affine transformations, which give us information about the local differences between 2000 and 2010. Because a non-affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology but, in economics, no application can be found. In this paper we investigate its potential to analyse economic data.
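For readers unfamiliar with the bending-energy machinery, the following is a minimal numpy sketch of the decomposition the abstract refers to, assuming standard thin-plate-spline shape analysis and hypothetical landmark data (the paper's economic landmarks are not reproduced here). The eigenvectors of the bending energy matrix give the principal warps; the three near-zero eigenvalues correspond to the affine (non-bending) part of the deformation.

```python
# Sketch of the thin-plate-spline bending-energy decomposition used in
# statistical shape analysis; landmark data are hypothetical.
import numpy as np

def bending_energy_matrix(landmarks):
    """Bending energy matrix for n 2-D landmarks (Bookstein's construction)."""
    n = len(landmarks)
    d = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d**2), 0.0)   # U(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), landmarks])          # affine part
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    return np.linalg.inv(L)[:n, :n]                      # upper-left n x n block

ref = np.random.default_rng(0).normal(size=(10, 2))      # hypothetical landmarks
eigvals, eigvecs = np.linalg.eigh(bending_energy_matrix(ref))
# eigenvectors with nonzero eigenvalues = principal warps
print("smallest eigenvalues (~0, affine part):", np.round(eigvals[:3], 6))
```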
Heart rate time series characteristics for early detection of infections in critically ill patients.
Tambuyzer, T; Guiza, F; Boonen, E; Meersseman, P; Vervenne, H; Hansen, T K; Bjerre, M; Van den Berghe, G; Berckmans, D; Aerts, J M; Meyfroidt, G
2017-04-01
It is difficult to make a distinction between inflammation and infection, and new strategies are therefore required to allow accurate detection of infection. Here, we hypothesize that we can distinguish infected from non-infected ICU patients based on dynamic features of serum cytokine concentrations and heart rate time series. Serum cytokine profiles and heart rate time series of 39 patients were available for this study. The serum concentrations of ten cytokines were measured in blood sampled every 10 min between 2100 and 0600 hours. Heart rate was recorded every minute. Ten metrics were used to extract features from these time series to obtain an accurate classification of infected patients. The predictive power of the metrics derived from the heart rate time series was investigated using decision tree analysis. Finally, logistic regression methods were used to examine whether classification performance improved with the inclusion of features derived from the cytokine time series. The AUC of a decision tree based on two heart rate features was 0.88. The model had good calibration (Hosmer-Lemeshow p value of 0.09). There was no significant additional value of adding static cytokine levels or cytokine time series information to the generated decision tree model. The results suggest that heart rate is a better marker for infection than the information captured by cytokine time series when the exact stage of infection is not known. The predictive value of (expensive) biomarkers should always be weighed against routinely monitored data, and such biomarkers have to demonstrate added value.
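A minimal sketch of the modelling step described above, with hypothetical heart-rate features and labels (the study's actual features are not given in the abstract); it only illustrates how a two-feature decision tree is fitted and scored by AUC.

```python
# Illustrative only: infected vs. non-infected classification from two
# hypothetical heart-rate features with a shallow decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# hypothetical features, e.g. reduced HR variability and entropy in infection
hr_std = np.r_[rng.normal(4, 1, 20), rng.normal(7, 1, 19)]
hr_entropy = np.r_[rng.normal(0.8, 0.2, 20), rng.normal(1.3, 0.2, 19)]
X = np.c_[hr_std, hr_entropy]
y = np.r_[np.ones(20), np.zeros(19)]          # 1 = infected, 39 patients total

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"in-sample AUC: {auc:.2f}")             # abstract reports AUC = 0.88
```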
Dollar$ & $en$e. Part VI: Knowledge management: the state of the art.
Wilkinson, I
2001-01-01
In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts--the information world equivalent of genes. The goal of this series of articles is to infect you with memes so you will assimilate, translate, and express them. No matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. In the previous articles in this series, I showed that when we convert raw data into wisdom, we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge that then can be applied to create the ultimate product in the value chain--wisdom. In part II of this series, I introduced a set of memes for measuring the cost of adding value (2). In part III of this series, I presented a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In part IV of this series, I discussed practical knowledge management tools for measuring the value of people, structural, and customer capital (4). In part V of this series, I applied intellectual capital and knowledge management concepts at the individual level, to help answer a fundamental question: what is my added value (5)? In the final part of this series, I will review the state of intellectual capital and knowledge management development to date and outline the direction of current knowledge management initiatives and research projects.
Assessing the relationship between ad volume and awareness of a tobacco education media campaign
Modayil, Mary V; Stevens, Colleen
2010-01-01
Background: The relation between aided ad recall and the level of television ad placement in a public health setting is not well established. We examine this association by looking back at 8 years of the California Tobacco Control Program's (CTCP) media campaign. Methods: Starting in July 2001, California's campaign was continuously monitored using five series of telephone surveys and six series of web-based surveys, each conducted immediately following a media flight. We used population-based statewide surveys to measure aided recall for advertisements that were placed in each of these media flights. Targeted rating points (TRPs) were used to measure ad placement intensity throughout the state. Results: Cumulative TRPs exhibited a stronger relation with aided ad recall than flight TRPs or TRP density. This association increased after log-transforming cumulative TRP values. We found that a one-unit increase in log-cumulative TRPs led to a 13.6% increase in aided ad recall using web-based survey data, compared to a 5.3% increase using telephone survey data. Conclusions: In California, the relation between aided ad recall and cumulative TRPs showed diminishing returns after a large volume of ad placements. These findings may be useful in planning future ad placements for CTCP's media campaign. PMID:20382649
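The reported log-linear relation can be illustrated with a short sketch; the TRP and recall values below are hypothetical, and the fit simply shows how a one-unit increase in log-cumulative TRPs maps to a fixed gain in recall, i.e. diminishing returns in raw TRPs.

```python
# Hypothetical data: aided recall (%) regressed on log cumulative TRPs.
import numpy as np

cum_trp = np.array([500, 1500, 4000, 9000, 20000], dtype=float)
recall = np.array([22, 37, 51, 62, 73], dtype=float)

slope, intercept = np.polyfit(np.log(cum_trp), recall, 1)
print(f"recall ~ {intercept:.1f} + {slope:.1f} * log(cumulative TRPs)")
# Equal recall gains require multiplicative, not additive, increases in
# ad volume -- the "diminishing returns" pattern the abstract describes.
```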
Dispersion entropy for the analysis of resting-state MEG regularity in Alzheimer's disease.
Azami, Hamed; Rostaghi, Mostafa; Fernandez, Alberto; Escudero, Javier
2016-08-01
Alzheimer's disease (AD) is a progressive degenerative brain disorder affecting memory, thinking, behaviour and emotion. It is the most common form of dementia and a major social problem in western societies. The analysis of brain activity may help to diagnose this disease. Changes measured by entropy methods have been reported useful in research studies to characterize AD. We have recently proposed dispersion entropy (DisEn) as a very fast and powerful tool to quantify the irregularity of time series. The aim of this paper is to evaluate the ability of DisEn, in comparison with fuzzy entropy (FuzEn), sample entropy (SampEn), and permutation entropy (PerEn), to discriminate 36 AD patients from 26 elderly control subjects using resting-state magnetoencephalogram (MEG) signals. The results obtained by DisEn, FuzEn, and SampEn, unlike PerEn, show that the AD patients' signals are more regular than the controls' time series. The p-values obtained by the DisEn-, FuzEn-, SampEn-, and PerEn-based methods demonstrate the superiority of DisEn over FuzEn, SampEn, and PerEn. Moreover, the computation time for the newly proposed DisEn-based method is noticeably less than for the FuzEn, SampEn, and PerEn based approaches.
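As a rough illustration of the measure itself, here is a compact implementation of dispersion entropy following the published recipe (normal-CDF mapping into c classes, patterns of length m); the parameter values are illustrative, not those used in the study.

```python
# Compact dispersion entropy (DisEn) sketch; parameters are illustrative.
import numpy as np
from scipy.stats import norm

def dispersion_entropy(x, c=6, m=2, delay=1):
    x = np.asarray(x, dtype=float)
    # 1) map the series to c discrete classes via the normal CDF
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 2) collect dispersion patterns of length m
    n = len(z) - (m - 1) * delay
    patterns = np.stack([z[i * delay:i * delay + n] for i in range(m)], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    # 3) Shannon entropy of the pattern distribution, normalized by ln(c^m)
    return -(p * np.log(p)).sum() / np.log(c**m)

rng = np.random.default_rng(0)
print("white noise   :", round(dispersion_entropy(rng.normal(size=2000)), 3))
print("sine (regular):", round(dispersion_entropy(np.sin(np.linspace(0, 60, 2000))), 3))
```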
Dollar$ & $en$e. Part IV: Measuring the value of people, structural, and customer capital.
Wilkinson, I
2001-01-01
In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts, the information world equivalent of genes. The goal of this series of articles is to infect you with my memes, so that you will assimilate, translate, and express them. We discovered that no matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. We saw that when we convert raw data into wisdom we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge which can then be applied to create the ultimate product in the value chain: wisdom. In Part II of this series, I infected you with a set of memes for measuring the cost of adding value (2). In Part III of this series, I infected you with a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In Part IV of this series, I will infect you with memes for measuring the value of people, structural, and customer capital.
ERIC Educational Resources Information Center
Tymms, Peter
This is the fourth in a series of technical reports that have dealt with issues surrounding the possibility of national value-added systems for primary schools in England. The main focus has been on the relative progress made by students between the ends of Key Stage 1 (KS1) and Key Stage 2 (KS2). The analysis has indicated that the strength of…
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple missing values. However, most analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. The accurate estimation of missing values in such datasets has therefore been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These were initially prototyped in Perl, and their accuracy was evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms the neighborhood-wise and position-wise algorithms, in particular for datasets with a higher level of missing entries. The performance of the two-pass DTWimpute algorithm was further benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former proved superior to the latter. Motivated by these findings, which clearly indicate the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides a choice between three different initial rough imputation methods.
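The core idea, DTW similarity driving neighbour-based imputation, can be sketched briefly; this is a simplified position-wise-style variant on hypothetical data, not the published two-pass algorithm or its C++ implementation.

```python
# Simplified DTW-based imputation sketch; data are hypothetical profiles.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def impute_missing(profile, candidates, k=3):
    """Fill NaNs in `profile` from the k DTW-nearest complete profiles."""
    obs = ~np.isnan(profile)
    dists = [dtw_distance(profile[obs], c[obs]) for c in candidates]
    nearest = np.argsort(dists)[:k]
    filled = profile.copy()
    filled[~obs] = np.mean([candidates[i][~obs] for i in nearest], axis=0)
    return filled

rng = np.random.default_rng(2)
complete = rng.normal(size=(20, 12)).cumsum(axis=1)   # hypothetical profiles
target = complete[0].copy()
target[[3, 7]] = np.nan                               # knock out two values
print(impute_missing(target, complete[1:]))
```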
ERIC Educational Resources Information Center
Haertel, Edward H.
2013-01-01
Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive…
NASA Technical Reports Server (NTRS)
Cornford, S.; Gibbel, M.
1997-01-01
NASA's Code QT Test Effectiveness Program is funding a series of applied research activities focused on utilizing the principles of the physics and engineering of failure, and those of engineering economics, to assess and improve the value added to organizations by the various validation and verification activities.
ERIC Educational Resources Information Center
Thomas, Gregg; Douglass, John Aubrey
2009-01-01
Throughout the world, interest in gauging learning outcomes at all levels of education has grown considerably over the past decade. In higher education, measuring "learning outcomes" is viewed by many stakeholders as a relatively new method to judge the "value added" of colleges and universities. The potential to accurately…
NASA Astrophysics Data System (ADS)
Butler, P. G.; Scourse, J. D.; Richardson, C. A.; Wanamaker, A. D., Jr.
2009-04-01
Determinations of the local correction (ΔR) to the globally averaged marine radiocarbon reservoir age are often isolated in space and time, derived from heterogeneous sources and constrained by significant uncertainties. Although time series of ΔR at single sites can be obtained from sediment cores, these are subject to multiple uncertainties related to sedimentation rates, bioturbation and interspecific variations in the source of radiocarbon in the analysed samples. Coral records provide better resolution, but these are available only for tropical locations. It is shown here that it is possible to use the shell of the long-lived bivalve mollusc Arctica islandica as a source of high-resolution time series of absolutely dated marine radiocarbon determinations for the shelf seas surrounding the North Atlantic Ocean. Annual growth increments in the shell can be crossdated and chronologies can be constructed in precise analogy with the use of tree-rings. Because the calendar dates of the samples are known, ΔR can be determined with high precision and accuracy, and because all the samples are from the same species, the time series of ΔR values possesses a high degree of internal consistency. Presented here is a multi-centennial (AD 1593 - AD 1933) time series of 31 ΔR values for a site in the Irish Sea close to the Isle of Man. The mean value of ΔR (-62 ¹⁴C yrs) does not change significantly during this period, but increased variability is apparent before AD 1750.
Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2016-07-01
The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
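One concrete instance of the weighted-to-binary pipeline the abstract enumerates, assuming Pearson correlation as the dependency estimator and MST binarization, with hypothetical multichannel data standing in for EEG:

```python
# Correlation-weighted connectivity -> Minimum Spanning Tree backbone.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
signals = rng.normal(size=(19, 1000))          # hypothetical 19-channel recording

W = np.abs(np.corrcoef(signals))               # weighted connectivity matrix
np.fill_diagonal(W, 0.0)
dist = 1.0 - W                                  # strong link -> short edge
mst = minimum_spanning_tree(dist).toarray()
A = ((mst + mst.T) > 0).astype(int)             # binary adjacency, n-1 edges
print("edges kept by MST:", A.sum() // 2)
```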
A Value-Added Study of Teacher Spillover Effects across Four Core Subjects in Middle Schools
ERIC Educational Resources Information Center
Yuan, Kun
2015-01-01
This study examined the existence, magnitude, and impact of teacher spillover effects (TSEs) across teachers of four subject areas (i.e., mathematics, English language arts [ELA], science, and social studies) on student achievement in each of the four subjects at the middle school level. The author conducted a series of value-added (VA) analyses,…
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
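A rough sketch of the modified Blackman-Tukey idea, assuming an average sampling interval for binning lagged products and omitting the paper's windowing and bias-correction details:

```python
# Mean lagged products on an average sampling interval, then a DFT.
import numpy as np

def bt_spectrum(t, x, max_lag_frac=0.5):
    x = x - x.mean()
    dt = np.mean(np.diff(t))                      # average sampling interval
    n_lags = int(max_lag_frac * len(x))           # only ~50% of max lags needed
    i, j = np.triu_indices(len(t))
    k = np.rint((t[j] - t[i]) / dt).astype(int)   # nearest integer lag
    sums, counts = np.zeros(n_lags), np.zeros(n_lags)
    ok = k < n_lags
    np.add.at(sums, k[ok], x[i[ok]] * x[j[ok]])
    np.add.at(counts, k[ok], 1)
    r = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)  # mean lagged products
    ext = np.r_[r, r[-1:0:-1]]                    # even extension for the DFT
    return np.fft.rfftfreq(len(ext), d=dt), np.abs(np.fft.rfft(ext))

rng = np.random.default_rng(4)
t = np.sort(rng.choice(np.arange(0, 200.0, 0.5), size=300, replace=False))
x = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.normal(size=t.size)
f, p = bt_spectrum(t, x)
print(f"dominant frequency: {f[1:][np.argmax(p[1:])]:.3f} (true 0.1)")
```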
Sub- and Quasi-Centurial Cycles in Solar and Geomagnetic Activity Data Series
NASA Astrophysics Data System (ADS)
Komitov, B.; Sello, S.; Duchlev, P.; Dechev, M.; Penev, K.; Koleva, K.
2016-07-01
The subject of this paper is the existence and stability of solar cycles with durations in the range of 20-250 years. Five types of data series are used: 1) the Zurich series (1749-2009 AD), the mean annual International sunspot number Ri; 2) the Group sunspot number series Rh (1610-1995 AD); 3) the simulated extended sunspot number from Extended time series of Solar Activity Indices (ESAI) (1090-2002 AD); 4) the simulated extended geomagnetic aa-index from ESAI (1099-2002 AD); 5) the Meudon filament series (1919-1991 AD). Two principally independent methods of time series analysis are used: T-R periodogram analysis (both in standard and "scanning window" regimes) and wavelet analysis. The obtained results are very similar. A strong cycle with a mean duration of 55-60 years is found to exist in all series. On the other hand, strong and stable quasi-110-120-year and ~200-year cycles are obtained in all of these series except the Ri one. The high importance of long-term solar activity dynamics for the aims of solar dynamo modeling and prediction is especially noted.
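As an illustration of such a scan, the sketch below uses scipy's Lomb-Scargle periodogram as a stand-in for the paper's T-R periodogram, on a synthetic annual series with a built-in ~57-year cycle:

```python
# Period scan in the 20-250 yr band; data are synthetic, not the sunspot series.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)
years = np.arange(1749, 2010, dtype=float)          # Zurich-like annual sampling
x = (np.sin(2 * np.pi * years / 57.0)               # built-in ~57-yr cycle
     + 0.5 * np.sin(2 * np.pi * years / 115.0)      # and a quasi-centurial one
     + 0.3 * rng.normal(size=years.size))

periods = np.linspace(20, 250, 500)
power = lombscargle(years, x - x.mean(), 2 * np.pi / periods, normalize=True)
print(f"strongest period: {periods[np.argmax(power)]:.1f} years")
```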
Reviving Graduate Seminar Series through Non-Technical Presentations
ERIC Educational Resources Information Center
Madihally, Sundararajan V.
2011-01-01
Most chemical engineering programs that offer M.S. and Ph.D. degrees have a common seminar series for all the graduate students. Many would agree that seminars lack student interest, leading to ineffectiveness. We questioned the possibility of adding value to the seminar series by incorporating non-technical topics that may be more important to…
NASA Astrophysics Data System (ADS)
Wang, Jiang; Yang, Chen; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing
2016-10-01
In this paper, EEG series are used to construct functional connections from the correlation between different regions, in order to investigate the nonlinear characteristics and cognitive function of the brain with Alzheimer's disease (AD). First, the limited penetrable visibility graph (LPVG) and a phase-space method map single EEG series into networks, and the underlying chaotic dynamics of the AD brain are investigated. Topological properties of the networks are extracted, such as average path length and clustering coefficient. It is found that the network topology of AD differs from that of the control group in several local brain regions, although no statistically significant difference exists over the whole brain. Furthermore, in order to detect the abnormality of the AD brain as a whole, functional connections among different brain regions are reconstructed based on the similarity of the clustering coefficient sequence (CCSS) of EEG series in the four frequency bands (delta, theta, alpha, and beta), which exhibit obvious small-world properties. Graph analysis demonstrates that for both methodologies the functional connections between regions of the AD brain decrease, particularly in the alpha frequency band. AD causes the graph index complexity of the functional network to decrease, the small-world properties to weaken, and the vulnerability to increase. The obtained results show that brain functional networks constructed by LPVG and the phase-space method might be more effective in distinguishing AD from normal controls than the analysis of single series, which is helpful for revealing the underlying pathological mechanism of the disease.
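For concreteness, here is a compact sketch of the LPVG construction on a hypothetical series; setting L = 0 recovers the ordinary natural visibility graph.

```python
# Limited penetrable visibility graph: link two samples if at most L
# intermediate samples break the natural-visibility criterion.
import numpy as np

def lpvg_edges(y, L=1):
    n = len(y)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            # height of the sight line from (i, y[i]) to (j, y[j]) at each k
            line = y[i] + (y[j] - y[i]) * (ks - i) / (j - i)
            if np.sum(y[ks] >= line) <= L:
                edges.append((i, j))
    return edges

rng = np.random.default_rng(6)
series = rng.normal(size=200).cumsum()       # stand-in for one EEG channel
print("LPVG edges:", len(lpvg_edges(series, L=1)))
```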
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
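The forecasting-plus-alerting idea can be sketched compactly; the additive Holt-Winters recursion below uses illustrative smoothing constants and a simple 3-sigma residual threshold, not the paper's tuned parameterization.

```python
# One-step-ahead additive Holt-Winters forecasts for a daily syndromic
# series, with an alert when the residual exceeds 3 standard deviations.
import numpy as np

def hw_one_step(y, period=7, alpha=0.4, beta=0.0, gamma=0.15):
    """Error-correction form of additive Holt-Winters; returns 1-step forecasts."""
    level, trend = y[:period].mean(), 0.0
    season = y[:period] - level
    fc = np.full(len(y), np.nan)
    for t in range(period, len(y)):
        fc[t] = level + trend + season[t % period]
        err = y[t] - fc[t]
        level += trend + alpha * err
        trend += beta * alpha * err
        season[t % period] += gamma * err
    return fc

rng = np.random.default_rng(7)
days = np.arange(365)
counts = rng.poisson(20 + 5 * (days % 7 < 5))          # weekday/weekend pattern
counts[300:305] += np.array([5, 10, 15, 12, 8])        # injected outbreak signal

fc = hw_one_step(counts.astype(float))
resid = counts - fc
sigma = np.nanstd(resid[:300])                          # baseline residual spread
print("alert days:", np.where(resid > 3 * sigma)[0])
```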
The Reliability, Impact, and Cost-Effectiveness of Value-Added Teacher Assessment Methods
ERIC Educational Resources Information Center
Yeh, Stuart S.
2012-01-01
This article reviews evidence regarding the intertemporal reliability of teacher rankings based on value-added methods. Value-added methods exhibit low reliability, yet are broadly supported by prominent educational researchers and are increasingly being used to evaluate and fire teachers. The article then presents a cost-effectiveness analysis…
Consumer preferences and willingness to pay for value-added chicken product attributes.
Martínez Michel, Lorelei; Anders, Sven; Wismer, Wendy V
2011-10-01
A growing demand for convenient and ready-to-eat products has increased poultry processors' interest in developing consumer-oriented value-added chicken products. In this study, a conjoint analysis survey of 276 chicken consumers in Edmonton was conducted during the summer of 2009 to assess the importance of the chicken part, production method, processing method, storage method, the presence of added flavor, and cooking method on consumer preferences for different value-added chicken product attributes. Estimates of consumer willingness to pay (WTP) premium prices for different combinations of value-added chicken attributes were also determined. Participants' "ideal" chicken product was a refrigerated product made with free-range chicken breast, produced with no additives or preservatives and no added flavor, which could be oven heated or pan heated. Half of all participants on average were willing to pay 30% more for a value-added chicken product over the price of a conventional product. Overall, young consumers, individuals who shop at Farmers' Markets and those who prefer free-range or organic products were more likely to pay a premium for value-added chicken products. As expected, consumers' WTP was affected negatively by product price. Combined knowledge of consumer product attribute preferences and consumer WTP for value-added chicken products can help the poultry industry design innovative value-added chicken products. Practical Application: An optimum combination of product attributes desired by consumers for the development of a new value-added chicken product, as well as the WTP for this product, have been identified in this study. This information is relevant to the poultry industry to enhance consumer satisfaction of future value-added chicken products and provide the tools for future profit growth.
Mehdizadeh, Sina; Sanjari, Mohammad Ali
2017-11-07
This study aimed to determine the effect of added noise, filtering and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR=55-25dB with 5dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3Hz to 19Hz with 2Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percentage error in LyE in the presence of noise. These observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, findings indicated that at least 50 walking strides are required when calculating LyE to account for the effect of noise. Finally, the observations support that a conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE.
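A compact sketch of Rosenstein's algorithm as described: delay embedding (dimension 3, delay 100 samples, per the abstract), nearest neighbours with a temporal exclusion window, and the slope of the mean log-divergence curve. The test signal and remaining parameters are illustrative.

```python
# Rosenstein-style largest-Lyapunov-exponent estimate (illustrative sketch).
import numpy as np

def rosenstein_lye(x, dim=3, delay=100, window=150, n_steps=50, dt=1.0):
    n = len(x) - (dim - 1) * delay
    Y = np.stack([x[i * delay:i * delay + n] for i in range(dim)], axis=1)
    m = n - n_steps                                 # room to follow pairs forward
    nn = np.empty(m, dtype=int)
    for i in range(m):                              # nearest neighbour search
        d = np.linalg.norm(Y[:m] - Y[i], axis=1)
        d[max(0, i - window):i + window] = np.inf   # temporal (Theiler) exclusion
        nn[i] = np.argmin(d)
    idx = np.arange(m)
    div = np.empty(n_steps)
    for k in range(n_steps):                        # mean log-divergence curve
        d = np.linalg.norm(Y[idx + k] - Y[nn + k], axis=1)
        div[k] = np.log(d[d > 0]).mean()
    return np.polyfit(np.arange(n_steps) * dt, div, 1)[0]   # slope ~ LyE

t = np.arange(3000)
x = np.sin(2 * np.pi * t / 500) + 0.01 * np.random.default_rng(8).normal(size=t.size)
print(f"LyE estimate (periodic + mild noise): {rosenstein_lye(x):.5f}")
```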
Research on the co-movement between high-end talent and economic growth: A complex network approach
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Wang, Minggang; Xu, Hua; Zhang, Wenbin; Tian, Lixin
2018-02-01
The major goal of this paper is to examine the co-movement between high-end talent and economic growth using a complex network approach. Firstly, the national high-end talent development efficiency from 1990 to 2015 is taken as the quantitative index measuring the development of high-end talent. The added values of the primary, secondary, and tertiary industries are selected as economic growth indexes, and all the selected sample data are standardized by the mean value processing method. Secondly, with seven months as the length of the sliding window and one month as the sliding step, the grey correlation degrees between the systems are measured using slope correlation degrees, and the grey correlation degree sequence is mapped into a symbol series composed of three symbols {Y, O, N} by the coarse-graining method. Taking three consecutive symbols as a mode, nodes are obtained from the modes in time order. Taking transitions between modes as edges, weighted by the number of times each transition occurs, the co-movement networks between national high-end talent development efficiency and the added values of the primary, secondary, and tertiary industries are built respectively. Finally, the dynamic characteristics of the networks are analysed through node strength, strength distribution, weighted clustering coefficient, conversion cycle of the modes, and the transitions between co-movement modes. The results indicate mutual influence and promotion between the national high-end talent development efficiency and the added values of the primary, secondary, and tertiary industries.
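The coarse-graining and network-building steps can be sketched as follows, with hypothetical series and thresholds (the paper's grey-correlation step is replaced here by a plain sliding-window correlation for brevity):

```python
# Sliding-window co-movement -> {Y, O, N} symbols -> mode-transition network.
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)
talent = rng.normal(size=300).cumsum()                 # stand-in index series
industry = talent + rng.normal(scale=3.0, size=300)    # co-moving series

win, step = 7, 1                                        # window 7, step 1
corr = np.array([np.corrcoef(talent[i:i + win], industry[i:i + win])[0, 1]
                 for i in range(0, len(talent) - win, step)])

symbolize = lambda c: "Y" if c > 0.3 else ("N" if c < -0.3 else "O")
symbols = "".join(symbolize(c) for c in corr)
modes = [symbols[i:i + 3] for i in range(len(symbols) - 2)]   # 3 symbols = node
edges = Counter(zip(modes[:-1], modes[1:]))                   # transition = edge
for (a, b), w in edges.most_common(5):                        # weight = count
    print(f"{a} -> {b}: weight {w}")
```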
Reflections on the added value of using mixed methods in the SCAPE study.
Murphy, Kathy; Casey, Dympna; Devane, Declan; Meskell, Pauline; Higgins, Agnes; Elliot, Naomi; Lalor, Joan; Begley, Cecily
2014-03-01
To reflect on the added value that a mixed method design gave in a large national evaluation study of specialist and advanced practice (SCAPE), and to propose a reporting guide that could help make explicit the added value of mixed methods in other studies. Recently, researchers have focused on how to carry out mixed methods research (MMR) rigorously. The value-added claims for MMR include the capacity to exploit the strengths of, and compensate for the weaknesses inherent in, single designs; to generate comprehensive descriptions of phenomena; to produce more convincing results for funders or policy-makers; and to build methodological expertise. Data illustrating these value-added claims were drawn from the SCAPE study. Studies about the purpose of mixed methods were identified from a search of the literature. The authors explain why and how they undertook the components of the study, and propose a guideline to facilitate such studies. If MMR is to become the third methodological paradigm, then articulation of what extra benefit MMR adds to a study is essential. The authors conclude that MMR has added value and found the guideline useful as a way of making value claims explicit. The clear articulation of the procedural aspects of mixed-methods research, and the identification of a guideline to facilitate such research, will enable researchers to learn more effectively from each other.
Prediction possibilities of Arosa total ozone
NASA Astrophysics Data System (ADS)
Kane, R. P.
1987-01-01
Using the periodicities obtained by a Maximum Entropy Spectral Analysis (MESA) of the Arosa total ozone data (CC') series for 1932-1971, the values predicted for 1972 onwards were compared with the observed values of the (AD) series. A change of level was noticed, with the observed (AD) values lower by about 7 D.U. Also, the matching was poor in 1980, 1981, and 1982. In the monthly values, the most prominent periodicity was the annual wave, comprising some 80% of the variance. In the 12-month running averages, the annual wave was eliminated and the most prominent periodicity was T=3.7 years, encompassing roughly 20% of the variance. This and other periodicities at T=4.7, 5.4, 6.2, 10 and 16 years were all statistically significant at a 3.5σ a priori (i.e., 2σ a posteriori) level. However, the predictions from these were unsatisfactory, probably because some of these periodicities may be transient, i.e., changing amplitude and/or phase with time. Thus, no meaningful prediction seems possible for Arosa total ozone.
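Prediction from detected periodicities amounts to fitting fixed-period sinusoids and extrapolating; the sketch below does this by linear least squares on synthetic data using the reported periods. If amplitudes or phases drift, such fixed-phase extrapolation degrades quickly, consistent with the paper's conclusion.

```python
# Harmonic least-squares fit and extrapolation; data are synthetic.
import numpy as np

periods = np.array([3.7, 4.7, 5.4, 6.2, 10.0, 16.0])   # reported periodicities (yr)
t_train = np.arange(1932, 1972, 1 / 12)                 # monthly, 1932-1971
rng = np.random.default_rng(10)
y = 330 + 5 * np.sin(2 * np.pi * t_train / 3.7) + 2 * rng.normal(size=t_train.size)

def design(t):
    cols = [np.ones_like(t)]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t_train), y, rcond=None)
t_pred = np.arange(1972, 1983, 1 / 12)
forecast = design(t_pred) @ coef
print("predicted 1972-1982 mean:", round(forecast.mean(), 1), "D.U.")
```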
A New Interpretation of Augmented Subscores and Their Added Value in Terms of Parallel Forms
ERIC Educational Resources Information Center
Sinharay, Sandip
2018-01-01
The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. The method is based on the classical test theory and deems a subscore to be of added value if the subscore predicts the corresponding true subscore better than does the total score. Sinharay provided an interpretation of the added…
Evaluating Special Educator Effectiveness: Addressing Issues Inherent to Value-Added Modeling
ERIC Educational Resources Information Center
Steinbrecher, Trisha D.; Selig, James P.; Cosbey, Joanna; Thorstensen, Beata I.
2014-01-01
States are increasingly using value-added approaches to evaluate teacher effectiveness. There is much debate regarding whether these methods should be employed and, if employed, what role such methods should play in comprehensive teacher evaluation systems. In this article, we consider the use of value-added modeling (VAM) to evaluate special…
Self tuning system for industrial surveillance
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2000-01-01
A method and system for automatically establishing the operational parameters of a statistical surveillance system. The method and system perform a frequency-domain transformation on time-dependent data, a first Fourier composite is formed, serial correlation is removed, a series of Gaussian whiteness tests is performed along with an autocorrelation test, Fourier coefficients are stored, and a second Fourier composite is formed. Pseudorandom noise is added, a Monte Carlo simulation is performed to establish SPRT missed-alarm probabilities, and the result is tested with a synthesized signal. A false-alarm probability is then empirically evaluated and, if less than a desired target value, the SPRT probabilities are used for performing surveillance.
NASA Astrophysics Data System (ADS)
Wetter, Oliver; Pfister, Christian
2010-05-01
Beginning of grain harvest in the tri-border region of Basel as a proxy for mean April-July temperatures: creation of a long Swiss series, c. 1454 AD - 1950 AD. Before agricultural harvesting machines replaced manual labour, the date of the grain harvest was largely dependent on mean temperatures from spring to early summer. It thus constitutes a very valuable source of information for reconstructing these temperatures: the later the harvest began, the cooler spring and early summer must have been, and vice versa. For this reconstruction a new data series of grain harvest dates in the tri-border region of Basel (representative for north-west Switzerland, Alsace (France) and south-west Germany) was used as a temperature proxy. The harvest dates have been extracted from the account books of the hospital of Basel, which cover the period from c. 1454 AD to 1705 AD. This series could be completed with several series of grain tithe dates originating from the Swiss Midland, covering the period between 1557 and 1825, and several grain harvest date series covering the time between 1825 and 1950. Thus a series of almost 500 years could be compiled. Since the method of harvesting remained unchanged until the 1950s, when manual labour was replaced by machines, the harvest dates of the modern series, which overlap the instrumental temperature record, could be used for calibrating the medieval dates.
Circuit analysis method for thin-film solar cell modules
NASA Technical Reports Server (NTRS)
Burger, D. R.
1985-01-01
The design of a thin-film solar cell module is dependent on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves using cell test values or assumed defective cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
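The I-V addition step can be sketched as follows, assuming a crude diode-like cell model with hypothetical parameters; cells in series add voltage at a common current (parallel strings would add current at a common voltage by the same interpolation trick).

```python
# Series I-V combination with a crude diode-like cell model; all
# parameters are hypothetical, not the paper's cell test values.
import numpy as np

def cell_voltage(i, isc=1.0, voc=0.6, rsh=np.inf):
    """Voltage of one cell at current i; a finite rsh models a pinhole shunt."""
    v = np.linspace(0.0, voc, 500)
    i_of_v = isc - isc * np.exp((v - voc) / 0.05) - v / rsh   # decreasing in v
    return np.interp(i, i_of_v[::-1], v[::-1])                # invert to V(I)

currents = np.linspace(0.0, 0.95, 200)
v_good = cell_voltage(currents)                 # one good cell
v_shunted = cell_voltage(currents, rsh=2.0)     # one cell with a shunt defect
v_string = 9 * v_good + v_shunted               # series: voltages add at equal I
power = currents * v_string
print(f"10-cell string max power: {power.max():.3f} W")
# Parallel strings are combined the other way round: interpolate each
# string's I(V) onto a common voltage grid and sum the currents.
```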
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed method research designs (MM) combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM in order to spread this approach within the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can be an added value for improving clinical practice since, through the integration of qualitative and quantitative methods, researchers can better assess complex phenomena typical of nursing.
Wang, Deyun; Liu, Yanling; Luo, Hongyuan; Yue, Chenqiang; Cheng, Sheng
2017-01-01
Accurate PM2.5 concentration forecasting is crucial for protecting public health and the atmospheric environment. However, the intermittent and unstable nature of PM2.5 concentration series makes forecasting a very difficult task. In order to improve the forecast accuracy of PM2.5 concentration, this paper proposes a hybrid model based on wavelet transform (WT), variational mode decomposition (VMD) and a back propagation (BP) neural network optimized by the differential evolution (DE) algorithm. Firstly, WT is employed to decompose the PM2.5 concentration series into a number of subsets with different frequencies. Secondly, VMD is applied to decompose each subset into a set of variational modes (VMs). Thirdly, the DE-BP model is utilized to forecast all the VMs. Fourthly, the forecast value of each subset is obtained by aggregating the forecast results of all the VMs obtained from the VMD decomposition of that subset. Finally, the final forecast series of PM2.5 concentration is obtained by adding up the forecast values of all subsets. Two PM2.5 concentration series, collected from Wuhan and Tianjin, respectively, located in China, are used to test the effectiveness of the proposed model. The results demonstrate that the proposed model outperforms all the other models considered in this paper. PMID:28704955
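A heavily simplified sketch of the pipeline shape: decompose, forecast each component, and aggregate. pywt supplies the wavelet step, and a small MLP stands in for the paper's VMD stage and DE-optimized BP network, so this shows only the architecture, not the published model.

```python
# Decompose-forecast-aggregate sketch on synthetic PM2.5-like data.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
t = np.arange(600)
pm25 = 60 + 25 * np.sin(2 * np.pi * t / 90) + 10 * rng.normal(size=t.size)

coeffs = pywt.wavedec(pm25, "db4", level=3)
# reconstruct one component per coefficient band (their sum ~ original)
components = [pywt.waverec([c if i == j else np.zeros_like(c)
                            for j, c in enumerate(coeffs)], "db4")[:len(pm25)]
              for i in range(len(coeffs))]

def forecast_next(series, lags=12):
    """Fit a small MLP on lagged windows and predict one step ahead."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)
    return model.predict(series[-lags:].reshape(1, -1))[0]

prediction = sum(forecast_next(c) for c in components)   # aggregate components
print("one-step PM2.5 forecast:", round(prediction, 1))
```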
ERIC Educational Resources Information Center
Imberman, Scott; Lovenheim, Michael F.
2015-01-01
Value-added data have become an increasingly common evaluation tool for schools and teachers. Many school districts have begun to adopt these methods and have released results publicly. In this paper, we use the unique public release of value-added data in Los Angeles to identify how this measure of school quality is capitalized into housing…
Russian State Time and Earth Rotation Service: Observations, Eop Series, Prediction
NASA Astrophysics Data System (ADS)
Kaufman, M.; Pasynok, S.
2010-01-01
The Russian State Time, Frequency and Earth Rotation Service provides the official EOP data and time for use in scientific, technical and metrological work in Russia. Observations of GLONASS and GPS at 30 stations in Russia, as well as Russian and worldwide VLBI (35 stations) and SLR (20 stations) observation data, are used at present. To these three series of EOP are added the data calculated at two other Russian analysis centers: IAA (VLBI, GPS and SLR series) and MCC (SLR). Joint processing of these 7 series is carried out every day (operational EOP data for the last day and predicted values for 50 days). The EOP values are refined weekly and the systematic errors of each individual series are corrected. The combined results become accessible on the VNIIFTRI server (ftp.imvp.ru) at approximately 6h UT daily.
Loops in AdS from conformal field theory
Aharony, Ofer; Alday, Luis F.; Bissi, Agnese; Perlmutter, Eric
2017-07-10
We propose and demonstrate a new use for conformal field theory (CFT) crossing equations in the context of AdS/CFT: the computation of loop amplitudes in AdS, dual to non-planar correlators in holographic CFTs. Loops in AdS are largely unexplored, mostly due to technical difficulties in direct calculations. We revisit this problem, and the dual 1/N expansion of CFTs, in two independent ways. The first is to show how to explicitly solve the crossing equations to the first subleading order in 1/N², given a leading order solution. This is done as a systematic expansion in inverse powers of the spin, to all orders. These expansions can be resummed, leading to the CFT data for finite values of the spin. Our second approach involves Mellin space. We show how the polar part of the four-point, loop-level Mellin amplitudes can be fully reconstructed from the leading-order data. The anomalous dimensions computed with both methods agree. In the case of φ⁴ theory in AdS, our crossing solution reproduces a previous computation of the one-loop bubble diagram. We can go further, deriving the four-point scalar triangle diagram in AdS, which had never been computed. In the process, we show how to analytically derive anomalous dimensions from Mellin amplitudes with an infinite series of poles, and discuss applications to more complicated cases such as the N = 4 super-Yang-Mills theory.
A comparative assessment of statistical methods for extreme weather analysis
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62% of all cases. At the same time, the results question the general assumption that the threshold excess approach (employing partial duration series, PDS) is superior to the block maxima approach (employing annual maxima series, AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels compared to the AMS approach, whereas the opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may far outweigh the possible gain of information from including additional extreme events. This effect was visible neither from the square-root criterion nor from the standardly used graphical diagnosis (mean residual life plot), but only from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing the AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, in cases where threshold selection and dependency introduce biases to the PDS approach, but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of extreme events we recommend conditional performance measures that focus on rare events only, in addition to the standardly used unconditional indicators. The findings of this study are of relevance for a broad range of environmental variables, including meteorological and hydrological quantities.
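The AMS-versus-PDS comparison can be sketched for one synthetic station: a GEV fitted to annual maxima against a GPD fitted to threshold exceedances, compared through T-year return levels. Fitting below is maximum likelihood via scipy; the L-moment estimation favoured by the paper would replace the .fit() calls.

```python
# AMS (GEV) vs PDS (GPD) return levels on synthetic daily data.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(12)
years, per_year = 50, 365
daily = rng.gamma(shape=2.0, scale=5.0, size=(years, per_year))  # hypothetical precip

ams = daily.max(axis=1)                                   # annual maxima series
gev = genextreme.fit(ams)

threshold = np.quantile(daily, 0.99)                      # PDS threshold choice
exceed = daily[daily > threshold] - threshold             # partial duration series
gpd = genpareto.fit(exceed, floc=0)
rate = len(exceed) / years                                # exceedances per year

for T in (2, 10, 100):
    rl_ams = genextreme.ppf(1 - 1 / T, *gev)
    rl_pds = threshold + genpareto.ppf(1 - 1 / (rate * T), *gpd)
    print(f"T={T:>3} yr  AMS: {rl_ams:6.1f}   PDS: {rl_pds:6.1f}")
```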
Getting Value out of Value-Added: Report of a Workshop
ERIC Educational Resources Information Center
Braun, Henry, Ed.; Chudowsky, Naomi, Ed.; Koenig, Judith, Ed.
2010-01-01
Value-added methods refer to efforts to estimate the relative contributions of specific teachers, schools, or programs to student test performance. In recent years, these methods have attracted considerable attention because of their potential applicability for educational accountability, teacher pay-for-performance systems, school and teacher…
Value-Added Systems for Information and Instruction at Vocational-Technical Centers.
ERIC Educational Resources Information Center
Boyd, Betty Sue; Turner, Marsha K.
Information resources can be considered a series of formal processes or activities by which the potential usefulness of specific information messages being processed is enhanced. These processes may add value to the information for the user. In order to increase the possibility that the information will be useful to recipients and users,…
Analysis of Added Value of Subscores with Respect to Classification
ERIC Educational Resources Information Center
Sinharay, Sandip
2014-01-01
Brennan noted that users of test scores often want (indeed, demand) that subscores be reported, along with total test scores, for diagnostic purposes. Haberman suggested a method based on classical test theory (CTT) to determine if subscores have added value over the total score. One way to interpret the method is that a subscore has added value…
Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory
NASA Astrophysics Data System (ADS)
Wang, Na; Li, Dong; Wang, Qiwen
2012-12-01
The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance of the visibility graph from the original time series is further explored in this paper. We find that degree distributions of visibility graphs extracted from pseudo Brownian motion series obtained by the frequency domain algorithm exhibit exponential behavior, in which the exponential exponent is a binomial function of the Hurst index inherited in the time series. Our simulations show that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function differ across series and that the visibility graph inherits some important features of the original time series. Further, we convert some quarterly macroeconomic series, including the growth rates of value added of the three industries and the growth rate of Gross Domestic Product (GDP) of China, to graphs by the visibility algorithm and explore the topological properties of the associated graphs, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis we find that the degree distributions of the networks associated with the growth rates of value added of the three industries are almost exponential, while those of the networks associated with the GDP growth rate series are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have "small-world" features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detect relationships among government policy changes, community structures of the associated networks and macroeconomic dynamics, and find a great influence of government policies in China on the dynamics of GDP and the adjustment of the three industries. This work provides a new way to understand the dynamics of economic development.
Lepper-Blilie, A N; Berg, E P; Germolus, A J; Buchanan, D S; Berg, P T
2014-01-01
The objectives of this study were to educate consumers about value-added beef cuts and to evaluate their palatability responses to a value cut and three traditional cuts. Three hundred and twenty-two individuals participated in the beef value cut education seminar series presented by trained beef industry educators. Seminar participants evaluated tenderness, juiciness, flavor, and overall liking of four samples, bottom round, top sirloin, ribeye, and a value cut (Delmonico or Denver), on a 9-point scale. The ribeye and the value cut were found to be similar in all four attributes and differed from the top sirloin and bottom round. Correlation and regression analyses found that flavor was the largest influencing factor on overall liking for the ribeye, value cut, and top sirloin. The value cut is comparable to the ribeye and can be a less expensive replacement.
Sforza, Alfonso; Mancusi, Costantino; Carlino, Maria Viviana; Buonauro, Agostino; Barozzi, Marco; Romano, Giuseppe; Serra, Sossio; de Simone, Giovanni
2017-06-19
The availability of ultra-miniaturized pocket ultrasound devices (PUD) adds diagnostic power to the clinical examination, but information on the accuracy of handheld ultrasound units for immediate differential diagnosis in the emergency department (ED) is scarce. The aim of this study is to test the usefulness and accuracy of lung ultrasound (LUS), alone or combined with ultrasound of the heart and inferior vena cava (IVC), using a PUD for the differential diagnosis of acute dyspnea (AD). We included 68 patients presenting to the ED of the "Maurizio Bufalini" Hospital in Cesena (Italy) for AD. All patients underwent an integrated ultrasound examination (IUE) of lung-heart-IVC using a PUD. The series was divided into patients with dyspnea of cardiac or non-cardiac origin. We used 2 × 2 contingency tables to analyze the sensitivity, specificity, positive predictive value and negative predictive value of the three ultrasound methods and their various combinations for the diagnosis of cardiogenic dyspnea (CD), compared with the final diagnosis made by an independent emergency physician. LUS alone exhibited good sensitivity (92.6%) and specificity (80.5%). The highest accuracy (90%) for the diagnosis of CD was obtained with the combination of LUS and one of the other two methods (heart or IVC). The IUE with a PUD is a useful extension of the clinical examination: it can be readily available at the bedside or in an ambulance, requires only a few minutes and has reliable diagnostic discriminant ability in the setting of AD.
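The accuracy figures come from standard 2 × 2 contingency arithmetic, sketched below with illustrative counts chosen to be consistent with the reported sensitivity and specificity (the study's raw cell counts are not given in the abstract):

```python
# Standard 2x2 diagnostic accuracy arithmetic; counts are illustrative.
def accuracy_stats(tp, fp, fn, tn):
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

# e.g. LUS alone vs. final diagnosis: 25/27 ~ 92.6%, 33/41 ~ 80.5%
print(accuracy_stats(tp=25, fp=8, fn=2, tn=33))
```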
Ross, John; Keesbury, Jill; Hardee, Karen
2015-01-01
The method mix of contraceptive use is severely unbalanced in many countries, with over half of all use provided by just 1 or 2 methods. That tends to limit the range of user options and constrains the total prevalence of use, leading to unplanned pregnancies and births or abortions. Previous analyses of method mix distortions focused on countries where a single method accounted for more than half of all use (the 50% rule). We introduce a new measure that uses the average deviation (AD) of method shares around their own mean and apply that to a secondary analysis of method mix data for 8 contraceptive methods from 666 national surveys in 123 countries. A high AD value indicates a skewed method mix while a low AD value indicates a more uniform pattern across methods; the values can range from 0 to 21.9. Most AD values ranged from 6 to 19, with an interquartile range of 8.6 to 12.2. Using the AD measure, we identified 15 countries where the method mix has evolved from a distorted one to a better balanced one, with AD values declining, on average, by 35% over time. Countries show disparate paths in method gains and losses toward a balanced mix, but 4 patterns are suggested: (1) rise of one method partially offset by changes in other methods, (2) replacement of traditional with modern methods, (3) continued but declining domination by a single method, and (4) declines in dominant methods with increases in other methods toward a balanced mix. Regions differ markedly in their method mix profiles and preferences, raising the question of whether programmatic resources are best devoted to better provision of the well-accepted methods or to deploying neglected or new ones, or to a combination of both approaches. PMID:25745119
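The AD measure described above is simple to reproduce; a sketch (shares in percent, eight methods, so the mean share is 12.5%; a single-method mix gives 21.875, matching the stated 0-21.9 range):

```python
import numpy as np

def method_mix_ad(shares):
    """Average deviation of method shares around their own mean:
    0 for a perfectly uniform mix, ~21.9 when one method has 100%."""
    shares = np.asarray(shares, dtype=float)
    return np.abs(shares - shares.mean()).mean()

print(method_mix_ad([12.5] * 8))                  # 0.0, balanced mix
print(method_mix_ad([100, 0, 0, 0, 0, 0, 0, 0]))  # 21.875, fully skewed
```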
A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.
Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya
2007-01-01
A robust method of detecting determinism in short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism by characterizing the trajectories of the signal components obtained through singular value decomposition. Robustness is shown by calculating the proposed determinism index at different levels of white and colored noise added to a simulated chaotic signal; the method is able to detect determinism at considerably high levels of additive noise. The method is then applied to intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show sufficient evidence of determinism in all of the studied EEG data sets. The determinism is more significant for intracranial EEG recordings, particularly during seizure activity.
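A minimal sketch of the embedding step (not the paper's determinism index itself): the series is folded into a trajectory matrix and decomposed by SVD; a deterministic signal concentrates its energy in a few singular values, while noise spreads it out.

```python
import numpy as np

def trajectory_singular_values(x, window):
    """Singular values of the trajectory (Hankel) matrix of a series;
    dominance of a few values suggests deterministic structure."""
    n = len(x) - window + 1
    H = np.column_stack([x[i:i + n] for i in range(window)])
    return np.linalg.svd(H, compute_uv=False)

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 2000)
print(trajectory_singular_values(np.sin(t), 50)[:5])                # concentrated
print(trajectory_singular_values(np.sin(t) + rng.standard_normal(2000), 50)[:5])  # spread
```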
Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper
ERIC Educational Resources Information Center
Hock, Heinrich; Isenberg, Eric
2012-01-01
Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.
2018-02-01
Cost allocation in the manufacturing industry, particularly in palm oil mills, is still widely practiced on the basis of estimation, which leads to cost distortion. In addition, the processing times set by the company do not accord with the actual processing times at each work station. Hence, the purpose of this study is to eliminate non-value-added activities so that processing times can be shortened and production costs reduced. The Activity Based Costing method is used to calculate production cost, taking value-added and non-value-added activities into consideration. The results show processing time reductions of 35.75% at the Weighing Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of goods manufactured for crude palm oil is IDR 5,236.81/kg calculated by the traditional method, IDR 4,583.37/kg by Activity Based Costing before the activity improvement, and IDR 4,581.71/kg after the activity improvement. Meanwhile, the cost of goods manufactured for palm kernel is IDR 2,159.50/kg calculated by the traditional method, IDR 4,584.63/kg by Activity Based Costing before the activity improvement, and IDR 4,582.97/kg after the activity improvement.
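A toy sketch of the Activity Based Costing allocation step (all figures and activity names are invented for illustration, not the study's data): each activity's cost pool is turned into a driver rate and allocated to the product by its driver consumption.

```python
# Hypothetical activity cost pools and cost-driver volumes
activities = {
    "weighing":    {"pool": 40_000_000, "driver_total": 800,  "product_use": 500},
    "sterilizing": {"pool": 90_000_000, "driver_total": 1200, "product_use": 700},
}

def abc_unit_cost(activities, units_produced):
    """Allocate each pool at its driver rate, then express the
    product's allocated overhead per unit produced (IDR/kg)."""
    allocated = sum(a["pool"] / a["driver_total"] * a["product_use"]
                    for a in activities.values())
    return allocated / units_produced

print(abc_unit_cost(activities, units_produced=25_000))
```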
ERIC Educational Resources Information Center
Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas
2014-01-01
Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…
Value-Added Results for Public Virtual Schools in California
ERIC Educational Resources Information Center
Ford, Richard; Rice, Kerry
2015-01-01
The objective of this paper is to present value-added calculation methods that were applied to determine whether online schools performed at the same or different levels on standardized tests. This study includes information on how we approached our value-added model development and the results for 32 online public high schools in…
Stability of Teacher Value-Added Rankings across Measurement Model and Scaling Conditions
ERIC Educational Resources Information Center
Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong
2017-01-01
Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue includes the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…
NASA Astrophysics Data System (ADS)
Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki; Kawakami, Kazunori; Kikuchi, Keiichi; Miki, Hitoshi; Mochizuki, Teruhito; Ikezoe, Junpei
2001-10-01
The purpose of this study was to present an application of a novel denoising technique for improving the accuracy of cerebral blood flow (CBF) images generated from dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI). The method presented in this study was based on anisotropic diffusion (AD). The usefulness of this method was first investigated using computer simulations. We then applied the method to patient data acquired using a 1.5 T MR system. After a bolus injection of Gd-DTPA, we obtained 40-50 dynamic images with a 1.32-2.08 s time resolution in 4-6 slices. The dynamic images were processed using the AD method, and then the CBF images were generated using pixel-by-pixel deconvolution analysis. For comparison, the CBF images were also generated with or without processing the dynamic images using a median or Gaussian filter. In simulation studies, the standard deviation of the CBF values obtained after processing by the AD method was smaller than that of the CBF values obtained without any processing, while the mean value agreed well with the true CBF value. Although the median and Gaussian filters also reduced image noise, the mean CBF values were considerably underestimated compared with the true values. Clinical studies also suggested that the AD method was capable of reducing the image noise while preserving the quantitative accuracy of CBF images. In conclusion, the AD method appears useful for denoising DSC-MRI, which will make the CBF images generated from DSC-MRI more reliable.
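Anisotropic diffusion of the kind the study builds on is commonly implemented as Perona-Malik diffusion; a minimal 2-D sketch (generic, not the authors' exact filter), which smooths homogeneous regions while an edge-stopping function suppresses diffusion across strong gradients:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Perona-Malik diffusion: update each pixel from its four
    neighbours, weighting gradients by exp(-(d/kappa)^2) so edges
    are preserved. np.roll wraps at borders (a simplification)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.pad(np.ones((32, 32)), 16) + 0.3 * np.random.default_rng(0).standard_normal((64, 64))
smooth = anisotropic_diffusion(noisy)
```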
Cao, Yuzhen; Cai, Lihui; Wang, Jiang; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing
2015-08-01
In this paper, experimental neurophysiologic recording and statistical analysis are combined to investigate the nonlinear characteristic and the cognitive function of the brain. Fuzzy approximate entropy and fuzzy sample entropy are applied to characterize the model-based simulated series and electroencephalograph (EEG) series of Alzheimer's disease (AD). The effectiveness and advantages of these two kinds of fuzzy entropy are first verified through the simulated EEG series generated by the alpha rhythm model, including stronger relative consistency and robustness. Furthermore, in order to detect the abnormality of irregularity and chaotic behavior in the AD brain, the complexity features based on these two fuzzy entropies are extracted in the delta, theta, alpha, and beta bands. It is demonstrated that, due to the introduction of fuzzy set theory, the fuzzy entropies could better distinguish EEG signals of AD from that of the normal than the approximate entropy and sample entropy. Moreover, the entropy values of AD are significantly decreased in the alpha band, particularly in the temporal brain region, such as electrode T3 and T4. In addition, fuzzy sample entropy could achieve higher group differences in different brain regions and higher average classification accuracy of 88.1% by support vector machine classifier. The obtained results prove that fuzzy sample entropy may be a powerful tool to characterize the complexity abnormalities of AD, which could be helpful in further understanding of the disease.
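A compact sketch of fuzzy sample entropy as described above (a generic implementation, with the tolerance r given as a fraction of the standard deviation; parameter choices are illustrative):

```python
import numpy as np

def fuzzy_sample_entropy(x, m=2, r=0.2, n=2):
    """Sample entropy with the hard distance threshold replaced by a
    fuzzy exponential membership exp(-d^n / r), which improves
    relative consistency on short, noisy series."""
    x = np.asarray(x, dtype=float)
    r *= x.std()

    def phi(m):
        # Baseline-removed embedding vectors of length m
        emb = np.array([x[i:i + m] - x[i:i + m].mean() for i in range(len(x) - m)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev
        sim = np.exp(-(d ** n) / r)
        return (sim.sum() - len(emb)) / (len(emb) * (len(emb) - 1))  # drop self-matches

    return -np.log(phi(m + 1) / phi(m))

print(fuzzy_sample_entropy(np.random.default_rng(0).standard_normal(500)))
```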
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman (2008) suggested a method based on classical test theory to determine whether subscores have added value over total scores. This paper provides a literature review and reports when subscores were found to have added value for…
26 CFR 1.412(c)(2)-1 - Valuation of plan assets; reasonable actuarial valuation methods.
Code of Federal Regulations, 2014 CFR
2014-04-01
... computed by— (i) Determining the fair market value of plan assets at least annually, (ii) Adding the...) In determining the adjusted value of plan assets for a prior valuation date, there is added to the... market value, amounts are subtracted from this account and added, to the extent necessary, to raise the...
Hameed, Abdul; Zehra, Syeda T; Shah, Syed J A; Khan, Khalid M; Alharthy, Rima D; Furtmann, Norbert; Bajorath, Jürgen; Tahir, Muhammad N; Iqbal, Jamshed
2015-11-01
Cholinesterases, acetylcholinesterase (AChE) and butyrylcholinesterase (BChE), play a role in the cholinergic deficit that evidently leads to Alzheimer's disease (AD). Inhibition of cholinesterases with small molecules is an attractive strategy in AD therapy. This study describes the synthesis of a series of pyrido[2,3-b]pyrazines (6a-6q), their inhibitory activities against both cholinesterases, AChE and BChE, and molecular docking studies. The bioactivity data showed that 3-(3'-nitrophenyl)pyrido[2,3-b]pyrazine 6n is a potent dual inhibitor of both AChE and BChE, with IC50 values of 0.466 ± 0.121 and 1.89 ± 0.05 μM, respectively. The analogues 3-(3'-methylphenyl)pyrido[2,3-b]pyrazine 6c and 3-(3'-fluorophenyl)pyrido[2,3-b]pyrazine 6f were found to be selective inhibitors of BChE (IC50 = 0.583 ± 0.052 μM) and AChE (IC50 = 0.899 ± 0.10 μM), respectively. Molecular docking studies of the active compounds suggested their putative binding modes with the cholinesterases. The potent compounds of the series could serve as good leads for the development of new cholinesterase inhibitors.
Patient-centered care as value-added service by compounding pharmacists.
McPherson, Timothy B; Fontane, Patrick E; Day, Jonathan R
2013-01-01
The term "value-added" is widely used to describe business and professional services that complement a product or service or that differentiate it from competing products and services. The objective of this study was to determine compounding pharmacists' self-perceptions of the value-added services they provide. A web-based survey method was used. Respondents' perceptions of their most important value-added service frequently fell into one of two categories: (1) enhanced pharmacist contribution to developing and implementing patient therapeutic plans and (2) providing customized medications of high pharmaceutical quality. The results were consistent with a hybrid community clinical practice model for compounding pharmacists wherein personalization of the professional relationship is the value-added characteristic.
ERIC Educational Resources Information Center
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
ERIC Educational Resources Information Center
Isenberg, Eric; Hock, Heinrich
2011-01-01
This report presents the value-added models that will be used to measure school and teacher effectiveness in the District of Columbia Public Schools (DCPS) in the 2010-2011 school year. It updates the earlier technical report, "Measuring Value Added for IMPACT and TEAM in DC Public Schools." The earlier report described the methods used…
77 FR 34881 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-12
...-400, -400D, and -400F series airplanes. This proposed AD was prompted by reports of crown frame web... this AD to prevent complete fracture of the crown frame assembly, and consequent damage to the skin and... any of the following methods: Federal eRulemaking Portal: Go to http://www.regulations.gov . Follow...
Consumption of Added Sugars among U.S. Adults, 2005-2010
... the National School Lunch Program. Data source and methods Data from the National Health and Nutrition Examination ... percentages were estimated using Taylor Series Linearization, a method that incorporates the sample weights and sample design. ...
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2012-01-01
The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
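A simplified sketch of how such a benchmark series can be built (omitting the outliers, missing-data periods and network-wide trend the study also inserted; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_breaks = 100, 3

true = 0.5 * rng.standard_normal(n_years)  # homogeneous "truth" (anomalies)

# Random break-type inhomogeneities: random positions, normally
# distributed sizes; each shift persists from its breakpoint onward
positions = np.sort(rng.choice(np.arange(5, n_years - 5), n_breaks, replace=False))
sizes = rng.normal(0.0, 0.8, n_breaks)
inhomogeneous = true.copy()
for pos, size in zip(positions, sizes):
    inhomogeneous[pos:] += size
```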
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
Can Value Added Add Value to Teacher Evaluation?
ERIC Educational Resources Information Center
Darling-Hammond, Linda
2015-01-01
The five thoughtful papers included in this issue of "Educational Researcher" ("ER") raise new questions about the use of value-added methods (VAMs) to estimate teachers' contributions to students' learning as part of personnel evaluation. The papers address both technical and implementation concerns, considering potential…
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
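A minimal sketch of the multivariate residual test described above (the residuals below are simulated placeholders, not the three-tank data): the covariance is calibrated on a fault-free period and each residual vector is tested with the Mahalanobis distance.

```python
import numpy as np

def mahalanobis_alarms(residuals, cov, threshold):
    """Flag time steps whose residual vector r satisfies
    r^T C^{-1} r > threshold (outside the confidence ellipsoid)."""
    inv_cov = np.linalg.inv(cov)
    d2 = np.einsum("ti,ij,tj->t", residuals, inv_cov, residuals)
    return d2 > threshold

rng = np.random.default_rng(0)
resid = 0.1 * rng.standard_normal((200, 3))      # 3 variables, 200 steps
resid[150:] += [0.3, -0.2, 0.25]                 # simulated leak bias
cov = np.cov(resid[:100], rowvar=False)          # fault-free calibration
alarms = mahalanobis_alarms(resid, cov, threshold=11.34)  # chi2(3), ~1% level
```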
Online Visualization and Value Added Services of MERRA-2 Data at GES DISC
NASA Technical Reports Server (NTRS)
Shen, Suhung; Ostrenga, Dana M.; Vollmer, Bruce E.; Hegde, Mahabaleshwa S.; Wei, Jennifer C.; Bosilovich, Michael G.
2017-01-01
NASA climate reanalysis datasets from MERRA-2, distributed at the Goddard Earth Sciences Data and Information Services Center (GES DISC), have been used in broad research areas such as climate variability, extreme weather, agriculture, renewable energy, and air quality. The datasets contain numerous variables for atmosphere, land, and ocean, grouped into 95 products. The total archived volume was approximately 337 TB (approximately 562K files) at the end of October 2017. Due to the large number of products and files, and the large data volumes, it can be a challenge for users to find and download the data of interest. The support team at GES DISC, working closely with the MERRA-2 science team, has created and continues to work on value-added data services to best meet the needs of a broad user community. This presentation, using aerosol over the Asian monsoon region as an example, provides an overview of the MERRA-2 data services at GES DISC, including: How do I find the data? How many data access methods are provided? Which data access methods are best for me? How do I download subsetted (parameter, spatial, temporal) data and save it in a preferred spatial resolution and data format? How do I visualize and explore the data online? In addition, we introduce a future online analytic tool designed to support applications research, focusing on long-term hourly time-series data access and analysis.
Consumption of Added Sugar among U.S. Children and Adolescents, 2005-2008
... 130% of the poverty level. Data source and methods Data from the National Health and Nutrition Examination ... percentages were estimated using Taylor Series Linearization, a method that incorporates the sample weights and sample design. ...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-23
... Airworthiness Directives; General Electric Company (GE) CJ610 Series Turbojet Engines and CF700 Series Turbofan... adopting a new airworthiness directive (AD) for GE CJ610 series turbojet engines and CF700 turbofan engines... part 39 with a proposed AD. The proposed AD applies to GE CJ610 series turbojet engines and CF700...
A Review of Some Aspects of Robust Inference for Time Series.
1984-09-01
A Review of Some Aspects of Robust Inference for Time Series, by R. D. Martin, Technical Report No. 53, September 1984, Department of Statistics, University of Washington, Seattle. From the report: "One cannot hope to have a good method for dealing with outliers in time series by using only an instantaneous nonlinear transformation of the data."
Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M
2010-08-01
The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added times than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency reduction frameworks can be applied with use of a graphic user interface screen capture program.
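The reported percentages reduce to summing labeled segment durations; a trivial sketch (the durations are invented):

```python
def workflow_shares(segments):
    """Percent of table time in value-added (VA), business value-added
    (BVA) and non-value-added (NVA) activity from (label, seconds) pairs."""
    total = sum(d for _, d in segments)
    shares = {}
    for label, dur in segments:
        shares[label] = shares.get(label, 0) + dur
    return {k: round(100 * v / total, 1) for k, v in shares.items()}

print(workflow_shares([("VA", 870), ("BVA", 330), ("NVA", 800)]))
```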
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-11
... for the Member States of the European Community, has issued EASA AD 2011-0224-E, dated November 24...-200. (h) Related Information Refer to MCAI European Aviation Safety Agency (EASA) AD 2011- 0224-E... of the following methods: Federal eRulemaking Portal: Go to http://www.regulations.gov . Follow the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
...-0691; Directorate Identifier 2011-NE-26-AD] RIN 2120-AA64 Airworthiness Directives; Lycoming Engines Model TIO 540-A Series Reciprocating Engines AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... directive (AD) for Lycoming Engines model TIO 540-A series reciprocating engines. The existing AD, AD 71-13...
Efficient Improvement of Silage Additives by Using Genetic Algorithms
Davies, Zoe S.; Gilbert, Richard J.; Merry, Roger J.; Kell, Douglas B.; Theodorou, Michael K.; Griffith, Gareth W.
2000-01-01
The enormous variety of substances which may be added to forage in order to manipulate and improve the ensilage process presents an empirical, combinatorial optimization problem of great complexity. To investigate the utility of genetic algorithms for designing effective silage additive combinations, a series of small-scale proof of principle silage experiments were performed with fresh ryegrass. Having established that significant biochemical changes occur over an ensilage period as short as 2 days, we performed a series of experiments in which we used 50 silage additive combinations (prepared by using eight bacterial and other additives, each of which was added at six different levels, including zero [i.e., no additive]). The decrease in pH, the increase in lactate concentration, and the free amino acid concentration were measured after 2 days and used to calculate a “fitness” value that indicated the quality of the silage (compared to a control silage made without additives). This analysis also included a “cost” element to account for different total additive levels. In the initial experiment additive levels were selected randomly, but subsequently a genetic algorithm program was used to suggest new additive combinations based on the fitness values determined in the preceding experiments. The result was very efficient selection for silages in which large decreases in pH and high levels of lactate occurred along with low levels of free amino acids. During the series of five experiments, each of which comprised 50 treatments, there was a steady increase in the amount of lactate that accumulated; the best treatment combination was that used in the last experiment, which produced 4.6 times more lactate than the untreated silage. The additive combinations that were found to yield the highest fitness values in the final (fifth) experiment were assessed to determine a range of biochemical and microbiological quality parameters during full-term silage fermentation. We found that these combinations compared favorably both with uninoculated silage and with a commercial silage additive. The evolutionary computing methods described here are a convenient and efficient approach for designing silage additives. PMID:10742224
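A sketch of the evolutionary loop described above (the selection scheme and rates are assumptions; in the study, fitness values come from laboratory silage assays rather than a function call):

```python
import numpy as np

rng = np.random.default_rng(0)
N_ADDITIVES, N_LEVELS, POP = 8, 6, 50   # 8 additives, 6 levels, 50 combinations

def next_generation(pop, scores, mut_rate=0.1):
    """Breed the next 50-treatment experiment from measured fitness:
    rank selection, uniform crossover, point mutation of levels."""
    parents = pop[np.argsort(scores)[::-1][:POP // 2]]
    children = []
    for _ in range(POP):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(N_ADDITIVES) < 0.5, a, b)
        mask = rng.random(N_ADDITIVES) < mut_rate
        child[mask] = rng.integers(N_LEVELS, size=mask.sum())
        children.append(child)
    return np.array(children)

pop = rng.integers(N_LEVELS, size=(POP, N_ADDITIVES))  # experiment 1 (random)
scores = rng.random(POP)       # placeholder for lab-measured fitness values
pop = next_generation(pop, scores)                     # experiment 2
```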
A Nonlinear Dynamical Systems based Model for Stochastic Simulation of Streamflow
NASA Astrophysics Data System (ADS)
Erkyihun, S. T.; Rajagopalan, B.; Zagona, E. A.
2014-12-01
Traditional time series methods model the evolution of the underlying process as a linear or nonlinear function of the autocorrelation. These methods capture the distributional statistics but are incapable of providing insights into the dynamics of the process, its potential regimes, and its predictability. This work develops a nonlinear dynamical model for stochastic simulation of streamflows. First, a wavelet spectral analysis is employed on the flow series to isolate dominant orthogonal quasi-periodic time series components. The periodic bands are summed to form the 'signal' component of the time series, with the residual forming the 'noise' component. Next, the underlying nonlinear dynamics of this combined band time series is recovered: the univariate time series is embedded in a d-dimensional space with an appropriate lag T to reconstruct the state space in which the dynamics unfolds. Predictability is assessed by quantifying the divergence of trajectories in the state space over time, as Lyapunov exponents. The nonlinear dynamics, in conjunction with K-nearest-neighbor time resampling, is used to simulate the combined band, to which the noise component is added to simulate the time series. We demonstrate this method by applying it to data at Lees Ferry that comprise both the paleo-reconstructed and naturalized historical annual flows spanning 1490-2010. We identify interesting dynamics of the signal in the flow series and epochal behavior of predictability. These will be of immense use for water resources planning and management.
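A sketch of the state-space reconstruction and nearest-neighbour resampling steps (the dimension, lag and k below are illustrative; the wavelet signal/noise split is omitted):

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Time-delay embedding: rows are [x_t, x_{t+lag}, ..., x_{t+(dim-1)lag}]."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

def knn_simulate(x, dim=3, lag=1, k=5, length=100, seed=0):
    """K-nearest-neighbour resampling in the reconstructed state space:
    step to the successor of a randomly chosen neighbour of the
    current state, yielding a simulated trajectory."""
    rng = np.random.default_rng(seed)
    states = delay_embed(np.asarray(x, float), dim, lag)
    idx = rng.integers(len(states) - 1)
    out = []
    for _ in range(length):
        d = np.linalg.norm(states[:-1] - states[idx], axis=1)
        idx = rng.choice(np.argsort(d)[1:k + 1]) + 1   # successor of a neighbour
        out.append(states[idx][-1])
    return np.array(out)

flows = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.default_rng(1).standard_normal(600)
sim = knn_simulate(flows)
```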
Analysis of production flow process with lean manufacturing approach
NASA Astrophysics Data System (ADS)
Siregar, Ikhsan; Arif Nasution, Abdillah; Prasetio, Aji; Fadillah, Kharis
2017-09-01
This research was conducted at a company engaged in the production of Fast Moving Consumer Goods (FMCG). The production process at the company still contains several activities that cause waste. Non-value-added (NVA) activities are still widely found in practice, so the cycle time to make the product becomes longer. One form of improvement on the production line is to apply the lean manufacturing method to identify waste along the value stream and find non-value-added activities. Non-value-added activities can be eliminated and reduced by utilizing value stream mapping and identifying them with process activity mapping. The results show that 26% of activities are value-added and 74% are non-value-added. The current state map of the production process gives a process lead time of 678.11 minutes and a processing time of 173.94 minutes. Under the proposed improvement, value-added time is 41% of production process activities while non-value-added time is 59%, and the future state map of the production process gives a process lead time of 426.69 minutes and a processing time of 173.89 minutes.
Dihydropyrimidine based hydrazine dihydrochloride derivatives as potent urease inhibitors.
Khan, Ajmal; Hashim, Jamshed; Arshad, Nuzhat; Khan, Ijaz; Siddiqui, Naureen; Wadood, Abdul; Ali, Muzaffar; Arshad, Fiza; Khan, Khalid Mohammed; Choudhary, M Iqbal
2016-02-01
Four series of heterocyclic compounds, 4-dihydropyrimidine-2-thiones 7-12 (series A), N,S-dimethyl-dihydropyrimidines 13-18 (series B), hydrazine derivatives of dihydropyrimidines 19-24 (series C), and tetrazolo dihydropyrimidine derivatives 25-30 (series D), were synthesized and evaluated for in vitro urease inhibitory activity. Series B-D were examined for urease inhibition for the first time. Series A and C were found to be significantly active, with IC50 values between 34.7-42.9 and 15.0-26.0 μM, respectively. The structure-activity relationship showed that the free S atom and the hydrazine moiety are the key pharmacophores against the urease enzyme. Kinetic studies of the active series A (7-12) and C (19-24) were carried out to determine their modes of inhibition and dissociation constants Ki. Compounds of series A and series C showed a mixed type of inhibition, with Ki values ranging between 15.76-25.66 and 14.63-29.42 μM, respectively. Molecular docking showed that all the active compounds of both series have significant binding interactions with the active site, especially the Ni ion, of the urease enzyme. The cytotoxicity of series A-D was also evaluated against mammalian mouse fibroblast 3T3 cell lines, and no toxicity was observed in this cellular model.
NASA Astrophysics Data System (ADS)
Heinemeier, Jan; Jungner, Högne; Lindroos, Alf; Ringbom, Åsa; von Konow, Thorborg; Rud, Niels
1997-03-01
A method for refining lime mortar samples for 14C dating has been developed. It includes mechanical and chemical separation of mortar carbonate with optical control of the purity of the samples. The method has been applied to a large series of AMS datings on lime mortar from three medieval churches on the Åland Islands, Finland. The datings show convincing internal consistency and confine the construction time of the churches to AD 1280-1380 with a most probable date just before AD 1300. We have also applied the method to the controversial Newport Tower, Rhode Island, USA. Our mortar datings confine the building to colonial time in the 17th century and thus refute claims of Viking origin of the tower. For the churches, a parallel series of datings of organic (charcoal) inclusions in the mortar show less reliable results than the mortar samples, which is ascribed to poor association with the construction time.
ERIC Educational Resources Information Center
National Center on Performance Incentives, 2008
2008-01-01
In "Value-Added and Other Methods for Measuring School Performance: An Analysis of Performance Measurement Strategies in Teacher Incentive Fund Proposals"--a paper presented at the February 2008 National Center on Performance Incentives research to policy conference--Robert Meyer and Michael Christian examine select performance-pay plans…
ERIC Educational Resources Information Center
Nelson, Brian; Nugent, Rebecca; Rupp, Andre A.
2012-01-01
This special issue of "JEDM" was dedicated to bridging work done in the disciplines of "educational and psychological assessment" and "educational data mining" (EDM) via the assessment design and implementation framework of "evidence-centered design" (ECD). It consisted of a series of five papers: one…
Essays on School Quality and Student Outcomes
ERIC Educational Resources Information Center
Crispin, Laura M.
2012-01-01
In my first chapter, I explore the relationship between school size and student achievement where, conditional on observable educational inputs, school size is a proxy for factors that are difficult to measure directly ( e.g., school climate and organization). Using data from the NELS:88, I estimate a series of value-added education production…
Research on the Characteristics of Alzheimer's Disease Using EEG
NASA Astrophysics Data System (ADS)
Ueda, Taishi; Musha, Toshimitsu; Yagi, Tohru
In this paper, we propose a new method for diagnosing Alzheimer's disease (AD) on the basis of electroencephalograms (EEG). The method, termed the Power Variance Function (PVF) method, captures the variance of the signal power at each frequency. The power of the EEG at each frequency was calculated using the wavelet transform, and the corresponding variances were defined as the PVF. After the PVF histogram of 55 healthy people was approximated by a Generalized Extreme Value (GEV) distribution, we evaluated the PVF of 22 patients with AD and 25 patients with mild cognitive impairment (MCI). The values for all AD and MCI subjects were abnormal. In particular, the PVF in the θ band was abnormally high for MCI patients, and the PVF in the α band was low for AD patients.
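A sketch of the PVF computation and GEV fit (the Morlet parameters and reference data below are placeholders, not the study's recordings):

```python
import numpy as np
from scipy import stats

def power_variance(x, freqs, fs, n_cycles=5):
    """Power Variance Function sketch: Morlet-wavelet power of the
    signal at each frequency, then the variance of that power over time."""
    t = np.arange(-1, 1, 1 / fs)
    pvf = []
    for f in freqs:
        sigma = n_cycles / (2 * np.pi * f)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        power = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
        pvf.append(power.var())
    return np.array(pvf)

eeg = np.random.default_rng(0).standard_normal(2560)   # placeholder, 10 s at 256 Hz
print(power_variance(eeg, freqs=[4, 6, 8, 10, 12], fs=256))

# Fit a Generalized Extreme Value distribution to a healthy-group PVF
# sample and locate a new subject's value within it
healthy_pvf = np.random.default_rng(1).gumbel(1.0, 0.3, size=55)  # placeholder
shape, loc, scale = stats.genextreme.fit(healthy_pvf)
print(stats.genextreme.cdf(1.9, shape, loc, scale))    # percentile of new value
```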
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
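A rough sketch of the threshold-selection idea (using a plain windowed FFT in place of the patented log Gabor transform; the frame size and threshold are arbitrary):

```python
import numpy as np

def compress_spectrum(x, frame=256, threshold_db=-40.0):
    """Per frame: FFT to phase and log-magnitude, then keep only bins
    within threshold_db of the frame peak, storing their indices so
    the frame can later be approximately rebuilt."""
    win = np.hanning(frame)
    out = []
    for i in range(0, len(x) - frame, frame // 2):
        spec = np.fft.rfft(x[i:i + frame] * win)
        log_mag = 20 * np.log10(np.abs(spec) + 1e-12)
        keep = np.nonzero(log_mag > log_mag.max() + threshold_db)[0]
        out.append((keep, log_mag[keep], np.angle(spec)[keep]))
    return out

packed = compress_spectrum(np.sin(2 * np.pi * 50 * np.arange(4096) / 1000))
```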
An information-theoretical perspective on weighted ensemble forecasts
NASA Astrophysics Data System (ADS)
Weijs, Steven V.; van de Giesen, Nick
2013-08-01
This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information in an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
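Under a single mean constraint, a minimum relative entropy update has a closed form, an exponential tilting of the prior weights; a sketch (the ensemble values and forecast mean are invented):

```python
import numpy as np
from scipy.optimize import brentq

def mre_weights(values, target_mean):
    """Minimum relative entropy update of uniform ensemble weights
    subject to a forecast mean: w_i ∝ exp(lambda * x_i), with lambda
    solved so the weighted mean matches the constraint."""
    x = np.asarray(values, dtype=float)

    def weights(lam):
        z = lam * x
        w = np.exp(z - z.max())          # shift for numerical stability
        return w / w.sum()

    lam = brentq(lambda L: weights(L) @ x - target_mean, -50.0, 50.0)
    return weights(lam)

flows = np.array([80.0, 95.0, 100.0, 110.0, 140.0])  # historical ensemble
w = mre_weights(flows, target_mean=120.0)            # forecast shifts the mean
print(w, w @ flows)
```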
An Ad-Hoc Adaptive Pilot Model for Pitch Axis Gross Acquisition Tasks
NASA Technical Reports Server (NTRS)
Hanson, Curtis E.
2012-01-01
An ad-hoc algorithm is presented for real-time adaptation of the well-known crossover pilot model and applied to pitch axis gross acquisition tasks in a generic fighter aircraft. Off-line tuning of the crossover model to human pilot data gathered in a fixed-based high fidelity simulation is first accomplished for a series of changes in aircraft dynamics to provide expected values for model parameters. It is shown that in most cases, for this application, the traditional crossover model can be reduced to a gain and a time delay. The ad-hoc adaptive pilot gain algorithm is shown to have desirable convergence properties for most types of changes in aircraft dynamics.
NASA Astrophysics Data System (ADS)
Soares, P. M. M.; Cardoso, R. M.
2017-12-01
Regional climate models (RCM) are used with increasing resolutions pursuing to represent in an improved way regional to local scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacing approaching the convective-permitting regimes are representative examples. The climate runs are computationally very demanding and do not always show improvements. These depend on the region, variable and object of study. The gains or losses associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer resolution simulations, in comparison to its forcing data or coarser resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between diverse resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAVs metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are here presented. The EURO-CORDEX simulations at both resolutions (0.44o,0.11o) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons, for precipitation. When both RCM resolutions are directly compared the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher resolution improvement is generally greater than the low resolution for seasons and regions. For temperature, the added value is smaller. AcknowledgmentsThe authors wish to acknowledge SOLAR (PTDC/GEOMET/7078/2014) and FCT UID/GEO/50019/ 2013 (Instituto Dom Luiz) projects.
A Brainnetome Atlas Based Mild Cognitive Impairment Identification Using Hurst Exponent
Long, Zhuqing; Jing, Bin; Guo, Ru; Li, Bo; Cui, Feiyi; Wang, Tingting; Chen, Hongwen
2018-01-01
Mild cognitive impairment (MCI), which generally represents the transition state between normal aging and the early changes related to Alzheimer's disease (AD), has drawn increasing attention from neuroscientists because efficient AD treatments need to begin early, ahead of irreversible brain tissue damage. Effective MCI identification methods are therefore urgently needed and may be of great importance for the clinical intervention of AD. In this article, rescaled range analysis, which can effectively detect the temporal complexity of a time series, was utilized to calculate the Hurst exponent (HE) of functional magnetic resonance imaging (fMRI) data at the voxel level for 64 MCI patients and 60 healthy controls (HCs). The average HE values of each region of interest (ROI) in the brainnetome atlas were then extracted and compared between MCI and HC. Finally, the abnormal average HE values were adopted as classification features for a proposed support vector machine (SVM) based identification algorithm, and classification performance was estimated with leave-one-out cross-validation (LOOCV). Our results indicated 83.1% accuracy, 82.8% sensitivity and 83.3% specificity, and an area under the curve of 0.88, suggesting that the HE index could serve as an effective feature for MCI identification. Furthermore, the brain regions with abnormal HE in MCI were predominantly involved in the left middle frontal gyrus, right hippocampus, bilateral parahippocampal gyrus, bilateral amygdala, left cingulate gyrus, left insular gyrus, left fusiform gyrus, left superior parietal gyrus, left orbital gyrus and left basal ganglia. PMID:29692721
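A compact rescaled-range sketch of the HE computation as applied to a voxel or ROI time series (the window-doubling and chunking scheme are implementation choices):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) Hurst exponent: slope of log(R/S) vs.
    log(window length). H ~ 0.5 for white noise, H > 0.5 for
    persistent (long-memory) signals."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())       # cumulative deviation
            if seg.std() > 0:
                vals.append((dev.max() - dev.min()) / seg.std())
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

print(hurst_rs(np.random.default_rng(0).standard_normal(4096)))  # ~0.5
```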
Method for conversion of carbohydrate polymers to value-added chemical products
Zhang, Zongchao C [Norwood, NJ; Brown, Heather M [Kennewick, WA; Su, Yu [Richland, WA
2012-02-07
Methods are described for the conversion of carbohydrate polymers, including cellulose, in ionic liquids, yielding value-added chemicals including, e.g., glucose and 5-hydroxymethylfurfural (HMF) at temperatures below 120 °C. Catalyst compositions are described that include various mixed metal halides and are selective for specified products, with yields, e.g., of up to about 56% in a single-step process.
Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models
ERIC Educational Resources Information Center
Isenberg, Eric; Walsh, Elias
2015-01-01
We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the…
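A loose illustration of how co-teaching dosage weights might be assigned (a hypothetical simplification for intuition, not the estimators of Hock and Isenberg): "equal credit" gives each co-teacher the full student, while a partial-credit scheme splits the student across teachers.

```python
def roster_weights(rosters, split=False):
    """Teacher-student weights from {student: [teachers]} rosters.
    split=False: each co-teacher gets full credit for shared students;
    split=True: each student's weight is divided equally among teachers."""
    weights = {}
    for student, teachers in rosters.items():
        for t in teachers:
            weights[(t, student)] = 1.0 / len(teachers) if split else 1.0
    return weights

rosters = {"s1": ["A"], "s2": ["A", "B"], "s3": ["B"]}
print(roster_weights(rosters, split=True))
```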
Numazawa, M; Yoshimura, A; Oshibe, M
1998-01-01
To gain insight into the relationships between the aromatase inhibitory activity of 6-alkyl-substituted androgens, potent competitive inhibitors, and their ability to serve as a substrate of aromatase, we studied the aromatization of a series of 6alpha- and 6beta-alkyl (methyl, ethyl, n-propyl, n-pentyl and n-heptyl)-substituted androst-4-ene-3,17-diones (ADs) and their androsta-1,4-diene-3,17-dione (ADD) derivatives with human placental aromatase, by gas chromatography-mass spectrometry. Among the inhibitors examined, ADD and its 6alpha-alkyl derivatives with alkyl functions less than three carbons long, together with 6beta-methyl ADD, are suicide substrates of aromatase. All of the steroids, except for 6beta-n-pentyl ADD and its n-heptyl analogue as well as 6beta-n-heptyl AD, were found to be converted into the corresponding 6-alkyl oestrogens. The 6-methyl steroids were aromatized most efficiently in each series, and the aromatization rate essentially decreased in proportion to the length of the 6-alkyl chains in each series, where the 6alpha-alkyl androgens were more efficient substrates than the corresponding 6beta isomers. The Vmax of 6alpha-methyl ADD was approx. 2.5-fold that of the natural substrate AD and approx. 3-fold that of the parent ADD. On the basis of this, along with the facts that the rates of a mechanism-based inactivation of aromatase by ADD and its 6alpha-methyl derivative are similar, it is implied that alignment of 6alpha-methyl ADD in the active site could favour the pathway leading to oestrogen over the inactivation pathway, compared with that of ADD. The relative apparent Km values for the androgens obtained in this study are different from the relative Ki values obtained previously, indicating that there is a difference between the ability to serve as an inhibitor and the ability to serve as a substrate in the 6-alkyl androgen series. PMID:9405288
The short time Fourier transform and local signals
NASA Astrophysics Data System (ADS)
Okumura, Shuhei
In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window of the input series. The window moves by one time point at a time, so the windows overlap. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, with their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main result is that for a white noise input, the STFT output is a complex-valued stationary time series whose time and time-frequency dependency structure, such as the cross-covariance functions, can be derived. The primary focus is the detection of local periodic signals. I present a method for detecting local signals by computing the probability that the squared-modulus STFT series exhibits a run of consecutive values exceeding a threshold, starting from one exceedance that immediately follows an observation below the threshold. I discuss a method to reduce the computation of such probabilities using the Box-Cox transformation and the delta method, and show that it works well in comparison with the Monte Carlo simulation method.
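A minimal sketch of the overlapping, step-one STFT the thesis analyzes, showing how a local periodic burst inflates the modulus series in its frequency bin (the window length and burst parameters are arbitrary):

```python
import numpy as np

def stft_modulus(x, window=64):
    """Short time Fourier transform with the window moved one sample
    at a time; returns the modulus series per frequency bin."""
    frames = np.lib.stride_tricks.sliding_window_view(x, window)
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time)

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
t = np.arange(400, 600)
x[400:600] += 3 * np.sin(2 * np.pi * t / 16)   # local periodic signal
mod = stft_modulus(x)    # bin 4 (= 64/16) shows a run of large values
```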
Added Value of Assessing Adnexal Masses with Advanced MRI Techniques
Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.
2015-01-01
This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542
How Often Do Subscores Have Added Value? Results from Operational and Simulated Data
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman suggested a method based on classical test theory to determine whether subscores have added value over total scores. In this article I first provide a rich collection of results regarding when subscores were found to have added…
Advertising family planning in the press: direct response results from Bangladesh.
Harvey, P D
1984-01-01
In 1977 and again in 1982, a series of couponed ads were run in three major Bangladeshi newspapers to test the relative effectiveness of different family planning themes. The ads offered a free booklet about methods of family planning (1977) or "detailed information on contraceptives" (1982) in the context of family health, the wife's happiness, the children's future, and family economics. The most effective ads, by a highly significant margin, were those stressing the importance of family economics (food and shelter) and the children's (sons') future. The least effective ads stressed the benefits of family planning for the wife.
Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea
NASA Astrophysics Data System (ADS)
Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju
2014-08-01
A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface; the Rankine source method is applied to the inner domain, while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and motions in head sea for the Series 60 ship and the S175 containership are presented and verified. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain, and it shows good numerical stability, avoiding the divergence problems encountered for ships with flare.
Araújo, Ricardo de A
2010-12-01
This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) and a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. For each prediction model generated, the method also uses a behavioral statistical test and a phase-fix procedure to adjust the time-phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method on four Brazilian stock market time series, and the achieved results are discussed and compared with results found with random walk models and with the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods.
Bayesian Methods for Scalable Multivariate Value-Added Assessment
ERIC Educational Resources Information Center
Lockwood, J. R.; McCaffrey, Daniel F.; Mariano, Louis T.; Setodji, Claude
2007-01-01
There is increased interest in value-added models relying on longitudinal student-level test score data to isolate teachers' contributions to student achievement. The complex linkage of students to teachers as students progress through grades poses both substantive and computational challenges. This article introduces a multivariate Bayesian…
Prognostic Value of Facial Nerve Antidromic Evoked Potentials in Bell Palsy: A Preliminary Study
WenHao, Zhang; Minjie, Chen; Chi, Yang; Weijie, Zhang
2012-01-01
To analyze the value of facial nerve antidromic evoked potentials (FNAEPs) in predicting recovery from Bell palsy. Study Design: Retrospective study using electrodiagnostic data and medical chart review. Methods: A series of 46 patients treated for unilateral Bell palsy was included. According to a taste test, 26 cases had an associated taste disorder (Group 1) and 20 did not (Group 2). Facial function was graded clinically with the Stennert system at monthly follow-up. Outcomes were evaluated with the clinical recovery rate (CRR) and FNAEP. FNAEPs were recorded at the posterior wall of the external auditory meatus on both sides. Results: Mean CRR of Group 1 and Group 2 was 61.63% and 75.50%, respectively. There was a statistically significant difference between the two groups, and also in the amplitude difference (AD) of the FNAEP. Mean ± SD of the AD was −6.96% ± 12.66% in patients with an excellent result, −27.67% ± 27.70% with a good result, and −66.05% ± 31.76% with a poor result. Conclusions: FNAEP should be monitored in patients with intratemporal facial palsy at an early stage. FNAEP at the posterior wall of the external auditory meatus was sensitive in detecting signs of taste disorder. There was a close correlation between FNAEPs and facial nerve recovery. PMID:22164176
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem that researchers in many fields have studied extensively. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, and especially L1-SSA, can provide better imputation than other methods.
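For readers unfamiliar with SSA-based gap filling, the sketch below shows the standard (L2, SVD-based) iterative scheme on which such methods build; the paper's L1-SSA variant replaces the SVD with an L1-norm low-rank decomposition, which is not reproduced here. The window length, rank, and iteration count are illustrative choices.

```python
import numpy as np

def ssa_reconstruct(x, L, r):
    """Rank-r SSA approximation of series x with window length L."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                     # truncated reconstruction
    out = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):                                   # diagonal averaging (Hankelization)
        out[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return out / cnt

def ssa_impute(x, L=12, r=3, n_iter=100):
    """Iteratively replace NaNs with their low-rank SSA reconstruction."""
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    filled = np.where(miss, np.nanmean(x), x)            # crude initial guess
    for _ in range(n_iter):
        filled[miss] = ssa_reconstruct(filled, L, r)[miss]
    return filled
```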
Extended space expectation values in quantum dynamical system evolutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demiralp, Metin
2014-10-06
The time-variant power series expansion for the expectation value of a given quantum dynamical operator is a well-known and well-investigated issue in quantum dynamics. However, depending on singularities of the operator and the Hamiltonian, this expansion either may not exist or may not converge for any time instant beyond the beginning of the evolution. This work focuses on this issue and seeks certain cures for these negativities. We work in the extended space obtained by adding all images of the initial wave function under positive integer powers of the system Hamiltonian. This requires the introduction of certain appropriately defined weight operators. The resulting better convergence of the temporal power series urges us to call the newly defined entities "extended space expectation values", even though they are constructed over certain weight operators and are somehow pseudo expectation values.
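The expansion at issue is the temporal Maclaurin series of the expectation value; in standard quantum-dynamics notation (a sketch, not copied from the paper) it reads

```latex
\langle \hat{O} \rangle(t)
  = \langle \psi(t) \,|\, \hat{O} \,|\, \psi(t) \rangle
  = \sum_{n=0}^{\infty} \frac{t^{n}}{n!}
    \left. \frac{d^{n}}{dt^{n}} \langle \hat{O} \rangle \right|_{t=0},
\qquad
\frac{d}{dt}\langle \hat{O} \rangle
  = \frac{i}{\hbar}\, \bigl\langle [\hat{H}, \hat{O}] \bigr\rangle
\quad (\hat{O}\ \text{time-independent}).
```

Expanding the repeated commutators produces matrix elements of the form ⟨ψ(0)|H^m O H^k|ψ(0)⟩, i.e., inner products over the images H^k|ψ(0)⟩, which is precisely the set of vectors spanning the extended space the abstract describes.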
Long-term persistence of solar activity
NASA Technical Reports Server (NTRS)
Ruzmaikin, Alexander; Feynman, Joan; Robinson, Paul
1994-01-01
We examine the question of whether or not the non-periodic variations in solar activity are caused by a white-noise, random process. The Hurst exponent, which characterizes the persistence of a time series, is evaluated for the series of C-14 data for the time interval from about 6000 BC to 1950 AD. We find a constant Hurst exponent, suggesting that solar activity in the frequency range from 100 to 3000 years includes an important continuum component in addition to the well-known periodic variations. The value we calculate, H approximately 0.8, is significantly larger than the value of 0.5 that would correspond to variations produced by a white-noise process. This value is in good agreement with the results for the monthly sunspot data reported elsewhere, indicating that the physics that produces the continuum is a correlated random process and that it is the same type of process over a wide range of time interval lengths.
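A minimal rescaled-range (R/S) estimator of the Hurst exponent, of the kind used for such persistence analyses (a generic sketch, not the authors' code):

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Rescaled-range (R/S) estimate of the Hurst exponent of series x."""
    x = np.asarray(x, dtype=float)
    mean_rs = []
    for w in window_sizes:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations from the mean
            s = seg.std()
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        mean_rs.append(np.mean(rs))
    # H is the slope of log(R/S) against log(window size)
    H, _ = np.polyfit(np.log(window_sizes), np.log(mean_rs), 1)
    return H

print(hurst_rs(np.random.randn(4096)))  # white noise: expect H close to 0.5
```

A persistent series, like the C-14 record discussed above, yields H well above 0.5 (the authors report H ≈ 0.8).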
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series undergoes only small changes. This problem becomes especially severe when the time series is short. To solve it, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function that measures the similarity of two subsequences on a full range from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise, and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of complexity measurement, especially for short time series, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496
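The core idea, a graded similarity in place of sample entropy's 0/1 match, can be sketched as follows; a Gaussian kernel is assumed here for illustration, whereas the paper defines its own full-range similarity function. The multiscale version applies the same statistic to coarse-grained copies of the series.

```python
import numpy as np

def fuzzy_sampen(x, m=2, r=0.15):
    """Sample entropy with a graded similarity instead of the usual 0/1
    Chebyshev threshold (Gaussian kernel assumed for illustration)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def phi(k):
        emb = np.array([x[i:i + k] for i in range(len(x) - k)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev distance
        sim = np.exp(-(d / tol) ** 2)      # similarity in (0, 1] rather than {0, 1}
        np.fill_diagonal(sim, 0.0)         # exclude self-matches
        return sim.sum() / (len(emb) * (len(emb) - 1))
    return -np.log(phi(m + 1) / phi(m))

print(fuzzy_sampen(np.random.randn(500)))  # white noise: relatively high entropy
```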
Culqui, D R; Linares, C; Ortiz, C; Carmona, R; Díaz, J
2017-08-15
Few time series studies have analysed the short-term association between emergency hospital admissions due to Alzheimer's disease (AD) and environmental factors. The objective is to analyse the effect of heat waves, noise, and air pollutants on urgent hospital admissions due to AD in Madrid. A longitudinal ecological time series study was performed. The dependent variable was emergency AD hospital admissions in Madrid during the period 2001-2009. The independent variables were: daily mean concentrations (μg/m³) of air pollutants (PM2.5, PM10, O3, and NO2); maximum daily temperature (°C); and daily and nightly noise levels (dB(A)). Relative risk (RR) for an interquartile-range increment and attributable risk (AR) values were calculated through GLMs with a Poisson link. Our findings indicated that only PM2.5 concentrations at lag 2, with RR: 1.38 (95% CI: 1.15-1.65) and AR: 27.5% (95% CI: 13.0-39.4), and heat wave days at lag 3, with RR: 1.30 (95% CI: 1.12-1.52) and AR: 23.1% (95% CI: 10.7-34.2), were associated with AD hospital admissions. A reduction in AD patients' exposure to PM2.5 and special care of such patients during heat wave periods could decrease both emergency AD admissions and the related health care costs. Copyright © 2017 Elsevier B.V. All rights reserved.
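A minimal sketch of this kind of lagged Poisson GLM in Python with statsmodels; the file and column names are hypothetical, not from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily data: admissions counts plus exposure covariates.
df = pd.read_csv("madrid_daily.csv")
df["pm25_lag2"] = df["pm25"].shift(2)        # PM2.5 lagged two days
df["heat_lag3"] = df["heat_wave"].shift(3)   # heat-wave indicator lagged three days

fit = smf.glm("admissions ~ pm25_lag2 + heat_lag3",
              data=df.dropna(), family=sm.families.Poisson()).fit()

# RR for an interquartile-range increase in PM2.5, as reported in the study:
iqr = df["pm25"].quantile(0.75) - df["pm25"].quantile(0.25)
rr_iqr = np.exp(fit.params["pm25_lag2"] * iqr)
```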
Adding Resistances and Capacitances in Introductory Electricity
NASA Astrophysics Data System (ADS)
Efthimiou, C. J.; Llewellyn, R. A.
2005-09-01
All introductory physics textbooks, with or without calculus, cover the addition of both resistances and capacitances in series and in parallel as discrete summations. However, none includes problems that involve continuous versions of resistors in parallel or capacitors in series. This paper introduces a method for solving the continuous problems that is logical, straightforward, and within the mathematical preparation of students at the introductory level.
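The continuous versions simply replace the discrete sums with integrals: infinitesimal resistances add in series, and infinitesimal conductances add in parallel. Standard illustrations (not taken from the paper):

```latex
R_{\text{series}} = \int dR = \int_{a}^{b} \frac{\rho(x)\,dx}{A(x)},
\qquad
\frac{1}{R_{\text{parallel}}} = \int dG,
\qquad
\frac{1}{C_{\text{series}}} = \int_{a}^{b} \frac{dx}{\varepsilon(x)\,A}.
```

For example, radial conduction through a cylindrical shell of inner radius a, outer radius b, and length L gives R = ∫ ρ dr/(2πrL) = (ρ/2πL) ln(b/a), an integral well within introductory calculus.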
Can Value-Added Measures of Teacher Performance Be Trusted?
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2015-01-01
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…
ERIC Educational Resources Information Center
Parmelee, John H.; Perkins, Stephynie C.; Sayre, Judith J.
2007-01-01
This study uses a sequential transformative mixed methods research design to explain how political advertising fails to engage college students. Qualitative focus groups examined how college students interpret the value of political advertising to them, and a quantitative manifest content analysis concerning ad framing of more than 100 ads from…
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, normal ratio (NR), and NR weighted by correlations are the simple ones, whereas a multilayer-perceptron neural network and a multiple imputation strategy using Markov chain Monte Carlo based on expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique from nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analyses, the computational methods, particularly the EM-MCMC method, appear favorable for imputing meteorological time series across the different missingness periods, for both measures and both series studied, despite their computational cost. To conclude, using the EM-MCMC algorithm to impute missing values before conducting statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be recommended for evaluating the performance of missing data imputation, particularly with the computational methods, since it gives more precise results for meteorological time series.
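As an illustration of the simplest family mentioned above, here is a sketch of normal-ratio imputation in its generic textbook form (not the authors' implementation); the weighted variant supplies correlation-based weights:

```python
import numpy as np

def normal_ratio(neighbor_vals, neighbor_means, target_mean, weights=None):
    """Normal-ratio imputation: scale each neighbour's observation by the ratio
    of the target station's long-term mean to the neighbour's, then average
    (optionally weighted, e.g. by squared correlation)."""
    ratios = target_mean * np.asarray(neighbor_vals) / np.asarray(neighbor_means)
    return np.average(ratios, weights=weights)

# Toy example: three neighbouring stations with correlation-based weights.
print(normal_ratio([80.0, 95.0, 70.0], [100.0, 110.0, 90.0], 105.0,
                   weights=[0.9, 0.8, 0.6]))
```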
Flood frequency analysis - the challenge of using historical data
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn
2015-04-01
Estimates of high flood quantiles are needed for many applications; e.g., dam safety assessments are based on the 1000-year flood, whereas the dimensioning of important infrastructure requires estimates of the 200-year flood. Flood quantiles are estimated by fitting a parametric distribution to a dataset of high flows comprising either annual maximum values or peaks over a selected threshold. Since the record length is short compared to the desired flood quantile, the estimated flood magnitudes rest on a high degree of extrapolation: the longest time series available in Norway are around 120 years, so any estimate of a 1000-year flood requires extrapolation. One solution is to extend the temporal dimension of a data series by including information about historical floods that occurred before streamflow was systematically gauged. Such information may be flood marks or written documentation about flood events. The aim of this study was to evaluate the added value of historical flood data for at-site flood frequency estimation. The historical floods were included in two ways, assuming: (1) the size of (all) floods above a high threshold within a time interval is known; or (2) only the number of floods above a high threshold within a time interval is known. We used a Bayesian model formulation, with MCMC for model estimation. This estimation procedure allowed us to estimate the predictive uncertainty of flood quantiles (i.e., both sampling and parameter uncertainty are accounted for). We tested the methods using 123 years of systematic data from Bulken in western Norway. In 2014 the largest flood in the systematic record was observed. From written documentation and flood marks we had information on three severe floods in the 18th century, which likely exceeded the 2014 flood. We evaluated the added value in two ways. First, we used the 123-year streamflow series to investigate the effect of supplementing several shorter series with a limited number of known large flood events. Then we combined the three historical floods from the 18th century with the whole record and with subsets of the 123 years of systematic observations. In the latter case several challenges were identified: (i) transferring water levels to streamflows is difficult because of man-made changes in the river profile; (ii) the stationarity of the data may be questioned, since the three largest historical floods occurred during the "Little Ice Age" under climatic conditions different from today's.
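For case (1), a standard way to write the combined likelihood (one common formulation from the historical-flood literature, not necessarily the authors' exact model) treats the h historical years as censored below a perception threshold x_0, with the k exceedances y_j known; for case (2), only the count k enters, through a binomial term:

```latex
L(\theta)
  = \underbrace{\prod_{i=1}^{n} f(x_i \mid \theta)}_{\text{systematic record}}
    \;\times\;
    \underbrace{F(x_0 \mid \theta)^{\,h-k} \prod_{j=1}^{k} f(y_j \mid \theta)}_{\text{(1): magnitudes known}}
\quad\text{or}\quad
L(\theta)
  = \prod_{i=1}^{n} f(x_i \mid \theta)
    \;\times\;
    \binom{h}{k}\, F(x_0 \mid \theta)^{\,h-k}\,
    \bigl[\,1 - F(x_0 \mid \theta)\,\bigr]^{k},
```

where f and F are the density and distribution function of the fitted flood model, n is the length of the systematic record, and θ the parameter vector sampled by MCMC.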
NASA Astrophysics Data System (ADS)
Dobrovolný, Petr; Brázdil, Rudolf; Kotyza, Oldřich; Valášek, Hubert
2010-05-01
Series of temperature and precipitation indices (on an ordinal scale), based on the interpretation of various sources of documentary evidence (e.g., narrative written reports, visual daily weather records, personal correspondence, special prints, official economic records), are used as predictors in the reconstruction of mean seasonal temperatures and seasonal precipitation totals for the Czech Lands from A.D. 1500. Long instrumental measurements from 1771 (temperature) and 1805 (precipitation) are used as target values to calibrate and verify the documentary-based index series. The reconstruction is based on linear regression with variance and mean adjustments. The reconstructed series were compared with similar European documentary-based reconstructions as well as with reconstructions based on different natural proxies, and were analyzed with respect to trends on different time scales and the occurrence of extreme values. We discuss uncertainties typical of documentary evidence from historical archives. Although documentary reports on weather and climate cover all seasons, our reconstructions provide the best results for winter temperatures and summer precipitation; the explained variance for these seasons is comparable to other existing reconstructions for Central Europe.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-29
... of PHLX FOREX Options may be added consistent with the timing described above for new series of index... prior to the expiration date. Under proposed Rule 1006C, the closing settlement value for PHLX FOREX Options and for the FLEX PHLX FOREX Options on the currencies listed in the rule shall be the spot market...
ERIC Educational Resources Information Center
Gabriel, Rachael; Lester, Jessica Nina
2013-01-01
Background/Context: This paper illustrates how the media, particularly "The LA Times," entered the debate surrounding teacher evaluation, resulting in a storyline that shaped how the public perceives teacher effectiveness. With their series of articles in 2010, "The LA Times" entered the conversation about the place and value…
ERIC Educational Resources Information Center
Swail, Watson Scott
2004-01-01
Rarely do stakeholders ask about the effectiveness of outreach programs or whether they are an efficient use of tax dollars and philanthropic funds. As government budgets continue to be constrained and philanthropic investment gets more competitive, there is a growing acknowledgment of the need to look at the cost/benefit of these programs and…
ERIC Educational Resources Information Center
Meyer, Robert; Carl, Bradley; Cheng, Huiping Emily
2010-01-01
This report summarizes work conducted to date through the Senior Urban Education Research Fellowship (SUERF) awarded by the Council of the Great City Schools to the Value-Added Research Center (VARC) at the University of Wisconsin-Madison for work in the Milwaukee Public Schools (MPS). VARC has utilized its Fellowship award, entitled…
ERIC Educational Resources Information Center
Collins, Clarin
2014-01-01
This study examined the SAS Education Value-Added Assessment System (EVAAS®) in practice, as perceived and experienced by teachers in the Southwest School District (SSD). To evaluate teacher effectiveness, SSD is using SAS EVAAS® for high-stakes consequences more than any other district or state in the country. A mixed-method design including a…
Using Corporate-Based Methods To Assess Technical Communication Programs.
ERIC Educational Resources Information Center
Faber, Brenton; Bekins, Linn; Karis, Bill
2002-01-01
Investigates methods of program assessment used by corporate learning sites and profiles value added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classical time series, and predicting it is very valuable for the business. However, the storage volume series from a data center is always "dirty," containing noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before any future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; cubic spline interpolation and averaging are then used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experiment results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting the future volume value. PMID:28090205
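A minimal sketch of the two-stage idea (filter, then subsample and spline); the random-walk Kalman model, the weekly knot spacing, and the input file name are illustrative assumptions, not the paper's exact design:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_smooth(z, q=1e-4, r=1.0):
    """Scalar random-walk Kalman filter that suppresses noise and outliers
    and skips missing (NaN) samples."""
    x, p = np.nanmean(z), 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                        # predict step (state modeled as a random walk)
        if not np.isnan(zk):
            K = p / (p + r)           # Kalman gain
            x += K * (zk - x)         # update with the measurement
            p *= 1 - K
        out[k] = x
    return out

volume = np.loadtxt("storage_volume.txt")   # hypothetical raw daily series
clean = kalman_smooth(volume)
knots = np.arange(0, len(clean), 7)         # irregular subsampling of the filtered series
trend = CubicSpline(knots, clean[knots])(np.arange(len(clean)))
```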
VizieR Online Data Catalog: Evolution of solar irradiance during Holocene (Vieira+, 2011)
NASA Astrophysics Data System (ADS)
Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.
2011-05-01
This is a composite total solar irradiance (TSI) time series for 9495 BC to 2007 AD, constructed as described in Sect. 3.3 of the paper. Since TSI is the main external heat input into the Earth's climate system, a consistent record covering as long a period as possible is needed for climate models; this was our main motivation for constructing this composite TSI time series. To produce a representative time series, we divided the Holocene into four periods according to the data available for each period. Table 4 (see below) summarizes the periods considered and the models available for each. After the end of the Maunder Minimum we compute daily values, while prior to it we compute 10-year averages. For the period for which both solar disk magnetograms and continuum images are available (period 1) we employ the SATIRE-S reconstruction (Krivova et al. 2003A&A...399L...1K; Wenzler et al. 2006A&A...460..583W). The SATIRE-T reconstruction (Krivova et al. 2010JGRA..11512112K) is used from the beginning of the Maunder Minimum (approximately 1640 AD) to 1977 AD. Prior to 1640 AD, reconstructions are based on cosmogenic isotopes (this paper). Different models of the Earth's geomagnetic field are available before and after approximately 5000 BC; we therefore treat periods 3 and 4 (before and after 5000 BC) separately. Further details can be found in the paper. We emphasize that the reconstructions based on different proxies have different time resolutions. (1 data file).
Development of the general interpolants method for the CYBER 200 series of supercomputers
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.
1988-01-01
The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting-chemistry model and an implicit finite difference scheme are also included.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images, based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain, is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by the NSST. Then, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from singular value decomposition over local areas of each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is used to decide the iteration number adaptively. A series of images from diverse scenes is used in fusion experiments, and the fusion results are evaluated subjectively and objectively. Both evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
Anding, Ralf; Rosier, Peter; Smith, Phillip; Gammie, Andrew; Giarenis, Ilias; Rantell, Angela; Thiruchelvam, Nikesh; Arlandis, Salvador; Cardozo, Linda
2016-02-01
To debate and evaluate the evidence base regarding the added value of video to urodynamics in adults and to define research questions. At the ICI-RS Meeting 2014, a Think Tank analyzed the current guidelines recommending video urodynamics (VUD) and performed a literature search to determine the level of evidence for the additional value of imaging in the urodynamic assessment of both neurogenic and non-neurogenic lower urinary tract dysfunction. Current guidelines do not specify the added value of imaging with urodynamics; recommendations are based on single-center series and expert opinion. Standard imaging protocols are not available, and evidence regarding the balance between the number and timing of images, patient positioning, and exposure time on the one hand and diagnosis on the other is lacking. On the basis of expert consensus, VUD is relevant in the follow-up of patients with spinal dysraphism. Evidence for the value of VUD in non-neurogenic lower urinary tract dysfunction is sparse. There is some evidence that VUD is not necessary in uncomplicated female SUI, but expert opinion suggests it might improve the evaluation of patients with recurrent SUI. There is only low-level evidence for the addition of video to urodynamics. The ICI-RS Think Tank encourages better reporting of imaging results and systematic reporting of X-ray doses. Specific research hypotheses regarding the added value of imaging are recommended. The panel suggests developing standards for technically optimal VUD that is practically achievable with machines on the market. © 2016 Wiley Periodicals, Inc.
"Value Added" Gauge of Teaching Probed
ERIC Educational Resources Information Center
Viadero, Debra
2009-01-01
A new study by a public and labor economist suggests that "value added" methods for determining the effectiveness of classroom teachers are built on some shaky assumptions and may be misleading. The study, due to be published in February in the "Quarterly Journal of Economics," is the first of a handful of papers now in the…
ERIC Educational Resources Information Center
Perry, Thomas
2017-01-01
Value-added (VA) measures are currently the predominant approach used to compare the effectiveness of schools. Recent educational effectiveness research, however, has developed alternative approaches including the regression discontinuity (RD) design, which also allows estimation of absolute school effects. Initial research suggests RD is a viable…
Can Value-Added Measures of Teacher Performance Be Trusted? Working Paper #18
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2012-01-01
We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios,…
Methods for conversion of carbohydrates in ionic liquids to value-added chemicals
Zhao, Haibo [The Woodlands, TX; Holladay, Johnathan E [Kennewick, WA; Zhang, Zongchao C [Norwood, NJ
2011-05-10
Methods are described for converting carbohydrates (e.g., monosaccharides, disaccharides, and polysaccharides) in ionic liquids to value-added chemicals, including furans, useful as chemical intermediates and/or feedstocks. Fructose is converted to 5-hydroxymethylfurfural (HMF) in the presence of metal halide and acid catalysts. Glucose is effectively converted to HMF in the presence of chromium chloride catalysts. Yields of up to about 70% are achieved with low levels of impurities such as levulinic acid.
METHOD OF MAKING ALLOYS OF SECOND RARE EARTH SERIES METALS
Baker, R.D.; Hayward, B.R.
1963-01-01
This invention relates to a process for alloying the second rare earth series metals with Mo, Nb, or Zr. A halide of the rare earth metal is mixed with about 1 to 20 at.% of an oxide of Mo, Nb, or Zr. Iodine and an alkali or alkaline earth metal are added, and the resulting mixture is heated in an inert atmosphere to 350 deg C. (AEC)
Network structure of multivariate time series.
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-10-21
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on mapping a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
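As a toy illustration of the series-to-network mapping, the sketch below builds one horizontal visibility graph per component of a multivariate series and treats the graphs as layers of a multiplex network; the paper's precise construction and descriptors should be taken from the original.

```python
import numpy as np
import networkx as nx

def horizontal_visibility_graph(x):
    """Map a scalar series to a graph: nodes are time points; i and j are
    linked if every sample strictly between them lies below min(x[i], x[j])."""
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                g.add_edge(i, j)
    return g

series = np.random.randn(3, 200)          # toy data: 3 components, 200 samples
layers = [horizontal_visibility_graph(s) for s in series]
# A simple structural descriptor per layer, e.g. the average degree:
print([2 * g.number_of_edges() / g.number_of_nodes() for g in layers])
```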
77 FR 6685 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-09
... proposed AD reduces compliance times for Model 767-400ER series airplanes. In addition, this proposed AD...). This proposed AD would reduce the compliance times for Model 767-400ER series airplanes. In addition... airplanes, the existing AD also requires a one- time inspection to determine if a tool runout option has...
Xing, Jian; Burkom, Howard; Tokars, Jerome
2011-12-01
Automated surveillance systems require statistical methods to recognize increases in visit counts that might indicate an outbreak. In prior work we presented methods to enhance the sensitivity of C2, a commonly used time series method. In this study, we compared the enhanced C2 method with five regression models. We used emergency department chief complaint data from the US CDC BioSense surveillance system, aggregated by city (206 hospitals in 16 cities) during 5/2008-4/2009. Data for six syndromes (asthma, gastrointestinal, nausea and vomiting, rash, respiratory, and influenza-like illness) were used and stratified by mean count (1-19, 20-49, ≥50 per day) into 14 syndrome-count categories. We compared the sensitivity for detecting single-day, artificially added increases in syndrome counts. Four modifications of the C2 time series method and five regression models (two linear and three Poisson) were tested. A constant alert rate of 1% was used for all methods. Among the regression models tested, a Poisson model controlling for the logarithm of total visits (i.e., visits both meeting and not meeting a syndrome definition), day of week, and 14-day time period was best. In six of the 14 syndrome-count categories, the time series and regression methods produced approximately the same sensitivity (<5% difference); in six categories, the regression method had higher sensitivity (6-14% improvement); and in two categories, the time series method had higher sensitivity. When automated data are aggregated to the city level, a Poisson regression model that controls for total visits produces the best overall sensitivity for detecting artificially added visit counts. This improvement was achieved without increasing the alert rate, which was held constant at 1% for all methods. These findings will improve our ability to detect outbreaks in automated surveillance system data. Published by Elsevier Inc.
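For reference, one common formulation of the baseline C2 detector (a 7-day baseline separated from the test day by a 2-day guard band); this is a generic sketch, not the enhanced variants tested in the study:

```python
import numpy as np

def c2_alerts(counts, baseline=7, lag=2, threshold=3.0):
    """Flag day t when its count exceeds the mean of the baseline window
    (ending `lag` days earlier) by `threshold` standard deviations."""
    counts = np.asarray(counts, dtype=float)
    alerts = []
    for t in range(baseline + lag, len(counts)):
        ref = counts[t - lag - baseline : t - lag]
        mu = ref.mean()
        sd = max(ref.std(ddof=1), 0.2)   # floor the SD to avoid divide-by-zero
        if (counts[t] - mu) / sd > threshold:
            alerts.append(t)
    return alerts
```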
de Carvalho, Wellington Roberto Gomes; de Moraes, Anderson Marques; Roman, Everton Paulo; Santos, Keila Donassolo; Medaets, Pedro Augusto Rodrigues; Veiga-Junior, Nélio Neves; Coelho, Adrielle Caroline Lace de Moraes; Krahenbühl, Tathyane; Sewaybricker, Leticia Esposito; Barros-Filho, Antonio de Azevedo; Morcillo, Andre Moreno; Guerra-Júnior, Gil
2015-01-01
Aims: To establish normative data for phalangeal quantitative ultrasound (QUS) measures in Brazilian students. Methods: The sample comprised 6870 students (3688 females and 3182 males), aged 6 to 17 years. The bone status parameter, amplitude-dependent speed of sound (AD-SoS), was assessed by QUS of the phalanges using DBM Sonic BP (IGEA, Carpi, Italy) equipment. Skin color was obtained by self-evaluation. The LMS method was used to derive smoothed percentile reference charts for AD-SoS according to sex, age, height, and weight, and to generate the L, M, and S parameters. Results: Girls showed higher AD-SoS values than boys in the age groups 7-16 (p<0.001). There were no differences in AD-SoS Z-scores according to skin color. In both sexes, the obese group showed lower AD-SoS Z-scores than subjects classified as thin or of normal weight. Age (r² = 0.48) and height (r² = 0.35) were independent predictors of AD-SoS in females and males, respectively. Conclusion: AD-SoS values in Brazilian children and adolescents were influenced by sex, age, and weight status, but not by skin color. Our normative data could be used for monitoring AD-SoS in children and adolescents aged 6-17 years. PMID:26043082
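The LMS machinery referenced here converts a measured value y into a Z-score via Cole's standard formula, with L the Box-Cox power, M the median, and S the coefficient of variation at a given age (the general method, not a formula specific to this study):

```latex
Z = \frac{(y/M)^{L} - 1}{L\,S} \quad (L \neq 0),
\qquad
Z = \frac{\ln(y/M)}{S} \quad (L = 0),
```

so the smoothed L(age), M(age), and S(age) curves fully define the percentile reference charts.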
Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory
2016-05-12
Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes… Keywords: mathematical statistics; time series; Markov chains; random processes.
Diagnostic value of immunoglobulin κ light chain gene rearrangement analysis in B-cell lymphomas.
Kokovic, Ira; Jezersek Novakovic, Barbara; Novakovic, Srdjan
2015-03-01
Analysis of the immunoglobulin κ light chain (IGK) gene is an alternative method for B-cell clonality assessment in the diagnosis of mature B-cell proliferations in which the detection of clonal immunoglobulin heavy chain (IGH) gene rearrangements fails. The aim of the present study was to evaluate the added value of the standardized BIOMED-2 assay for the detection of clonal IGK gene rearrangements in the diagnostic setting of suspected B-cell lymphomas. For this purpose, 92 specimens from 80 patients with a final diagnosis of mature B-cell lymphoma (37 specimens), mature T-cell lymphoma (26 specimens), or reactive lymphoid proliferation (29 specimens) were analyzed for B-cell clonality using the BIOMED-2 IGH and IGK gene clonality assays. The determined sensitivity of the IGK assay was 67.6%, while that of the IGH assay was 75.7%; the sensitivity of the combined IGH+IGK assay was 81.1%. The determined specificity of the IGK assay was 96.2% in the group of T-cell lymphomas and 96.6% in the group of reactive lesions. The determined specificity of the IGH assay was 84.6% in the group of T-cell lymphomas and 86.2% in the group of reactive lesions. A comparison of the GeneScan (GS) and heteroduplex pretreatment-polyacrylamide gel electrophoresis (HD-PAGE) methods for the analysis of IGK gene rearrangements showed a higher efficacy of GS analysis in a series of 27 B-cell lymphomas analyzed by both methods. In the present study, we demonstrated that applying the combined IGH+IGK clonality assay increased the overall detection rate of B-cell clonality by 5.4%. Thus, we confirmed the added value of the standardized BIOMED-2 IGK assay for the assessment of B-cell clonality in suspected B-cell lymphomas with inconclusive clinical and cyto/histological diagnoses.
Application of external axis in thermal spraying
NASA Astrophysics Data System (ADS)
Gao, Guoyou; Wang, Wei; Chen, Tao; Hui, Chun
2018-05-01
Industrial robots are widely used nowadays in thermal spraying; much manual work can be replaced owing to the efficiency, safety, precision, and repeatability of industrial robots. Despite these conveniences, robots have some inherent limitations arising from their six-axis mechanical linkages. When a robot performs a series of production stages, it can be unable to move to the next one because one of its axes has reached its limit. For this reason, an external axis is added to the robot system to extend the reachable workspace. This paper concerns the application of an external axis and different methods of programming a robot with a work-holding external axis in a virtual environment. Experiments demonstrate that the resulting coating layer on a regular workpiece is uniform.
Wallace, Jason A.; Shen, Jana K.
2012-01-01
Recent development of constant pH molecular dynamics (CpHMD) methods has offered promise for adding pH-stat in molecular dynamics simulations. However, until now the working pH molecular dynamics (pHMD) implementations are dependent in part or whole on implicit-solvent models. Here we show that proper treatment of long-range electrostatics and maintaining charge neutrality of the system are critical for extending the continuous pHMD framework to the all-atom representation. The former is achieved here by adding forces to titration coordinates due to long-range electrostatics based on the generalized reaction field method, while the latter is made possible by a charge-leveling technique that couples proton titration with simultaneous ionization or neutralization of a co-ion in solution. We test the new method using the pH-replica-exchange CpHMD simulations of a series of aliphatic dicarboxylic acids with varying carbon chain length. The average absolute deviation from the experimental pKa values is merely 0.18 units. The results show that accounting for the forces due to extended electrostatics removes the large random noise in propagating titration coordinates, while maintaining charge neutrality of the system improves the accuracy in the calculated electrostatic interaction between ionizable sites. Thus, we believe that the way is paved for realizing pH-controlled all-atom molecular dynamics in the near future. PMID:23163362
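In CpHMD studies of this kind, site pKa values are typically extracted by fitting the simulated deprotonated fraction S at each pH to a generalized Henderson-Hasselbalch (Hill) curve; the standard relation (not quoted from the abstract) is

```latex
S_{\text{unprot}}(\mathrm{pH}) = \frac{1}{1 + 10^{\,n\,(\mathrm{p}K_a - \mathrm{pH})}},
```

where n is the Hill coefficient, and the pKa is read off as the pH at which the site is half-deprotonated, which is how the calculated values are compared against experiment.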
Pandit, Jaideep J; Tavare, Aniket
2011-07-01
It is important that a surgical list be planned to utilise as much of the scheduled time as possible without over-running, because over-runs can lead to cancellation of operations. We wished to assess whether, theoretically, the known durations of individual operations could be used quantitatively to predict the likely duration of an operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning matched the scheduled operating list times for 153 consecutive historical lists. Using receiver operating characteristic curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same lists. This method uses a simple formula: the sum of the individual operation times and a pooled standard deviation of those times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run proved useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% under-booked; the formula predicted the correct outcome for 84% of them. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created in an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
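The formula lends itself to a few lines of code: for independent operations the mean durations add, the variances add, and the over-run probability follows from a normal approximation. This is a sketch of the idea, not the authors' spreadsheet:

```python
from math import sqrt
from scipy.stats import norm

def p_overrun(mean_durations, sds, scheduled_minutes):
    """Probability that a list of independent operations over-runs its slot,
    assuming the total duration is approximately normally distributed."""
    total_mean = sum(mean_durations)
    total_sd = sqrt(sum(s ** 2 for s in sds))   # variances add for independent cases
    return 1 - norm.cdf(scheduled_minutes, loc=total_mean, scale=total_sd)

# e.g. three cases of 90, 120 and 150 min (SDs 20, 30, 35) in a 480-min list:
print(p_overrun([90, 120, 150], [20, 30, 35], 480))   # roughly 0.8% chance of over-run
```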
Kumar, Amit; Pintus, Francesca; Di Petrillo, Amalia; Medda, Rosaria; Caria, Paola; Matos, Maria João; Viña, Dolores; Pieroni, Enrico; Delogu, Francesco; Era, Benedetta; Delogu, Giovanna L; Fais, Antonella
2018-03-13
Alzheimer's disease (AD) is a neurodegenerative disorder representing the leading cause of dementia, affecting nearly 44 million people worldwide. AD is characterized by a progressive decline in acetylcholine levels in the cholinergic systems, which results in severe memory loss and cognitive impairments. Expression levels and activity of the butyrylcholinesterase (BChE) enzyme increase significantly in the late stages of AD, making it a viable drug target. A series of hydroxylated 2-phenylbenzofuran compounds were designed and synthesized, and their inhibitory activities toward acetylcholinesterase (AChE) and BChE were evaluated. Two compounds (15 and 17) displayed higher inhibitory activity towards BChE, with IC50 values of 6.23 μM and 3.57 μM, and good antioxidant activity, with EC50 values of 14.9 μM and 16.7 μM, respectively. The same compounds further exhibited selective inhibitory activity against BChE over AChE. Computational studies were used to compare protein-binding pockets and evaluate the interaction fingerprints of the compounds. Molecular simulations showed a conserved protein-residue interaction network between the compounds, resulting in similar interaction energies. The combination of biochemical and computational approaches could thus provide rational guidelines for further structural modification of these hydroxy-benzofuran derivatives as future drugs for the treatment of AD.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-07
.... ALF502 series and LF507 series turbofan engines with certain fuel manifold assemblies installed. That AD... of certain part number (P/N) fuel manifold assemblies for cracks, and replacement of cracked fuel manifolds with serviceable manifolds. This AD continues to require inspecting those fuel manifolds for...
ERIC Educational Resources Information Center
Bassiri, Dina
2015-01-01
The 2001 reauthorization of the Elementary and Secondary Education Act (ESEA) known as No Child Left Behind and more recent federal initiatives such as Race to the Top and the ESEA flexibility waiver have brought student growth to the forefront of education reform for assessing school and teacher effectiveness. This study examined growth…
Rocky Mountain Research Station USDA Forest Service
2007-01-01
Large fires can result in a series of disasters for individuals and communities in the wildland-urban interface. They create significant disruptions to ongoing social processes, result in large financial losses, and lead to expensive restoration activities. By being aware of the impacts of wildland fire on local residents, fire managers can bring added value to them...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, J; Zhang, Q; Tilp, A
Significantly improved returns on aerosol chemistry data can be achieved through the development of a value-added product (VAP) for deriving organic aerosol (OA) components, called Organic Aerosol Components (OACOMP). OACOMP is primarily based on multivariate analysis of the measured organic mass spectral matrix. Its key outputs are the concentration time series and the mass spectra of OA factors associated with distinct sources, formation and evolution processes, and physicochemical properties.
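Multivariate factor analysis of a mass-spectral matrix can be illustrated with a non-negative factorization; the sketch below uses scikit-learn's NMF as an illustrative stand-in for OACOMP's actual solver, with a hypothetical input file:

```python
import numpy as np
from sklearn.decomposition import NMF

# X assumed: a (time x m/z) matrix of organic mass spectra, non-negative.
X = np.load("oa_spectra_matrix.npy")     # hypothetical input
model = NMF(n_components=3, init="nndsvd", max_iter=500)
factor_ts = model.fit_transform(X)       # concentration time series per OA factor
factor_spectra = model.components_       # characteristic mass spectrum per factor
```

The two outputs mirror exactly what the VAP described above reports: a time series and a spectrum for each OA factor.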
ERIC Educational Resources Information Center
Pennings, Helena J. M.
2017-01-01
In the present study, complex dynamic systems theory and interpersonal theory are combined to describe the teacher-student interactions of two teachers with different interpersonal styles. The aim was to show and explain the added value of looking at different steps in the analysis of behavioral time-series data (i.e., observations of teacher and…
NASA Astrophysics Data System (ADS)
Sembiring, N.; Nasution, A. H.
2018-02-01
Corrective maintenance, i.e., replacing or repairing a machine component after the machine breaks down, is common practice in manufacturing companies. It forces the production process to stop: production time decreases while the maintenance team replaces or repairs the damaged component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine in a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability Engineering & Maintenance Value Stream Mapping is used as the method and tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.
ERIC Educational Resources Information Center
Kennedy, Kate; Peters, Mary; Thomas, Mike
2012-01-01
Value-added analysis is the most robust, statistically significant method available for helping educators quantify student progress over time. This powerful tool also reveals tangible strategies for improving instruction. Built around the work of Battelle for Kids, this book provides a field-tested continuous improvement model for using…
NASA Astrophysics Data System (ADS)
Agustinus, E. T. S.
2018-02-01
Indonesia's position on the Ring of Fire makes it rich in mineral resources. Nevertheless, in the past the exploitation of Indonesian mineral resources was uncontrolled, resulting in environmental degradation and marginal reserves. Excessive exploitation of mineral resources is very detrimental to the state. Learning from this, the management and utilization of Indonesia's mineral resources must follow good mining practice. The problem is how to utilize marginal mineral reserves effectively and efficiently. Utilization of marginal reserves requires new technologies and processing methods, because the old processing methods are inadequate. This paper presents results of the Multi Blending Technology (MBT) method. The underlying concept is not extraction or refinement but processing through the formulation of raw materials, adding additives to produce new, functional materials. It is therefore important to summarize the application of this method in book form, so that information otherwise spread across multiple print media becomes focused and optimized. The book is expected to serve as a reference for stakeholders, providing added value to environmentally marginal reserves in Indonesia. The conclusions are that the MBT method can be used as a strategy to add value effectively and efficiently to marginal mineral reserves, and that it has been applied to forsterite, attapulgite synthesis, zeoceramic, GEM, MPMO, SMAC, and geomaterial.
A modified method of 3D-SSP analysis for amyloid PET imaging using [¹¹C]BF-227.
Kaneta, Tomohiro; Okamura, Nobuyuki; Minoshima, Satoshi; Furukawa, Katsutoshi; Tashiro, Manabu; Furumoto, Shozo; Iwata, Ren; Fukuda, Hiroshi; Takahashi, Shoki; Yanai, Kazuhiko; Kudo, Yukitsuka; Arai, Hiroyuki
2011-12-01
Three-dimensional stereotactic surface projection (3D-SSP) analyses have been widely used in dementia imaging studies. However, 3D-SSP sometimes shows paradoxical results on amyloid positron emission tomography (PET) analyses. This is thought to be caused by errors in anatomical standardization (AS) based on an ¹⁸F-fluorodeoxyglucose (FDG) template. We developed a new method of 3D-SSP analysis for amyloid PET imaging, and used it to analyze ¹¹C-labeled 2-(2-[2-dimethylaminothiazol-5-yl]ethenyl)-6-(2-[fluoro]ethoxy)benzoxazole (BF-227) PET images of subjects with mild cognitive impairment (MCI) and Alzheimer's disease (AD). The subjects were 20 with MCI, 19 patients with AD, and 17 healthy controls. Twelve subjects with MCI were followed up for 3 years or more, and conversion to AD was seen in 6 cases. All subjects underwent PET with both FDG and BF-227. For AS and 3D-SSP analyses of PET data, Neurostat (University of Washington, WA, USA) was used. Method 1 involves AS for BF-227 images using an FDG template. In this study, we developed a new method (Method 2) for AS: first, an FDG image was subjected to AS using an FDG template. Then, the BF-227 image of the same patient was registered to the FDG image, and AS was performed using the transformation parameters calculated for AS of the corresponding FDG images. Regional values were normalized by the average value obtained at the cerebellum, and values were calculated for the frontal, parietal, temporal, and occipital lobes. For statistical comparison of the 3 groups, we applied one-way analysis of variance followed by the Bonferroni post hoc test. For statistical comparison between converters and non-converters, the t test was applied. Statistical significance was defined as p < 0.05. Among the 56 cases we studied, Method 1 demonstrated slight distortions after AS of the image in 16 cases and heavy distortions in 4 cases, in which the distortions were not observed with Method 2. Both methods demonstrated that the values in AD and MCI patients were significantly higher than those in the controls in the parietal, temporal, and occipital lobes. However, only Method 2 showed significant differences in the frontal lobes. In addition, Method 2 could demonstrate a significantly higher value in MCI-to-AD converters in the parietal and frontal lobes. Method 2 corrects AS errors that often occur when using Method 1, and has made appropriate 3D-SSP analysis of amyloid PET imaging possible. This new method of 3D-SSP analysis for BF-227 PET could prove useful for detecting differences between normal groups and AD and MCI groups, and between converters and non-converters.
77 FR 67267 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
... (AD) for certain Bombardier, Inc. Model CL-600-2C10 (Regional Jet Series 700, 701, & 702) airplanes, Model CL-600-2D15 (Regional Jet Series 705) airplanes, Model CL-600-2D24 (Regional Jet Series 900) airplanes, and Model CL- 600-2E25 (Regional Jet Series 1000) airplanes. This AD was prompted by a report...
ERIC Educational Resources Information Center
Coryn, Chris L. S.; Schroter, Daniela C.; Hanssen, Carl E.
2009-01-01
Brinkerhoff's Success Case Method (SCM) was developed with the specific purpose of assessing the impact of organizational interventions (e.g., training and coaching) on business goals by analyzing extreme groups using case study techniques and storytelling. As an efficient and cost-effective method of evaluative inquiry, SCM is attractive in other…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-03
... Airworthiness Directives; Rolls-Royce plc (RR) RB211-524 Series and RB211 Trent 500, 700, and 800 Series... adding the following new AD: 2011-09-07 Rolls-Royce plc (RR): Amendment 39-16669. Docket No. FAA- 2010... identified in this AD, contact Rolls-Royce plc, P.O. Box 31, Derby, DE24 8BJ, United Kingdom; phone: 011 44...
LEACHING OF URANIUM ORES USING ALKALINE CARBONATES AND BICARBONATES AT ATMOSPHERIC PRESSURE
Thunaes, A.; Brown, E.A.; Rabbits, A.T.; Simard, R.; Herbst, H.J.
1961-07-18
A method of leaching uranium ores containing sulfides is described. The method consists of adding a leach solution containing alkaline carbonate and alkaline bicarbonate to the ore to form a slurry, passing the slurry through a series of agitators, passing an oxygen-containing gas through the slurry in the last agitator in the series, passing the same gas, enriched with carbon dioxide formed by the decomposition of bicarbonates in the slurry, through the penultimate agitator, and in the same manner passing the same gas, increasingly enriched with carbon dioxide, through the other agitators in the series. The conditions of agitation are such that the extraction of the uranium content will be substantially complete before the slurry reaches the last agitator.
Yu, Hwa-Lung; Lin, Yuan-Chien; Kuo, Yi-Ming
2015-09-01
Understanding the temporal dynamics and interactions of particulate matter (PM) concentration and composition is important for air quality control. This paper applied a dynamic factor analysis (DFA) method to reveal the underlying mechanisms of nonstationary variation in twelve ambient aerosol and gaseous pollutant concentrations, and their associations with meteorological factors. This approach accounts for the uncertainties and temporal dependences of time series data. The common trends of the yearlong record and of three selected diurnal variations were obtained to characterize the dominant processes occurring in general and in specific scenarios in Taipei during 2009 (i.e., during Asian dust storm (ADS) events, rainfall, and normal conditions). The results revealed two distinct yearlong NOx transformation processes, and demonstrated that traffic emissions and photochemical reactions both critically influence diurnal variation, depending upon meteorological conditions. During an ADS event, transboundary transport and distinct weather conditions both influenced the temporal pattern of the identified common trends. This study shows that the DFA method can effectively extract meaningful latent processes from time series data and provide insight into the dominant associations and interactions in complex air pollution processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
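A dynamic factor model of this kind can be sketched with statsmodels; the file name, the number of factors, and the standardization step are illustrative assumptions, not the study's configuration:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file: daily pollutant and meteorological series for Taipei, 2009.
df = pd.read_csv("taipei_2009.csv", index_col=0, parse_dates=True)
endog = (df - df.mean()) / df.std()     # DFA is conventionally run on standardized series

model = sm.tsa.DynamicFactor(endog, k_factors=2, factor_order=1)
res = model.fit(maxiter=1000, disp=False)

common_trends = res.factors.filtered    # estimated latent common trends
print(res.summary())                    # the loadings show each series' association
```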
Rivera, Diego; Lillo, Mario; Granda, Stalin
2014-12-01
The concept of time stability has been widely used in the design and assessment of soil moisture monitoring networks, as well as in hydrological studies, because it is a technique for identifying particular locations that represent the field-mean value of soil moisture. In this work, we assess how time stability calculations change as new information is added, and how they behave over shorter periods, subsampled from the original time series, containing different amounts of precipitation. To do so, we defined two experiments to explore time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points, each equipped with four sensors, in a 2-ha blueberry field) show that as information is added to the time series, the location of the most stable point (MSP) changes, and that over moving 21-day windows most of the variability in soil water content is associated with both the amount and the intensity of rainfall. The change of the MSP over each window depends on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This must be taken into account when water management depends on soil water content values from fixed locations.
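For readers unfamiliar with time-stability analysis, here is a small sketch of the usual mean-relative-difference calculation and of the windowing experiment, on synthetic data; the exact stability criterion used by the authors may differ (combining the mean and the spread of the relative differences is one common choice).

```python
import numpy as np

def most_stable_point(theta):
    """theta: (n_times, n_locations) soil-moisture matrix.
    Returns the index of the most time-stable location via mean relative
    differences (a Vachaud-style analysis, as commonly used in such studies)."""
    field_mean = theta.mean(axis=1, keepdims=True)       # spatial mean per time step
    rel_diff = (theta - field_mean) / field_mean         # relative difference d_ij
    mrd = rel_diff.mean(axis=0)                          # mean relative difference
    srd = rel_diff.std(axis=0)                           # its spread over time
    # "most stable point": small |MRD| (representative) and small spread
    return np.argmin(np.abs(mrd) + srd), mrd, srd

# Windowing experiment in the spirit of the paper's second design: recompute
# the MSP over moving 21-day windows (hypothetical hourly data, 24*21 samples).
theta = np.random.default_rng(1).uniform(0.15, 0.35, size=(2000, 7))
win = 24 * 21
msp_per_window = [most_stable_point(theta[s:s + win])[0]
                  for s in range(0, theta.shape[0] - win, win)]
print(msp_per_window)
```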
ERIC Educational Resources Information Center
Harris, Douglas N.; Ingle, William K.; Rutledge, Stacey A.
2014-01-01
Policymakers are revolutionizing teacher evaluation by attaching greater stakes to student test scores and observation-based teacher effectiveness measures, but relatively little is known about why they often differ so much. Quantitative analysis of thirty schools suggests that teacher value-added measures and informal principal evaluations are…
ERIC Educational Resources Information Center
Nakamura, Yugo
2013-01-01
Value-added models (VAMs) have received considerable attention as a tool to transform our public education system. However, because VAMs are studied by researchers from a broad range of academic disciplines who remain divided over the best methods of analyzing the models, stakeholders without an extensive statistical background have been excluded…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... Airworthiness Directives; British Aerospace Regional Aircraft Model Jetstream Series 3101 and Jetstream Model... available in the AD docket shortly after receipt. FOR FURTHER INFORMATION CONTACT: Taylor Martin, Aerospace... AD docket. Relevant Service Information BAE Systems has issued British Aerospace Jetstream Series...
78 FR 47529 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... (AD) for certain Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100 & 440) airplanes. This AD... To Shorten Compliance Time The Airline Pilots Association International stated it supports the NPRM.... (c) Applicability This AD applies to Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100...
ERIC Educational Resources Information Center
Bassiri, Dina
2016-01-01
One outcome of the implementation of the No Child Left Behind Act of 2001 and its call for better accountability in public schools across the nation has been the use of student assessment data in measuring schools' effectiveness. In general, inferences about schools' effectiveness depend on the type of statistical model used to link student assessment…
Cullinane Thomas, Catherine; Huber, Christopher; Koontz, Lynne
2015-01-01
New this year, results from the Visitor Spending Effects report series are available online via an interactive tool. Users can explore current year visitor spending, jobs, labor income, value added, and output effects by sector for national, state, and local economies. This interactive tool is available via the NPS Social Science Program webpage at http://www.nature.nps.gov/socialscience/economics.cfm.
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Jancso, Leonhardt M.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; Peter, Thomas; Davison, Anthony C.
2010-05-01
In this study we analyze the frequency distribution of extreme events in low and high total ozone (termed ELOs and EHOs) for 5 long-term stations in the northern mid-latitudes in Europe (Belsk, Poland; Hradec Kralove, Czech Republic; Hohenpeissenberg and Potsdam, Germany; and Uccle, Belgium). Further, the influence of these extreme events on annual and seasonal mean values and trends is analysed. The applied method follows the new "ozone extremes concept", which is based on tools from extreme value theory [Coles, 2001; Ribatet, 2007], recently developed by Rieder et al. [2010a, b]. Mathematically, the decisive feature of the extremes concept is the Generalized Pareto Distribution (GPD). In this analysis the long-term trends needed to be removed first, in contrast to the treatment of Rieder et al. [2010a, b], in which the Arosa time series was analysed, covering many decades of measurements in the anthropogenically undisturbed stratosphere. In contrast to previous studies focusing only on so-called ozone mini-holes and mini-highs, the "ozone extremes concept" provides a statistical description of the tails of total ozone distributions (i.e., extreme low and high values). It is shown that this concept is not only an appropriate method to describe the frequency and distribution of extreme events; it also provides new information on time series properties and internal variability. Furthermore, it allows detection of fingerprints of physical (e.g., El Niño, NAO) and chemical (e.g., polar vortex ozone loss) features in the Earth's atmosphere, as well as major volcanic eruptions (e.g., El Chichón, Mt. Pinatubo). It is shown that mean values and trends in total ozone are strongly influenced by extreme events. Trend calculations (for the period 1970-1990) are performed for the entire as well as the extremes-removed time series. The results after excluding extremes show that annual trends are most reduced at Hradec Kralove (by about a factor of 3), followed by Potsdam (a factor of 2.5), and Hohenpeissenberg and Belsk (both by about a factor of 2). In general the reduction in trend is strongest during winter and spring. At all stations the influence of ELOs on observed trends is larger than that of EHOs. From the 1990s on, especially, ELOs dominate the picture, as only a relatively small fraction of EHOs can be observed in the records (due to the strong influence of the Mt. Pinatubo eruption and polar vortex ozone loss contributions). Additionally, it is shown that the number of observed mini-holes can be estimated with high accuracy by the GPD model. Overall the results of this work show that extreme events play a major role in total ozone, and the "ozone extremes concept" provides deeper insight into the influence of chemical and physical features on column ozone. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD.
Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD.
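A minimal sketch of the peaks-over-threshold/GPD machinery at the heart of the "ozone extremes concept", using scipy on synthetic column-ozone values; the thresholds, units, and return-level choice are illustrative, and a real analysis would first detrend and deseasonalize the record as described above.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a (detrended, deseasonalized) daily ozone series.
rng = np.random.default_rng(42)
ozone = rng.normal(330.0, 30.0, size=10_000)        # Dobson units, hypothetical

threshold = np.quantile(ozone, 0.95)                # EHOs: upper-tail exceedances
exceedances = ozone[ozone > threshold] - threshold

# Fit a Generalized Pareto Distribution to the exceedances
# (location fixed at 0 by construction of the exceedances).
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)
print(f"xi = {shape:.3f}, sigma = {scale:.2f}")

# Return level: the value exceeded on average once every m observations
m = 3650                                            # ~once per decade of daily data
p_exceed = exceedances.size / ozone.size
return_level = threshold + stats.genpareto.ppf(
    1.0 - 1.0 / (m * p_exceed), shape, loc=0.0, scale=scale)
print(f"{m}-observation return level: {return_level:.1f} DU")
```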
Pericak-Vance, M A; Bass, M P; Yamaoka, L H; Gaskell, P C; Scott, W K; Terwedow, H A; Menold, M M; Conneally, P M; Small, G W; Vance, J M; Saunders, A M; Roses, A D; Haines, J L
1997-10-15
Four genetic loci have been identified as contributing to Alzheimer disease (AD), including the amyloid precursor protein gene, the presenilin 1 gene, the presenilin 2 gene, and the apolipoprotein E gene, but these do not account for all of the genetic risk for AD. To identify additional genetic risk factors for late-onset AD, a complete genomic screen was performed (N=280 markers). Critical values for chromosomal regional follow-up were a P value of .05 or less for affected-relative-pair or sibpair analysis, a parametric lod score of 1.0 or greater, or both. Regional follow-up included analysis of additional markers and a second data set. Participants came from clinic populations in the continental United States. From a series of multiplex families affected with late-onset (> or =60 years) AD, ascertained during the last 14 years (National Institute of Neurological Disorders and Stroke-Alzheimer's Disease and Related Disorders Association diagnostic criteria) and for which DNA had been obtained, a subset of 16 families (135 total family members, 52 of whom were patients with AD) was used for the genomic screen. A second subset of 38 families (216 total family members, 89 of whom were patients with AD) was used for the follow-up analysis. Linkage analysis results were generated using both genetic model-dependent (lod score) and model-independent methods. Fifteen chromosomal regions warranted initial follow-up. Follow-up analyses revealed 4 regions of continued interest on chromosomes 4, 6, 12, and 20, with the strongest results observed for chromosome 12. Peak 2-point affecteds-only lod scores (n=54) were 1.3, 1.6, 2.7, and 2.2, and affected-relative-pair P values (n=54) were .04, .03, .14, and .04 for D12S373, D12S1057, D12S1042, and D12S390, respectively. Sibpair analysis (n=54) resulted in maximum lod scores (MLSs) of 1.5, 2.6, 3.2, and 2.3 for these markers, with a peak multipoint MLS of 3.5. A priori stratification by APOE genotype identified 27 families that had at least 1 member with AD whose genotype did not contain an APOE*4 allele. Analysis of these 27 families resulted in MLSs of 1.0, 2.4, 3.7, and 3.3 and a peak multipoint MLS of 3.9. A complete genomic screen in families affected with late-onset AD identified 4 regions of interest after follow-up. Chromosome 12 gave the strongest and most consistent results, with a peak multipoint MLS of 3.5, suggesting that this region contains a new susceptibility gene for AD. Additional analyses are necessary to identify the chromosome 12 susceptibility gene and to follow up the regions of interest on chromosomes 4, 6, and 20.
NASA Astrophysics Data System (ADS)
Masiokas, M. H.; Villalba, R.; Christie, D. A.; Betman, E.; Luckman, B. H.; Le Quesne, C.; Prieto, M. R.; Mauget, S.
2012-03-01
The Andean snowpack is the main source of freshwater and arguably the single most important natural resource for the populated, semi-arid regions of central Chile and central-western Argentina. However, apart from recent analyses of instrumental snowpack data, very little is known about the long term variability of this key natural resource. Here we present two complementary, annually-resolved reconstructions of winter snow accumulation in the southern Andes between 30°-37°S. The reconstructions cover the past 850 years and were developed using simple regression models based on snowpack proxies with different inherent limitations. Rainfall data from central Chile (very strongly correlated with snow accumulation values in the adjacent mountains) were used to extend a regional 1951-2010 snowpack record back to AD 1866. Subsequently, snow accumulation variations since AD 1150 were inferred from precipitation-sensitive tree-ring width series. The reconstructed snowpack values were validated with independent historical and instrumental information. An innovative time series analysis approach allowed the identification of the onset, duration and statistical significance of the main intra- to multi-decadal patterns in the reconstructions and indicates that variations observed in the last 60 years are not particularly anomalous when assessed in a multi-century context. In addition to providing new information on past variations for a highly relevant hydroclimatic variable in the southern Andes, the snowpack reconstructions can also be used to improve the understanding and modeling of related, larger-scale atmospheric features such as ENSO and the PDO.
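The reconstructions rest on simple linear calibration of proxies against the instrumental record; the sketch below shows that idea on synthetic series (the proxy, overlap period, and coefficients are all placeholders, not the paper's data).

```python
import numpy as np
from scipy import stats

# Minimal sketch of a regression-based reconstruction: calibrate a proxy
# (e.g., a tree-ring width index or central-Chile winter rainfall) against
# the instrumental snowpack record, then apply the fitted model to the
# pre-instrumental proxy values.
rng = np.random.default_rng(7)
proxy_full = rng.normal(1.0, 0.2, size=860)              # e.g., AD 1150-2010
snow_obs = 3.0 * proxy_full[-60:] + rng.normal(0, 0.3, size=60)  # 1951-2010 overlap

slope, intercept, r, p, se = stats.linregress(proxy_full[-60:], snow_obs)
print(f"calibration r^2 = {r**2:.2f}")

snow_recon = intercept + slope * proxy_full              # full-length reconstruction
```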
Multiscale Poincaré plots for visualizing the structure of heartbeat time series.
Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L
2016-02-09
Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
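A compact sketch of the MSP construction as described: coarse-grain the RR series at several scales, then draw a Poincaré plot (RR_n versus RR_{n+1}) at each scale. The RR data here are synthetic, and the optional frequency-based coloring is omitted.

```python
import numpy as np
import matplotlib.pyplot as plt

def coarse_grain(x, scale):
    """Non-overlapping window averages, as in multiscale entropy analysis."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

# Synthetic RR-interval stand-in (seconds); real input would be an RR series.
rr = 0.8 + 0.05 * np.random.default_rng(3).standard_normal(5000)

scales = [1, 2, 4, 8]
fig, axes = plt.subplots(1, len(scales), figsize=(12, 3))
for ax, s in zip(axes, scales):
    y = coarse_grain(rr, s)
    ax.plot(y[:-1], y[1:], ".", markersize=2)   # Poincare plot: RR(n) vs RR(n+1)
    ax.set_title(f"scale {s}")
    ax.set_xlabel("RR$_n$ (s)")
axes[0].set_ylabel("RR$_{n+1}$ (s)")
plt.tight_layout()
plt.show()
```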
On the added value of forensic science and grand innovation challenges for the forensic community.
van Asten, Arian C
2014-03-01
In this paper the insights and results are presented of a long-term and ongoing improvement effort within the Netherlands Forensic Institute (NFI) to establish a valuable innovation programme. From the overall perspective of the role and use of forensic science in the criminal justice system, the concepts of Forensic Information Value Added (FIVA) and Forensic Information Value Efficiency (FIVE) are introduced. From these concepts the key factors determining the added value of forensic investigations are discussed: evidential value, relevance, quality, speed, and cost. By unravelling the added value of forensic science and combining this with future needs and scientific and technological developments, six forensic grand challenges are introduced: i) molecular photo-fitting; ii) chemical imaging, profiling, and age estimation of finger marks; iii) advancing forensic medicine; iv) objective forensic evaluation; v) the digital forensic service centre; and vi) real-time in-situ chemical identification. Finally, models for forensic innovation are presented that could lead to major international breakthroughs on all six themes within a five-year time span. This could cause a step change in the added value of forensic science and would make forensic investigative methods even more valuable than they already are today.
Added value of high-resolution regional climate model over the Bohai Sea and Yellow Sea areas
NASA Astrophysics Data System (ADS)
Li, Delei; von Storch, Hans; Geyer, Beate
2016-04-01
Added value from dynamical downscaling has long been a crucial and debated issue in regional climate studies. A 34-year (1979-2012) high-resolution (7 km grid) atmospheric hindcast over the Bohai Sea and the Yellow Sea (BYS) has been performed using COSMO-CLM (CCLM) forced by ERA-Interim reanalysis data (ERA-I). The accuracy of CCLM in reproducing surface wind and the added value of dynamical downscaling over ERA-I were investigated through comparisons with satellite data (including QuikSCAT Level 2B 12.5 km version 3 (L2B12v3) swath data and MODIS images) and in situ observations, using quantitative metrics and qualitative assessment methods. The results revealed that CCLM reliably reproduces the regional wind characteristics over the BYS areas. Over marine areas, added value relative to ERA-I was detected in coastal areas with complex coastlines and orography. CCLM represented light and moderate winds better, and added even more value for strong winds, relative to ERA-I. Over land areas, the high-resolution CCLM hindcast adds value to ERA-I in reproducing wind intensity and direction, the wind probability distribution, and extreme winds, mainly in mountain areas. With respect to atmospheric processes, CCLM outperforms ERA-I in resolving detailed temporal and spatial structures of a typhoon and of a coastal atmospheric front; CCLM also generates some orography-related phenomena, such as a vortex street, which are not captured by ERA-I. These added values demonstrate the utility of the 7-km-resolution CCLM for regional and local climate studies and applications. The simulation was constrained by spectral nudging; the results may differ for simulations that are not so constrained.
Chen, C P; Wan, J Z
1999-01-01
A fast learning algorithm is proposed to find the optimal weights of flat neural networks (in particular, the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems, so the network weights can be solved easily using a linear least-squares method. This formulation also makes it easy to update the weights instantly when a new pattern or a new enhancement node is added. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time series data sets, including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models that need more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
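A minimal sketch of the flat-network idea: expand the input with fixed nonlinear enhancement nodes, so the model is linear in its output weights and can be trained in one shot by least squares. The tanh enhancement, window length, and node count are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(X, W_enh, b_enh):
    # original inputs plus nonlinear enhancement nodes -> linear-in-weights model
    return np.hstack([X, np.tanh(X @ W_enh + b_enh)])

# Toy one-step-ahead prediction of a chaotic-looking series
x = np.sin(np.linspace(0, 50, 600)) + 0.3 * np.sin(np.linspace(0, 130, 600))
lag = 5
X = np.stack([x[i:i + lag] for i in range(len(x) - lag)])
y = x[lag:]

n_enh = 30
W_enh = rng.normal(size=(lag, n_enh))     # fixed random enhancement weights
b_enh = rng.normal(size=n_enh)
H = make_features(X, W_enh, b_enh)

w, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-shot least-squares training
print("training RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
```

Because the model stays linear in the trainable weights, adding a new pattern or a new enhancement node reduces to a low-rank update of the least-squares solution, which is what the paper's stepwise algorithm exploits.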
Abrupt recent shift in δ13C and δ15N values in Adélie penguin eggshell in Antarctica
Emslie, Steven D.; Patterson, William P.
2007-01-01
Stable isotope values of carbon (δ13C) and nitrogen (δ15N) in blood, feathers, eggshell, and bone have been used in seabird studies since the 1980s, providing a valuable source of information on diet, foraging patterns, and migratory behavior in these birds. These techniques can also be applied to fossil material when preservation of bone and other tissues is sufficient. Excavations of abandoned Adélie penguin (Pygoscelis adeliae) colonies in Antarctica often provide well preserved remains of bone, feathers, and eggshell dating from hundreds to thousands of years B.P. Herein we present an ≈38,000-year time series of δ13C and δ15N values of Adélie penguin eggshell from abandoned colonies located in three major regions of Antarctica. Results indicate an abrupt shift to lower-trophic prey in penguin diets within the past ≈200 years. We posit that penguins only recently began to rely on krill as a major portion of their diet, in conjunction with the removal of baleen whales and krill-eating seals during the historic whaling era. Our results support the “krill surplus” hypothesis that predicts excess krill availability in the Southern Ocean after this period of exploitation. PMID:17620620
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-18
... Airworthiness Directives; General Electric Company CF6-45 and CF6-50 Series Turbofan Engines AGENCY: Federal... airworthiness directive (AD) for General Electric Company (GE) CF6-45 and CF6-50 series turbofan engines. That..., and MD-10- 30F. The commenter stated that the proposed AD only listed these airplanes as a series. We...
ERIC Educational Resources Information Center
Costello, Ronald W.; Shuey, Barbara
2005-01-01
The Archdiocese of Indianapolis has been using the Sanders value-added model to determine whether it has had gains in student achievement in language arts, mathematics, and reading. The article summarizes the method used to make this determination and the results from three years of testing using the CTB McGraw-Hill TerraNova test. The archdiocese has…
ERIC Educational Resources Information Center
All, Anissa; Van Looy, Jan; Castellar, Elena Patricia Nuñez
2013-01-01
This study explores the added value of co-design in addition to other innovation research methods in the process of developing a serious game design document for a road safety game. The sessions aimed at exploring 4 aspects of a location-based game experience: themes, game mechanics, mobile phone applications and locations for mini-games. In…
Application of External Axis in Robot-Assisted Thermal Spraying
NASA Astrophysics Data System (ADS)
Deng, Sihao; Fang, Dandan; Cai, Zhenhua; Liao, Hanlin; Montavon, Ghislain
2012-12-01
Currently, industrial robots are widely used in thermal spraying because of their high efficiency, safety, and repeatability. Although robots are well suited to industrial production, they have some natural disadvantages arising from their six-axis mechanical linkages. When a robot performs a series of production stages, it can be difficult to move from one stage to another because some axes reach their limit values. For this reason, an external axis can be added to the robot system to extend the robots' reachable space. This article concerns the application of an external axis on ABB robots in thermal spraying, and different methods of off-line programming with an external axis in a virtual environment. The developed software toolkit was applied to coat a real workpiece with complex geometry in atmospheric plasma spraying.
Microstructural and electrical properties of PVA/PVP polymer blend films doped with cupric sulphate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemalatha, K.; Gowtham, G. K.; Somashekarappa, H., E-mail: drhssappa@gmail.com
2016-05-23
A series of polyvinyl alcohol (PVA)/polyvinyl pyrrolidone (PVP) polymer blends with different added concentrations of cupric sulphate (CuSO4) were prepared by the solution casting method and subjected to X-ray diffraction (XRD) and AC conductance measurements. An attempt has been made to study the changes in crystal imperfection parameters in PVA/PVP blend films with increasing CuSO4 concentration. The results show that the decrease in microcrystalline parameter values is accompanied by an increase in the amorphous content of the film, which is the reason the film has greater flexibility, biodegradability, and good ionic conductivity. AC conductance measurements in these films show that the conductivity increases as the CuSO4 concentration increases. These films are suitable for electrochemical applications.
PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS
NASA Technical Reports Server (NTRS)
Savage, M.
1994-01-01
The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common-block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The configuration-specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled with a Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104K bytes of memory. The program was developed in 1988.
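To make the life model concrete: with a two-parameter Weibull survival function for each component in the main load path, the system survival is the product of the component survivals, and the system L10 life is the time at which it falls to 90%. The component parameters below are hypothetical, not values from PSHFT.

```python
import numpy as np

components = [            # (name, Weibull slope beta, characteristic life eta [hours])
    ("input bearing",  1.5,  9000.0),
    ("output bearing", 1.5, 12000.0),
    ("pinion gear",    2.5, 15000.0),
    ("bull gear",      2.5, 20000.0),
]

def system_reliability(t):
    """Transmission survives only if every component survives."""
    r = 1.0
    for _, beta, eta in components:
        r *= np.exp(-(t / eta) ** beta)   # two-parameter Weibull survival
    return r

# System L10 life: the time at which system reliability drops to 90%
t_grid = np.linspace(1.0, 20000.0, 200_000)
l10 = t_grid[np.searchsorted(-system_reliability(t_grid), -0.90)]
print(f"system L10 life ~ {l10:.0f} hours")
```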
78 FR 79287 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
...We are adopting a new airworthiness directive (AD) for certain Bombardier, Inc. Model CL-600-2C10 (Regional Jet Series 700, 701, & 702), CL-600-2D15 (Regional Jet Series 705), and CL-600-2D24 (Regional Jet Series 900) airplanes. This AD was prompted by a report that traces of oil could be found in the crew oxygen system due to the use of incorrect pressure testing procedures during manufacturing. This AD requires cleaning the crew oxygen system. We are issuing this AD to detect and correct oil contaminants, which could cause an ignition and result in a fire in the oxygen system.
78 FR 17297 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-21
... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... (AD) for all Rolls-Royce plc (RR) RB211 Trent 500 series turbofan engines. That AD currently requires... 9, 2012), for all RR RB211 Trent 500 series turbofan engines. That AD requires a one-time inspection...
78 FR 11976 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-21
... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... (AD) for all Rolls-Royce plc (RR) RB211-524 series turbofan engines. That AD currently requires...-16724 (76 FR 40217, July 8, 2011), for all RR plc RB211-524 series turbofan engines. That AD required...
Added Value of SPECT/CT in the Evaluation of Benign Bone Diseases of the Appendicular Skeleton.
Abikhzer, Gad; Srour, Saher; Keidar, Zohar; Bar-Shalom, Rachel; Kagna, Olga; Israel, Ora; Militianu, Daniela
2016-04-01
Bone scintigraphy is a sensitive technique to detect altered bone mineralization but has limited specificity. The use of SPECT/CT has significantly improved the diagnostic accuracy of bone scintigraphy, in patients with cancer as well as in the evaluation of benign bone disease. It provides precise localization and characterization of tracer-avid foci, shortens the diagnostic workup, and decreases patient anxiety. Through both its SPECT and CT components, SPECT/CT has incremental value in characterizing benign bone lesions, specifically in the appendicular skeleton, as illustrated by the present case series.
Extreme events in total ozone: Spatio-temporal analysis from local to global scale
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; di Rocco, Stefania; Jancso, Leonhardt M.; Peter, Thomas; Davison, Anthony C.
2010-05-01
Recently tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) have been applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the internal data structure concerning extremes (Rieder et al., 2010a,b). A case study of the world's longest total ozone record (Arosa, Switzerland; for details see Staehelin et al., 1998a,b) illustrates that tools based on extreme value theory are appropriate to identify ozone extremes and to describe the tails of the total ozone record. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading of ozone-depleting substances led to a continuous modification of column ozone in the northern hemisphere, also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the extremal analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall the extremes concept provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values. The findings described above could also be confirmed for the total ozone records of 5 other long-term series (Belsk, Hohenpeissenberg, Hradec Kralove, Potsdam, Uccle), showing that the strong influence of atmospheric dynamics (NAO, ENSO) on total ozone is a common feature across the northern mid-latitudes (Rieder et al., 2010c). In a next step, frequency distributions of extreme events are analyzed on the global scale (northern and southern mid-latitudes). A specific focus here is whether findings gained through analysis of long-term European ground-based stations can be identified as a global phenomenon. By showing results from these three types of studies, an overview of extreme events in total ozone (and the dynamical and chemical features leading to them) is presented from local to global scales. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD.
Rieder, H.E., Jancso, L., Staehelin, J., Maeder, J.A., Ribatet, M., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
Nurse Value-Added and Patient Outcomes in Acute Care
Yakusheva, Olga; Lindrooth, Richard; Weiss, Marianne
2014-01-01
Objective: The aims of the study were to (1) estimate the relative nurse effectiveness, or individual nurse value-added (NVA), to patients' clinical condition change during hospitalization; (2) examine nurse characteristics contributing to NVA; and (3) estimate the contribution of value-added nursing care to patient outcomes. Data Sources/Study Setting: Electronic data on 1,203 staff nurses matched with 7,318 adult medical–surgical patients discharged between July 1, 2011 and December 31, 2011 from an urban Magnet-designated, 854-bed teaching hospital. Study Design: Retrospective observational longitudinal analysis using a covariate-adjustment value-added model with nurse fixed effects. Data Collection/Extraction Methods: Data were extracted from the study hospital's electronic patient records and human resources databases. Principal Findings: Nurse effects were jointly significant and explained 7.9 percent of variance in patient clinical condition change during hospitalization. NVA was positively associated with having a baccalaureate degree or higher (0.55, p = .04) and expertise level (0.66, p = .03). NVA contributed to patient outcomes of shorter length of stay and lower costs. Conclusions: Nurses differ in their value-added to patient outcomes. The ability to measure individual nurse relative value-added opens the possibility for development of performance metrics, performance-based rankings, and merit-based salary schemes to improve patient outcomes and reduce costs. PMID:25256089
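A minimal sketch of a covariate-adjustment value-added model with provider fixed effects, in the spirit of the design above; the column names, covariates, and data are hypothetical stand-ins for the study's richer clinical variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data: outcome is a clinical condition change
# score, with a nurse identifier and a couple of adjustment covariates.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "nurse_id": rng.integers(0, 50, n).astype(str),
    "condition_change": rng.normal(size=n),
    "baseline_severity": rng.normal(size=n),
    "age": rng.integers(18, 90, n),
})

# C(nurse_id) absorbs nurse fixed effects; their estimates are the NVA terms
fit = smf.ols("condition_change ~ baseline_severity + age + C(nurse_id)",
              data=df).fit()

nva = fit.params.filter(like="C(nurse_id)")   # relative nurse value-added
print(nva.sort_values().tail())
```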
Gifford, Katherine A; Phillips, Jeffrey S; Samuels, Lauren R; Lane, Elizabeth M; Bell, Susan P; Liu, Dandan; Hohman, Timothy J; Romano, Raymond R; Fritzsche, Laura R; Lu, Zengqi; Jefferson, Angela L
2015-07-01
A symptom of mild cognitive impairment (MCI) and Alzheimer's disease (AD) is a flat learning profile. Learning slope calculation methods vary, and the optimal method for capturing neuroanatomical changes associated with MCI and early AD pathology is unclear. This study cross-sectionally compared four different learning slope measures from the Rey Auditory Verbal Learning Test (simple slope, regression-based slope, two-slope method, peak slope) to structural neuroimaging markers of early AD neurodegeneration (hippocampal volume, cortical thickness in parahippocampal gyrus, precuneus, and lateral prefrontal cortex) across the cognitive aging spectrum [normal control (NC); (n=198; age=76±5), MCI (n=370; age=75±7), and AD (n=171; age=76±7)] in ADNI. Within diagnostic group, general linear models related slope methods individually to neuroimaging variables, adjusting for age, sex, education, and APOE4 status. Among MCI, better learning performance on simple slope, regression-based slope, and late slope (Trial 2-5) from the two-slope method related to larger parahippocampal thickness (all p-values<.01) and hippocampal volume (p<.01). Better regression-based slope (p<.01) and late slope (p<.01) were related to larger ventrolateral prefrontal cortex in MCI. No significant associations emerged between any slope and neuroimaging variables for NC (p-values ≥.05) or AD (p-values ≥.02). Better learning performances related to larger medial temporal lobe (i.e., hippocampal volume, parahippocampal gyrus thickness) and ventrolateral prefrontal cortex in MCI only. Regression-based and late slope were most highly correlated with neuroimaging markers and explained more variance above and beyond other common memory indices, such as total learning. Simple slope may offer an acceptable alternative given its ease of calculation.
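For concreteness, here is one plausible operationalization of the four slope measures compared above, for a single subject's RAVLT trials 1-5; the study's exact formulas may differ in detail.

```python
import numpy as np

trials = np.array([5, 8, 10, 11, 12], dtype=float)   # words recalled, trials 1-5
trial_no = np.arange(1, 6)

simple_slope = (trials[-1] - trials[0]) / 4.0        # (Trial 5 - Trial 1) / 4

# regression-based slope: least-squares fit of recall on trial number
regression_slope = np.polyfit(trial_no, trials, 1)[0]

# two-slope method: early (Trial 1-2) and late (Trial 2-5) segments
early_slope = trials[1] - trials[0]
late_slope = np.polyfit(trial_no[1:], trials[1:], 1)[0]

peak_slope = np.diff(trials).max()                   # largest single-trial gain

print(simple_slope, regression_slope, early_slope, late_slope, peak_slope)
```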
Isotopic Analysis of Uranium in NIST SRM Glass by Femtosecond Laser Ablation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duffin, Andrew M.; Hart, Garret L.; Hanlen, Richard C.
We employed femtosecond laser ablation multicollector inductively coupled plasma mass spectrometry for the determination of uranium isotope ratios in a series of standard reference material glasses (NIST 610, 612, 614, and 616). The uranium concentration in this series of SRM glasses is a combination of isotopically natural uranium in the materials used to make the glass matrix and isotopically depleted uranium added to increase the uranium elemental concentration across the series. Results for NIST 610 are in excellent agreement with literature values. However, other than atom percent 235U, little information is available for the remaining glasses. We present atom percent and isotope ratios for 234U, 235U, 236U, and 238U for all four glasses. Our results show deviations from the certificate values for the atom percent 235U, indicating the need for further examination of the uranium isotopes in NIST 610-616. Our results are fully consistent with two-component isotopic mixing between the depleted uranium spike and natural uranium in the bulk glass.
77 FR 48420 - Airworthiness Directives; BAE Systems (Operations) Limited Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-14
... 146-RJ series airplanes. This AD was prompted by reports of cracking and surface anomalies of the... responsible for having the actions required by this AD performed within the compliance times specified, unless..., General--Description,'' of Chapter 53, ``Fuselage,'' of the BAE SYSTEMS BAe 146 Series/AVRO 146-RJ Series...
78 FR 53080 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-28
...-102, -103, -106, -201, -202, -301, -311, and -315 series airplanes. This proposed AD was prompted by a... information identified in this proposed AD, contact Bombardier, Inc., Q-Series Technical Help Desk, 123... Series 100 Temporary Revision MRB-153, dated July 10, 2012, Part 1 Section 2-Systems, of the de Havilland...
76 FR 72130 - Airworthiness Directives; Pratt & Whitney JT9D Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-22
... the Airworthiness Limitations Section (ALS) of the manufacturer's Instructions for Continued... life-limited parts. This proposed AD would require additional revisions to the JT9D series engines ALS... all PW JT9D series turbofan engines. That AD requires revisions to the ALS of the manufacturer's ICA...
78 FR 4051 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-18
...-400D, 747-400F, 747SR, and 747SP series airplanes; and certain Model 757-200, -200PF, and -300 series... Model 757 series airplanes. This new AD adds airplanes to the applicability and revises the initial compliance times for those airplanes. This AD was prompted by reports of problems associated with the...
National survey of drinking and driving attitudes and behavior : 2001. Volume 2, Methods report
DOT National Transportation Integrated Search
2003-06-01
This report represents the sixth in a series of biennial national surveys undertaken by the National Highway Traffic Safety Administration (NHTSA) starting in 1991, and reports data from this sixth administration as well as those of the first five ad...
National survey of drinking and driving attitudes and behavior : 1999. Volume 2, Methods report
DOT National Transportation Integrated Search
2000-12-01
This report represents the fifth in a series of biennial national surveys undertaken by the National Highway Traffic Safety Administration (NHTSA) starting in 1991, and reports data from this fifth administration as well as those of the first four ad...
Lista, Simone; Molinuevo, Jose L; Cavedo, Enrica; Rami, Lorena; Amouyel, Philippe; Teipel, Stefan J; Garaci, Francesco; Toschi, Nicola; Habert, Marie-Odile; Blennow, Kaj; Zetterberg, Henrik; O'Bryant, Sid E; Johnson, Leigh; Galluzzi, Samantha; Bokde, Arun L W; Broich, Karl; Herholz, Karl; Bakardjian, Hovagim; Dubois, Bruno; Jessen, Frank; Carrillo, Maria C; Aisen, Paul S; Hampel, Harald
2015-09-24
There is evolving evidence that individuals categorized with subjective cognitive decline (SCD) are potentially at higher risk for developing objective and progressive cognitive impairment compared to cognitively healthy individuals without apparent subjective complaints. Interestingly, SCD, during advancing preclinical Alzheimer's disease (AD), may denote very early, subtle cognitive decline that cannot be identified using established standardized tests of cognitive performance. The substantial heterogeneity of existing SCD-related research data has led the Subjective Cognitive Decline Initiative (SCD-I) to establish an international consensus on the definition of a conceptual research framework on SCD in preclinical AD. In the area of biological markers, the cerebrospinal fluid signature of AD has been reported to be more prevalent in subjects with SCD compared to healthy controls; moreover, there is pronounced atrophy, as demonstrated by magnetic resonance imaging, and increased hypometabolism, as revealed by positron emission tomography, in characteristic brain regions affected by AD. In addition, SCD individuals carrying an apolipoprotein ɛ4 allele are more likely to display AD-phenotypic alterations. The urgent requirement to detect and diagnose AD as early as possible has led to the critical examination of the diagnostic power of biological markers, neurophysiology, and neuroimaging methods for AD-related risk and clinical progression in individuals defined with SCD. Observational studies on the predictive value of SCD for developing AD may potentially be of practical value, and an evidence-based, validated, qualified, and fully operationalized concept may inform clinical diagnostic practice and guide earlier designs in future therapy trials.
2011-03-23
practical max impulse to 1 mNs. The newly developed Piezoelectric Impact Hammer (PIH) calibration system overcomes geometric limits of ESC...the fins to behave as part of an LRC circuit, which results in voltage oscillations. By adding a resistor in series between the pulse generator and...series resistor, as well as the effects of no loading on the pulse generator. III. PIEZOELECTRIC IMPACT HAMMER SYSTEM. The second calibration method tested
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; Stübi, Rene; Weihs, Philipp; Holawe, Franz
2010-05-01
In this study tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the internal data structure concerning extremes. The study illustrates that tools based on extreme value theory are appropriate to identify ozone extremes and to describe the tails of the world's longest total ozone record (Arosa, Switzerland; for details see Staehelin et al., 1998a,b) (Rieder et al., 2010a). A daily moving threshold was implemented to account for the seasonal cycle in total ozone. The frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone, and the influence of these on mean values and trends, is analyzed for the Arosa total ozone time series. The results show (a) an increase in ELOs and (b) a decrease in EHOs during the last decades, and (c) that the overall trend during the 1970s and 1980s in total ozone is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (a reduction by a factor of 2.5 for the trend in the annual mean). Furthermore, it is shown that the fitted model represents the tails of the total ozone data set with very high accuracy over the entire range (including absolute monthly minima and maxima). The frequency distribution of ozone mini-holes (using constant thresholds) can also be calculated with high accuracy. Analyzing the tails, instead of a small fraction of days below constant thresholds, provides deeper insight into time series properties. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading of ozone-depleting substances led to a continuous modification of column ozone in the northern hemisphere, also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall the presented new extremes concept provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD.
Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
An improved method for testing tension properties of fiber-reinforced polymer rebar
NASA Astrophysics Data System (ADS)
Yuan, Guoqing; Ma, Jian; Dong, Guohua
2010-03-01
We have conducted a series of tests to measure the tensile strength and modulus of elasticity of fiber reinforced polymer (FRP) rebar. In these tests, the ends of each rebar specimen were embedded in steel tubes filled with expansive cement, and the rebar was loaded by gripping the tubes with a conventional fixture during the tensile tests. However, most specimens failed at the ends, where the section changed abruptly. Numerical simulations of the stress field at the bar ends in such tests, performed with ANSYS, revealed that these unexpected failure modes were caused by the test setup: the abrupt change of section induced a stress concentration, so the test results had to be regarded as invalid. An improved testing method is developed in this paper to avoid this issue. A transition part was added between the free segment of the rebar and the tube, which effectively eliminates the stress concentration and thus yields more accurate values for the properties of FRP rebar. The validity of the proposed method was demonstrated by both experimental tests and numerical analysis.
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has been much recent work on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that convergence is insensitive to the values of the regularization and reconstruction parameters.
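A toy illustration of the log-barrier interior-point scheme described above, applied to a generic nonnegativity-constrained least-squares problem rather than the paper's Poisson/TV PET objective; plain gradient descent with Armijo backtracking stands in for the paper's PCG solver.

```python
import numpy as np
from scipy.optimize import nnls

# Minimize ||Ax - b||^2 subject to x >= 0, via a sequence of unconstrained
# barrier subproblems  t*f(x) - sum(log x_i)  with increasing t.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
b = rng.normal(size=40)

def obj(x, t):
    if np.any(x <= 0.0):
        return np.inf                      # outside the feasible interior
    return t * np.sum((A @ x - b) ** 2) - np.sum(np.log(x))

def grad(x, t):
    return t * 2.0 * A.T @ (A @ x - b) - 1.0 / x

x = np.ones(20)                            # strictly feasible starting point
t = 1.0
for _ in range(12):                        # outer loop: barrier weight doubles
    for _ in range(300):                   # inner loop: descend the subproblem
        g = grad(x, t)
        step = 1.0
        # Armijo backtracking; obj() returning inf keeps iterates interior
        while obj(x - step * g, t) > obj(x, t) - 0.5 * step * (g @ g):
            step *= 0.5
            if step < 1e-14:
                break
        x = x - step * g
    t *= 2.0

print("solution min entry:", x.min())          # stays strictly positive
print("gap to scipy nnls :", np.abs(x - nnls(A, b)[0]).max())
```

The backtracking line search doubles as the feasibility guard here, since the barrier objective is infinite outside the positive orthant; the paper's bend line search plays an analogous role for its PCG iterates.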
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of the model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to parameter values with fractional errors of no more than 10^-4, with the largest errors occurring for small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures for model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
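A short sketch of fitting the autoregressive model discussed above to a simulated series; statsmodels stands in for the paper's FORTRAN implementation, and the AR(2) coefficients are arbitrary.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for i in range(2, n):
    x[i] = 0.6 * x[i - 1] - 0.3 * x[i - 2] + rng.normal()

# Fit an autoregressive model of order 2 and inspect the recovered parameters
fit = AutoReg(x, lags=2).fit()
print(fit.params)          # intercept, lag-1, lag-2 coefficients
print(fit.aic)
```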
NASA Astrophysics Data System (ADS)
Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.
2018-03-01
In the development of information systems and software for predicting dynamic series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking the nonlinearities of a series into account. In this paper, we propose a modified algorithm for predicting dynamic series, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction by the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points and the corresponding time values, formed using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamic series, or serve as one part of a forecasting system. The efficiency of predicting the evolution of a dynamic series for short-term one-step and long-term multi-step forecasts is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The result of this modification is a reduction of the iterative error that accumulates when previously predicted values are fed back as inputs to the neural network, as well as increased accuracy of the network's iterative predictions.
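A minimal sliding-window MLP forecaster in the spirit of the approach described, including the iterative multi-step loop where the feedback error discussed above accumulates; scikit-learn, the window length, and the hyperparameters are illustrative choices, and the paper's extremum-point input encoding is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 800)) + 0.1 * rng.standard_normal(800)

# Sliding windows: each row of X holds `window` past values, y the next value
window = 10
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

split = 700
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])

# Multi-step (iterative) forecast: feed each prediction back into the window
history = list(series[split:split + window])
forecast = []
for _ in range(50):
    nxt = mlp.predict(np.array(history[-window:])[None, :])[0]
    forecast.append(nxt)
    history.append(nxt)
print(np.round(forecast[:5], 3))
```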
NASA Astrophysics Data System (ADS)
Kartashov, E. M.
1986-10-01
Analytical methods for solving boundary value problems for the heat conduction equation with heterogeneous boundary conditions on lines, on a plane, and in space are briefly reviewed. In particular, the method of dual integral equations and paired summation series is examined with reference to stationary processes. A table of principal solutions to dual integral equations and paired summation series is proposed, which presents the known results in a systematic manner. Newly obtained results are presented in addition to the known ones.
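For orientation, dual integral equations of the kind tabulated typically take the following standard textbook (Sneddon-type) form, shown here generically rather than in Kartashov's exact notation:

$$
\int_0^{\infty} A(\lambda)\, J_0(\lambda r)\, d\lambda = f(r), \quad 0 \le r < a; \qquad
\int_0^{\infty} \lambda\, A(\lambda)\, J_0(\lambda r)\, d\lambda = 0, \quad r > a,
$$

with the paired summation-series analogue replacing the integrals by trigonometric or Bessel series over the two complementary intervals; the unknown $A(\lambda)$ (or the series coefficients) is determined by requiring both conditions simultaneously.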
Scargill, J J; Reed, P; Kane, J
2013-01-01
Measurement of fractionated plasma or urine metadrenalines is the recommended screening test in the diagnosis of phaeochromocytoma, with clinical cut-offs geared towards diagnostic sensitivity. Current practice at Salford Royal Hospital is to add urine catecholamines onto samples with raised urine metadrenalines, with the aim of adding specificity to a diagnosis of phaeochromocytoma. This practice was reviewed by identifying a series of patients with raised urine metadrenalines who had catecholamines reflectively added. A total of 358 samples were identified from 242 patients, of which 228 had urine catecholamines measured. A diagnosis of 'phaeochromocytoma' (n = 41) or 'no phaeochromocytoma' (n = 90) was obtained in 131 of 228 patients, giving raised urine metadrenalines a positive predictive value for phaeochromocytoma of 31%. The finding of increased urine catecholamines in samples with raised urine metadrenalines increased specificity for phaeochromocytoma to 70%. However, 95% diagnostic specificity for phaeochromocytoma could be achieved by the introduction of a second cut-off for urine metadrenalines geared towards maximizing specificity. Consideration of the degree of increase in urine metadrenalines is a superior method of determining the likelihood of phaeochromocytoma than measurement of urine catecholamines.
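The quoted predictive value follows directly from the counts above: 41 of the 131 definitively diagnosed patients had phaeochromocytoma, so

$$
\mathrm{PPV} = \frac{41}{41 + 90} \approx 0.31 = 31\%.
$$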
NASA Technical Reports Server (NTRS)
Choi, Taeyoung; Xiong, Xiaoxiong; Angal, Amit; Chander, Gyanesh; Qu, John J.
2014-01-01
The objective of this paper is to formulate a methodology to assess the spectral stability of the Libya 4, Libya 1, and Mauritania 2 pseudo-invariant calibration sites (PICS) using the Earth Observing One (EO-1) Hyperion sensor. All the available Hyperion collections, downloaded from the Earth Explorer website, were utilized for the three PICS. In each site, a reference spectrum is selected at a specific day in the vicinity of the region of interest (ROI) defined by the Committee on Earth Observation Satellites (CEOS). A series of ROIs are predefined in the along-track direction with 196 spectral top-of-atmosphere reflectance values in each ROI. Based on the reference ROI spectrum, the spectral stability of these ROIs is evaluated by average deviation (AD) and spectral angle mapper (SAM) methods in the specific ranges of time and geospatial locations. Time- and ROI-location-dependent SAM and AD results are very stable, within ±2° and ±1.7% at 1σ standard deviation. Consequently, the Libya 4, Mauritania 2, and Libya 1 CEOS-selected PICS are spectrally stable targets within the time and spatial swath ranges of the Hyperion collections.
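Both stability metrics named above have simple vector forms; the sketch below is a hedged implementation on synthetic spectra (the AD definition as a mean relative deviation is my assumption from context).

```python
import numpy as np

def spectral_angle(ref, test):
    """Spectral angle mapper: angle (degrees) between two reflectance spectra."""
    cos = np.dot(ref, test) / (np.linalg.norm(ref) * np.linalg.norm(test))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def average_deviation(ref, test):
    """Mean relative deviation (%) of a test spectrum from the reference (assumed definition)."""
    return 100.0 * np.mean((test - ref) / ref)

rng = np.random.default_rng(2)
reference = np.linspace(0.2, 0.6, 196)                 # 196-band TOA reflectance of a reference ROI
test = reference * (1 + 0.01 * rng.normal(size=196))   # a nearby ROI with ~1% noise

print("SAM (deg):", spectral_angle(reference, test))
print("AD (%):   ", average_deviation(reference, test))
```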
Wireless Sensor Network Quality of Service Improvement on Flooding Attack Condition
NASA Astrophysics Data System (ADS)
Hartono, R.; Widyawan; Wibowo, S. B.; Purnomo, A.; Hartatik
2018-03-01
There are two methods of building communication using wireless media. The first is to build a base infrastructure as an intermediary between users; the problems that arise with this type of network are the limited space for building physical infrastructure and the cost factor. The second is to build an ad hoc network between the users who will communicate. On an ad hoc network, each user must be willing to forward data from source to destination for communication to occur. One ad hoc network protocol, Ad hoc On-demand Distance Vector (AODV), has the smallest overhead, adapts easily to dynamic networks, and has small control messages. One drawback of the AODV protocol is the security of its route-finding process for sending data. In this research, the AODV protocol is optimized by determining the best Expanding Ring Search (ERS) value. A random topology is used with 25, 50, 75, 100, 125, and 150 nodes moving at 10 m/s in an area of 1000 m x 1000 m under flooding-attack conditions. The parameters measured are throughput, packet delivery ratio, average delay, and normalized routing load. With the best ERS value, throughput increased by 5.67%, packet delivery ratio increased by 5.73%, and normalized routing load decreased by 4.66%. The optimal ERS value depends on the number of nodes in the network.
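AODV's expanding ring search floods a route request with a small TTL first and enlarges the ring only when no route reply arrives (TTL_START, TTL_INCREMENT, and TTL_THRESHOLD are the standard AODV tuning knobs). The sketch below illustrates that control-overhead trade-off with a toy cost model and illustrative defaults; it is not the paper's simulation.

```python
# Hedged sketch of AODV's expanding ring search (ERS): issue route requests
# with a small TTL first and widen the ring only on failure. The "best ERS
# value" in the paper corresponds to tuning these parameters per node count.
def expanding_ring_search(route_found_at_ttl, ttl_start=1, ttl_increment=2,
                          ttl_threshold=7, network_diameter=35):
    ttl, cost = ttl_start, 0
    while True:
        cost += ttl  # flooding cost grows roughly with the ring radius (toy model)
        if route_found_at_ttl <= ttl:
            return ttl, cost          # destination reached within this ring
        if ttl >= ttl_threshold:
            ttl = network_diameter    # fall back to a full network-wide flood
        else:
            ttl += ttl_increment

print(expanding_ring_search(route_found_at_ttl=5))   # found within a small ring
print(expanding_ring_search(route_found_at_ttl=20))  # needs a full flood
```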
NASA Astrophysics Data System (ADS)
Sulistyowati, L.; Pardian, P.; Syamsyiah, N.; Deliana, Y.
2018-03-01
In Indonesia's national economic development, Small and Medium Enterprises (SMEs) are a development priority, because SMEs can be the backbone of the populist economic system and reduce poverty. In addition, the development of SMEs expands the economic base, contributes to increased added value, and opens employment opportunities in rural areas. Indramayu is one of the three mango production centers in West Java, and it faces the problem that about 20% of the mangoes are not worth selling. This opportunity is utilized by women who are members of KUB (Joint Business Group) to process the rejected fruit into mango dodol at household scale, but the effort has not spread widely and is pioneered by only a small portion of women. This study aims to observe what drives women to participate in the processing of mango dodol, whether the mango dodol processing business is profitable, and how much added value is obtained. The study uses the case study method, with interviews, participant observation, and documentation study for data collection, and the Hayami value-added method and descriptive analysis for data analysis. The results reveal that the factors affecting women's participation in dodol processing are increasing family income, making use of spare time, and making use of rejected mangoes. The added value obtained in mango dodol processing is Rp 50,600 per kilogram of input, with a value-added ratio of 52.8%. For the development of mango SMEs, training and socialization in good and hygienic dodol processing according to SOP (Standard Operating Procedure) from the relevant institutions are needed, along with innovation in packaging, pioneering business partnerships with stores in the city of Indramayu and surrounding areas, and financing support from banks at affordable interest rates.
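In the Hayami method, added value per kilogram of input is the output value minus intermediate input costs, and the value-added ratio divides this by the output value. The arithmetic below reproduces the reported figures; the output value and cost figures are back-calculated, hypothetical numbers, since the abstract reports only the added value and the ratio.

```python
# Hedged illustration of the Hayami value-added calculation per kg of input.
output_value = 95_800        # Rp per kg of mango input (hypothetical, back-calculated)
intermediate_costs = 45_200  # raw materials and other inputs, Rp per kg (hypothetical)

added_value = output_value - intermediate_costs
ratio = added_value / output_value

print(f"added value: Rp {added_value:,}/kg, ratio: {ratio:.1%}")
# -> added value: Rp 50,600/kg, ratio: 52.8%
```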
Shahraki, Somaye; Mansouri-Torshizi, Hassan; Sori Nezami, Ziba; Ghahghaei, Arezou; Yaghoubi, Fatemeh; Divsalar, Adeleh; Saboury, Ali-Akbar; H. Shirazi, Farshad
2014-01-01
In-depth interaction studies between calf thymus deoxyribonucleic acid (CT-DNA) and a series of four structurally related palladium(II) complexes [Pd(en)(HB)](NO3)2 (a-d), where en is ethylenediamine and the heterocyclic base (HB) is 2,2'-bipyridine (bpy, a); 1,10-phenanthroline (phen, b); dipyridoquinoxaline (dpq, c) or dipyridophenazine (dppz, d), were performed. These studies were carried out utilizing electronic absorption spectroscopy, fluorescence spectra, ethidium bromide (EBr) displacement, and gel filtration techniques. Complexes a-d cooperatively bind and denature the DNA at low concentrations. Their concentration at the midpoint of transition, L1/2, follows the order a >> b > c > d. Also g, the number of binding sites per 1000 nucleotides, follows the order a >> b ~ c > d. EBr and Scatchard experiments for complexes a-d suggest efficient intercalative binding affinity to CT-DNA in the order d > c > b > a. Several binding and thermodynamic parameters are also described. The biological activity of these cationic and water-soluble palladium complexes was tested against the chronic myelogenous leukemia cell line K562. Complexes b, c, and d show cytotoxic concentration (Cc50) values much lower than cisplatin. PMID:25587317
Smokers' Willingness to Protect Children from Secondhand Smoke
ERIC Educational Resources Information Center
King, Keith A.; Vidourek, Rebecca A.; Creighton, Stephanie; Vogel, Stephanie
2003-01-01
Objectives: To examine the effectiveness of a secondhand smoke media campaign on adult smokers' willingness to protect children from secondhand smoke. Methods: Following a series of community awareness ads, a random sample of 390 adult smokers was surveyed via telephone regarding their perceptions of secondhand smoke. Results: Seeing or hearing…
76 FR 68660 - Airworthiness Directives; Pratt & Whitney Division (PW) PW4000 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-07
... Airworthiness Directives; Pratt & Whitney Division (PW) PW4000 Series Turbofan Engines AGENCY: Federal Aviation... airworthiness directive (AD) for PW4000 series turbofan engines. This proposed AD would require replacing the..., PW4152, PW4156, PW4156A, PW4158, PW4160, PW4460, PW4462, and PW4650 turbofan engines, including models...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... Bombardier-Rotax engines in Europe. Differences Between the Proposed AD and the Service Information Rotax... GmbH Type 912 F, 912 S, and 914 F Series Reciprocating Engines AGENCY: Federal Aviation Administration... and 914 F series reciprocating engines. That AD currently requires initial and repetitive visual...
Strauss, Ludwig G; Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2011-03-01
(18)F-FDG kinetics are quantified by a 2-tissue-compartment model. The routine use of dynamic PET is limited because of this modality's 1-h acquisition time. We evaluated shortened acquisition protocols up to 0-30 min regarding the accuracy for data analysis with the 2-tissue-compartment model. Full dynamic series for 0-60 min were analyzed using a 2-tissue-compartment model. The time-activity curves and the resulting parameters for the model were stored in a database. Shortened acquisition data were generated from the database using the following time intervals: 0-10, 0-16, 0-20, 0-25, and 0-30 min. Furthermore, the impact of adding a 60-min uptake value to the dynamic series was evaluated. The datasets were analyzed using dedicated software to predict the results of the full dynamic series. The software is based on a modified support vector machines (SVM) algorithm and predicts the compartment parameters of the full dynamic series. The SVM-based software provides user-independent results and was accurate at predicting the compartment parameters of the full dynamic series. If a squared correlation coefficient of 0.8 (corresponding to 80% explained variance of the data) was used as a limit, a shortened acquisition of 0-16 min was accurate at predicting the 60-min 2-tissue-compartment parameters. If a limit of 0.9 (90% explained variance) was used, a dynamic series of at least 0-20 min together with the 60-min uptake values is required. Shortened acquisition protocols can be used to predict the parameters of the 2-tissue-compartment model. Either a dynamic PET series of 0-16 min or a combination of a dynamic PET/CT series of 0-20 min and a 60-min uptake value is accurate for analysis with a 2-tissue-compartment model.
Enhancement in Thermoelectric Properties of TiS2 by Sn Addition
NASA Astrophysics Data System (ADS)
Ramakrishnan, Anbalagan; Raman, Sankar; Chen, Li-Chyong; Chen, Kuei-Hsien
2018-06-01
A series of Sn-added TiS2 (TiS2:Snx; x = 0, 0.05, 0.075 and 0.1) was prepared by solid-state synthesis with subsequent annealing. The Sn atoms interacted with sulfur atoms in TiS2 and formed a trace amount of a misfit-layer (SnS)1+m(TiS2-δ)n compound with sulfur deficiency. A significant reduction in electrical resistivity with a moderate decrease in the Seebeck coefficient was observed in Sn-added TiS2. Hence, a maximum power factor of 1.71 mW/m-K^2 at 373 K was obtained in TiS2:Sn0.05. In addition, the thermal conductivity decreased with Sn addition and reached a minimum value of 2.11 W/m-K at 623 K in TiS2:Sn0.075, due to impurity phase (misfit phase) and defect (excess Ti) scattering. The zT value increased from 0.08 in pristine TiS2 to an optimized value of 0.46 at 623 K in TiS2:Sn0.05.
Effect of barnacle fouling on ship resistance and powering.
Demirel, Yigit Kemal; Uzun, Dogancan; Zhang, Yansheng; Fang, Ho-Chun; Day, Alexander H; Turan, Osman
2017-11-01
Predictions of added resistance and the effective power of ships were made for varying barnacle fouling conditions. A series of towing tests was carried out using flat plates covered with artificial barnacles. The tests were designed to allow the examination of the effects of barnacle height and percentage coverage on the resistance and effective power of ships. The drag coefficients and roughness function values were evaluated for the flat plates. The roughness effects of the fouling conditions on the ships' frictional resistances were predicted. Added resistance diagrams were then plotted using these predictions, and powering penalties for these ships were calculated using the diagrams generated. The results indicate that the effect of barnacle size is significant, since a 10% coverage of barnacles each 5 mm in height caused a similar level of added power requirements to a 50% coverage of barnacles each 1.25 mm in height.
Wardak, Mirwais; Wong, Koon-Pong; Shao, Weber; Dahlbom, Magnus; Kepe, Vladimir; Satyamurthy, Nagichettiar; Small, Gary W.; Barrio, Jorge R.; Huang, Sung-Cheng
2010-01-01
Head movement during a PET scan (especially a dynamic scan) can affect both the qualitative and quantitative aspects of an image, making it difficult to accurately interpret the results. The primary objective of this study was to develop a retrospective image-based movement correction (MC) method and evaluate its implementation on dynamic [18F]-FDDNP PET images of cognitively intact controls and patients with Alzheimer's disease (AD). Methods: Dynamic [18F]-FDDNP PET images, used for in vivo imaging of beta-amyloid plaques and neurofibrillary tangles, were obtained from 12 AD patients and 9 age-matched controls. For each study, a transmission scan was first acquired for attenuation correction. An accurate retrospective MC method that corrected for transmission-emission misalignment as well as emission-emission misalignment was applied to all studies. No assumption of zero movement was made between the transmission scan and the first emission scan. Logan analysis with the cerebellum as the reference region was used to estimate various regional distribution volume ratio (DVR) values in the brain before and after MC. Discriminant analysis was used to build a predictive model for group membership, using data with and without MC. Results: MC improved the image quality and quantitative values in [18F]-FDDNP PET images. In this subject population, the medial temporal region (MTL) did not show a significant difference between controls and AD before MC. However, after MC, significant differences in DVR values were seen in frontal, parietal, posterior cingulate (PCG), MTL, lateral temporal (LTL), and global regions between the two groups (P < 0.05). In controls and AD, the variability of regional DVR values (as measured by the coefficient of variation) decreased on average by >18% after MC. Mean DVR separation between controls and AD was higher in frontal, MTL, LTL and global regions after MC. Group classification by discriminant analysis based on [18F]-FDDNP DVR values was markedly improved after MC. Conclusion: The streamlined and easy-to-use MC method presented in this work significantly improves the image quality and the measured tracer kinetics of [18F]-FDDNP PET images. The proposed MC method has the potential to be applied to PET studies of patients with other disorders (e.g., Down syndrome and Parkinson's disease) and to brain PET scans with other molecular imaging probes. PMID:20080894
Resonance-induced sensitivity enhancement method for conductivity sensors
NASA Technical Reports Server (NTRS)
Tai, Yu-Chong (Inventor); Shih, Chi-yuan (Inventor); Li, Wei (Inventor); Zheng, Siyang (Inventor)
2009-01-01
Methods and systems for improving the sensitivity of a variety of conductivity sensing devices, in particular capacitively-coupled contactless conductivity detectors. A parallel inductor is added to the conductivity sensor, and the sensor with the parallel inductor is operated at a resonant frequency of the equivalent circuit model. At the resonant frequency, the parasitic capacitances that are either in series or in parallel with the conductance (and possibly a series resistance) are substantially removed from the equivalent circuit, leaving a purely resistive impedance. An appreciably higher sensor sensitivity results. Experimental verification shows that sensitivity improvements of the order of 10,000-fold are possible. Examples of detecting particulates with high precision by application of the apparatus and methods of operation are described.
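The resonance condition itself is the standard LC relation; a minimal sketch, with illustrative component values (not the patent's), is:

```python
import math

# At f0 = 1 / (2*pi*sqrt(L*C)) the reactances of the parallel inductor L and
# the parasitic capacitance C cancel, leaving a purely resistive impedance,
# so changes in solution conductance dominate the signal.
L = 100e-6   # parallel inductor, henries (illustrative)
C = 10e-12   # parasitic capacitance, farads (illustrative)

f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f0 / 1e6:.2f} MHz")
```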
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-06
...-0674; Directorate Identifier 2009-NE-25-AD] RIN 2120-AA64 Airworthiness Directives; Rolls-Royce plc... airworthiness directive (AD) for Rolls-Royce plc RB211-Trent 800 series turbofan engines. That AD currently... through Friday, except Federal holidays. Fax: (202) 493-2251. Contact Rolls-Royce plc, P.O. Box 31, DERBY...
Tani, Yuji; Ogasawara, Katsuhiko
2012-01-01
This study aimed to contribute to the management of a healthcare organization by providing management information using time-series analysis of business data accumulated in the hospital information system, which has not been utilized thus far. In this study, we examined the performance of the prediction method using the auto-regressive integrated moving-average (ARIMA) model, using the business data obtained at the Radiology Department. We made the model using the data used for analysis, which was the number of radiological examinations in the past 9 years, and we predicted the number of radiological examinations in the last 1 year. Then, we compared the actual value with the forecast value. We were able to establish that the performance prediction method was simple and cost-effective by using free software. In addition, we were able to build the simple model by pre-processing the removal of trend components using the data. The difference between predicted values and actual values was 10%; however, it was more important to understand the chronological change rather than the individual time-series values. Furthermore, our method was highly versatile and adaptable compared to the general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
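A hedged sketch of the workflow described above using statsmodels' ARIMA (one widely available free implementation); the synthetic monthly series, model order, and error metric are my assumptions, since the hospital's data are not public.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for 10 years of monthly examination counts:
# trend + annual seasonality + noise.
rng = np.random.default_rng(3)
idx = pd.date_range("2001-01-01", periods=120, freq="MS")
counts = (1000 + 2 * np.arange(120)
          + 50 * np.sin(2 * np.pi * np.arange(120) / 12)
          + rng.normal(0, 20, 120))
series = pd.Series(counts, index=idx)

# Fit on the first 9 years, predict the last year (as in the abstract).
train, test = series[:-12], series[-12:]
model = ARIMA(train, order=(1, 1, 1)).fit()  # differencing removes the trend
forecast = model.forecast(steps=12)

mape = np.mean(np.abs((forecast.values - test.values) / test.values)) * 100
print(f"mean absolute percentage error: {mape:.1f}%")
```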
Ultrasonic Evaluation of the Pull-Off Adhesion between Added Repair Layer and a Concrete Substrate
NASA Astrophysics Data System (ADS)
Czarnecki, Slawomir
2017-10-01
This paper concerns the evaluation of the pull-off adhesion between a concrete added repair layer with variable thickness and a concrete substrate, based on parameters assessed using the ultrasonic pulse velocity (UPV) method. In construction practice, the experimental determination of the pull-off adhesion fb between an added repair layer and a concrete substrate is necessary to assess the quality of a repair. This is usually carried out with the pull-off method, which results in local damage to the added concrete layer in all the testing areas. Bearing this in mind, it is important to develop a method without these disadvantages. The prediction of the pull-off adhesion of two-layer concrete elements with variable thickness of each layer can be provided by means of the UPV method with two-sided access to the investigated element. For this purpose, two-layer cylindrical specimens were obtained by drilling boreholes from a large, specially prepared concrete element. These two-layer elements were made of a concrete substrate layer and Polymer Cement Concrete (PCC) mortar as the added repair layer. The pull-off adhesion fb of the elements was determined before the samples were obtained, using the semi-destructive pull-off method. The ultrasonic wave velocity was determined in samples with variable thickness of each layer and was then compared to the theoretical ultrasonic wave velocity predicted for those specimens. A regression curve was fitted for the dependence between velocity and the pull-off adhesion determined by the pull-off method. It was shown that the pull-off adhesion fb between an added repair layer with variable thickness and the substrate layer increases as the ratio of the measured ultrasonic wave velocity to the theoretical ultrasonic wave velocity increases.
Partition functions with spin in AdS2 via quasinormal mode methods
Keeler, Cynthia; Lisbão, Pedro; Ng, Gim Seng
2016-10-12
We extend the results of [1], computing one-loop partition functions for massive fields with spin half in AdS2 using the quasinormal mode method proposed by Denef, Hartnoll, and Sachdev [2]. We find the finite representations of SO(2,1) for spin zero and spin half, consisting of a highest-weight state |h⟩ and descendants with non-unitary values of h. These finite representations capture the poles and zeroes of the one-loop determinants. Together with the asymptotic behavior of the partition functions (which can be easily computed using a large-mass heat kernel expansion), these are sufficient to determine the full answer for the one-loop determinants. We also discuss extensions to higher-dimensional AdS2n and higher spins.
Numerical Grid Generation and Potential Airfoil Analysis and Design
1988-01-01
Jacobi, Gauss-Seidel, SOR, and ADI iterative methods are discussed. In the Jacobi method, each new value of the function is computed entirely from the old values of the preceding iteration, adding the inhomogeneous (boundary condition) term. In the Gauss-Seidel method, values already updated during the current sweep are used as soon as they become available. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. Successive over-relaxation (SOR) extends the Gauss-Seidel method.
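A minimal sketch of the two point-iterative schemes described above on a small diagonally dominant system (my example; the report applies them to grid-generation equations):

```python
import numpy as np

# Diagonally dominant system A x = b (dominance guarantees Gauss-Seidel convergence).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])

def jacobi(A, b, iters=25):
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        # every component is updated from the *previous* iterate only
        x = (b - (A @ x - D * x)) / D
    return x

def gauss_seidel(A, b, iters=25):
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            # already-updated components of x are used immediately
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

print("Jacobi:      ", jacobi(A, b))
print("Gauss-Seidel:", gauss_seidel(A, b))
print("exact:       ", np.linalg.solve(A, b))
```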
Development of a new British Geological Survey (BGS) Map Series: Seabed Geomorphology
NASA Astrophysics Data System (ADS)
Dove, Dayton
2015-04-01
BGS scientists are developing a new offshore map series, Seabed Geomorphology (1:50k), to join the existing 1:250k 'Sea Bed Sediments', 'Quaternary Geology', and 'Solid Geology' map series. The increasing availability of extensive high-resolution swath bathymetry data (e.g. MCA's Civil Hydrography Programme) provides an unprecedented opportunity to characterize the processes which formed, and actively govern the physical seabed environment. Mapping seabed geomorphology is an effective means to describe individual, or groups of features whose form and other physical attributes (e.g. symmetry) may be used to distinguish feature origin. Swath bathymetry also provides added and renewed value to other data types (e.g. grab samples, legacy seismic data). In such cases the geomorphic evidence may be expanded to make inferences on the evolution of seabed features as well as their association with the underlying geology and other environmental variables/events over multiple timescales. Classifying seabed geomorphology is not particularly innovative or groundbreaking. Terrestrial geomorphology is of course a well established field of science, and within the marine environment for example, mapping submarine glacial landforms has probably become the most reliable method to reconstruct the extent and dynamics of past ice-sheets. What is novel here, and we believe useful/necessary for a survey organization, is to standardise the geomorphological classification scheme such that it is applicable to multiple and diverse environments. The classification scheme should be sufficiently detailed and interpretive to be informative, but not so detailed that we over-interpret or become mired in disputed feature designations or definitions. We plan to present the maps at 1:50k scale with the intention that these maps will be 'enabling' resources for research, educational, commercial, and policy purposes, much like the existing 1:250k map series. We welcome feedback on the structure and content of the proposed classification scheme, as well as the anticipated value to respective user communities.
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully by combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merging them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of the time series and the same shape-based algorithm.
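The core merging idea, counting how often two genes are co-clustered across replicates, can be sketched compactly; the toy labels, co-occurrence threshold, and greedy grouping below are my assumptions, not the paper's exact procedure.

```python
import numpy as np

# Cluster labels for the same six genes from three replicate time series
# (labels are arbitrary per replicate; only co-membership matters).
labels_per_replicate = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 0, 1, 2, 2, 2],
    [1, 1, 0, 0, 2, 2],
])
n_reps, n_genes = labels_per_replicate.shape

# Co-occurrence frequency matrix over all replicates.
co = np.zeros((n_genes, n_genes))
for rep in labels_per_replicate:
    co += (rep[:, None] == rep[None, :])
co /= n_reps

# Merge: group genes whose co-occurrence meets a threshold (assumed 2/3).
threshold = 2 / 3
merged, assigned = [], set()
for g in range(n_genes):
    if g in assigned:
        continue
    group = {g} | {h for h in range(n_genes) if co[g, h] >= threshold}
    assigned |= group
    merged.append(sorted(group))

print(merged)  # -> [[0, 1], [2, 3], [4, 5]]
```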
75 FR 61114 - Airworthiness Directives; Rolls-Royce plc RB211-Trent 800 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-04
... Airworthiness Directives; Rolls-Royce plc RB211-Trent 800 Series Turbofan Engines AGENCY: Federal Aviation.... Fax: (202) 493-2251. Contact Rolls-Royce plc, P.O. Box 31, Derby, England, DE248BJ; telephone: 011-44... proposed AD, for Rolls- Royce plc RB211-Trent 800 series turbofan engines. That proposed AD would have...
75 FR 55459 - Airworthiness Directives; Pratt & Whitney (PW) PW4000 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-13
... Airworthiness Directives; Pratt & Whitney (PW) PW4000 Series Turbofan Engines AGENCY: Federal Aviation..., PW4152, PW4156A, PW4158, PW4164, PW4168, PW4168A, PW4460, and PW4462 turbofan engines. This AD requires... series turbofan engines. We published the proposed AD in the Federal Register on March 25, 2010 (75 FR...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-29
... Airworthiness Directives; Rolls-Royce plc RB211-Trent 500, 700, and 800 Series Turbofan Engines AGENCY: Federal... the final stages of approach. The investigation of the incident has established that, under certain...), with a proposed AD. The proposed AD applies to Rolls-Royce plc RB211-Trent 500, 700, and 800 series...
Support vector machines for TEC seismo-ionospheric anomalies detection
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-02-01
Using time series prediction methods, it is possible to track the behavior of earthquake precursors and to announce early warnings when the difference between the predicted value and the observed value exceeds a predefined threshold. Support Vector Machines (SVMs) are widely used due to their many advantages for classification and regression tasks. This study investigates the Total Electron Content (TEC) time series using an SVM to detect seismo-ionospheric anomalous variations induced by three powerful earthquakes: Tohoku (11 March 2011), Haiti (12 January 2010) and Samoa (29 September 2009). The durations of the TEC time series datasets are 49, 46 and 71 days for the Tohoku, Haiti and Samoa earthquakes, respectively, each at a time resolution of 2 h. In the case of the Tohoku earthquake, the results show that the difference between the value predicted by the SVM method and the observed value reaches its maximum (129.31 TECU) at earthquake time, during a period of high geomagnetic activity. The SVM method detected a considerable number of anomalous occurrences 1 and 2 days prior to the Haiti earthquake and 1 and 5 days before the Samoa earthquake, in periods of low geomagnetic activity. To show that the method behaves sensibly on both non-event and event TEC data, i.e., to perform null-hypothesis tests in which the method is also calibrated, the same period of data from the year before the Samoa earthquake was taken into account. The TEC anomalies detected using the SVM method were also compared to previous results (Akhoondzadeh and Saradjian, 2011; Akhoondzadeh, 2012) obtained from the mean, median, wavelet and Kalman filter methods. The SVM-detected anomalies are similar to those detected using the previous methods. It can be concluded that the SVM can be a suitable learning method to detect novelty changes in a nonlinear time series such as the variations of earthquake precursors.
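A hedged sketch of this predict-and-threshold scheme with scikit-learn's SVR on a synthetic diurnal TEC-like series; the window length, kernel, and k-sigma threshold are my assumptions, not the paper's calibration.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic TEC-like series at 2-h resolution with one injected anomaly
# standing in for a precursor signature.
rng = np.random.default_rng(4)
t = np.arange(600)
tec = 20 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
tec[450] += 12

# Predict each value from the previous w samples.
w = 12
X = np.array([tec[i:i + w] for i in range(len(tec) - w)])
y = tec[w:]

svr = SVR(kernel="rbf", C=10.0).fit(X, y)
resid = y - svr.predict(X)

# Flag observations deviating from the prediction by more than k sigma.
k = 4.0
flags = np.where(np.abs(resid) > k * resid.std())[0] + w
print("anomalous sample indices:", flags)  # should include index 450
```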
Gaussian mixture clustering and imputation of microarray data.
Ouyang, Ming; Welsh, William J; Georgopoulos, Panos
2004-04-12
In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed the process to be optimized while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows estimation of calibrator uncertainty for the optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
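The Monte Carlo idea is straightforward to sketch: draw each step of the value-transfer chain from its error distribution and read the combined uncertainty off the simulated values. The 3.7% reference uncertainty is from the abstract; the per-step CVs below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
true_value = 1.0                                   # arbitrary units

# Value-transfer chain: reference material -> master -> product calibrator.
ref = true_value * (1 + rng.normal(0, 0.037, n))   # CRM470-like reference, 3.7% CV
master = ref * (1 + rng.normal(0, 0.005, n))       # transfer step 1 (assumed 0.5% CV)
product = master * (1 + rng.normal(0, 0.005, n))   # transfer step 2 (assumed 0.5% CV)

total_cv = product.std() / product.mean()
added_cv = np.sqrt(total_cv**2 - 0.037**2)         # component added by the process
print(f"total CV: {total_cv:.3%}, added by value transfer: {added_cv:.3%}")
```

With these assumed step CVs the added component comes out near 0.7%, consistent with the abstract's "<0.8%" finding.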
Gonzalo, Jed D; Graaf, Deanna; Ahluwalia, Amarpreet; Wolpaw, Dan R; Thompson, Britta M
2018-03-21
After emphasizing the biomedical and clinical sciences for over a century, US medical schools are expanding experiential roles that allow students to learn about health care delivery while also adding value to patient care. After developing a program in which all first-year medical students are integrated into interprofessional care teams to contribute to patient care, the authors use a diffusion-of-innovations framework to explore and identify barriers, facilitators, and best practices for implementing value-added clinical systems learning roles. In 2016, the authors conducted 32 clinical-site observations, 29 one-on-one interviews with mentors, and four student focus-group interviews. Data were transcribed verbatim, and a thematic analysis was used to identify themes. The authors discussed drafts of the categorization scheme and agreed upon the results and quotations. Of 36 sites implementing the program, 17 (47%) remained, 8 (22%) significantly modified their roles, and 11 (31%) withdrew from the program. Identified strategies for implementing value-added roles included student education, patient characteristics, patient selection methods, activities performed, and resources. Six themes influencing program implementation and maintenance included: (1) educational benefit, (2) value added to patient care from student work, (3) mentor time and site capacity, (4) student engagement, (5) the working relationship between school, site, and students, and (6) students' continuity at the site. Health systems science is an emerging focus for medical schools, and educators are challenged to design practice-based roles that enhance education and add value to patient care. Health professions schools implementing value-added roles will need to invest resources and strategize about best practices to guide their efforts.
Lean manufacturing analysis to reduce waste on production process of fan products
NASA Astrophysics Data System (ADS)
Siregar, I.; Nasution, A. A.; Andayani, U.; Sari, R. M.; Syahputri, K.; Anizar
2018-02-01
This research is based on a case study at an electrical company. One of the products studied is the fan; when its production process runs, there is time that is not value-added, including inefficient movement of raw materials and fan molding components. This study aims to reduce waste, or non-value-added activities, and to shorten the total lead time using value stream mapping. The lean manufacturing methods used to analyze and reduce the non-value-added activities are the value stream mapping analysis tools, process activity mapping with 5W1H, and the 5 whys. The research shows that non-value-added activities in the fan production process account for 647.94 minutes of the total lead time of 725.68 minutes, so the process cycle efficiency is still very low at 11% (77.74 value-added minutes out of 725.68). The estimated improvements show a decrease in total lead time to 340.9 minutes and a higher process cycle efficiency of 24%, indicating a better production process.
2016-01-01
Multivariate calibration (MVC) and near-infrared (NIR) spectroscopy have demonstrated potential for rapid analysis of melamine in various dairy products. However, the practical application of ordinary MVC can be largely restricted because the prediction of a new sample from an uncalibrated batch would be subject to a significant bias due to matrix effect. In this study, the feasibility of using NIR spectroscopy and the standard addition (SA) net analyte signal (NAS) method (SANAS) for rapid quantification of melamine in different brands/types of milk powders was investigated. In SANAS, the NAS vector of melamine in an unknown sample as well as in a series of samples added with melamine standards was calculated and then the Euclidean norms of series standards were used to build a straightforward univariate regression model. The analysis results of 10 different brands/types of milk powders with melamine levels 0~0.12% (w/w) indicate that SANAS obtained accurate results with the root mean squared error of prediction (RMSEP) values ranging from 0.0012 to 0.0029. An additional advantage of NAS is to visualize and control the possible unwanted variations during standard addition. The proposed method will provide a practically useful tool for rapid and nondestructive quantification of melamine in different brands/types of milk powders. PMID:27525154
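The standard-addition step described above has a simple geometric core: the Euclidean norms of the melamine NAS vectors grow linearly with the added standard, so the unknown concentration is recovered from the x-intercept of the fit. The numbers below are synthetic, not the paper's data, and the linear NAS response is an assumption of the sketch.

```python
import numpy as np

added = np.array([0.00, 0.02, 0.04, 0.06])  # % (w/w) melamine added as standards
unknown = 0.05                               # true content of the sample (synthetic)
nas_norm = 3.0 * (unknown + added)           # assumed linear NAS-norm response

slope, intercept = np.polyfit(added, nas_norm, 1)
estimate = intercept / slope                 # standard-addition extrapolation
print(f"estimated melamine content: {estimate:.3f} % (w/w)")
```

Because the regression is built inside each unknown sample's own matrix, the matrix effect that biases ordinary multivariate calibration cancels out.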
Coutinho, Artur M.N.; Porto, Fábio H.G.; Zampieri, Poliana F.; Otaduy, Maria C.; Perroco, Tíbor R.; Oliveira, Maira O.; Nunes, Rafael F.; Pinheiro, Toulouse Leusin; Bottino, Cassio M.C.; Leite, Claudia C.; Buchpiguel, Carlos A.
2015-01-01
Reduction of regional brain glucose metabolism (rBGM) measured by [18F]FDG-PET in the posterior cingulate cortex (PCC) has been associated with a higher conversion rate from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Magnetic Resonance Spectroscopy (MRS) is a potential biomarker that has disclosed NAA/mI reductions within the PCC in both MCI and AD. Studies investigating the relationships between the two modalities are scarce. Objective: To evaluate differences and possible correlations between the findings of rBGM and NAA/mI in the PCC of individuals with AD, MCI, and cognitively normal volunteers. Methods: Patients diagnosed with AD (N=32) or MCI (N=27) and cognitively normal older adults (CG, N=28) were submitted to [18F]FDG-PET and MRS to analyze the PCC. The two methods were compared and possible correlations between the modalities were investigated. Results: The AD group exhibited rBGM reduction in the PCC when compared to the CG, but the MCI group did not. MRS revealed lower NAA/mI values in the AD group compared to the CG but not in the MCI group. A positive correlation between rBGM and NAA/mI in the PCC was found. NAA/mI reduction in the PCC differentiated AD patients from control subjects with an area under the ROC curve of 0.70, while [18F]FDG-PET yielded a value of 0.93. Conclusion: rBGM and NAA/mI in the PCC were positively correlated in patients with MCI and AD. [18F]FDG-PET had greater accuracy than MRS for discriminating AD patients from controls. PMID:29213988
Suss, Samuel; Bhuiyan, Nadia; Demirli, Kudret; Batist, Gerald
2017-06-01
Outpatient cancer treatment centers can be considered as complex systems in which several types of medical professionals and administrative staff must coordinate their work to achieve the overall goals of providing quality patient care within budgetary constraints. In this article, we use analytical methods that have been successfully employed for other complex systems to show how a clinic can simultaneously reduce patient waiting times and non-value added staff work in a process that has a series of steps, more than one of which involves a scarce resource. The article describes the system model and the key elements in the operation that lead to staff rework and patient queuing. We propose solutions to the problems and provide a framework to evaluate clinic performance. At the time of this report, the proposals are in the process of implementation at a cancer treatment clinic in a major metropolitan hospital in Montreal, Canada.
Refined composite multiscale weighted-permutation entropy of financial time series
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian
2018-04-01
For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values fluctuate strongly under slight variations of the data locations and show a significant distinction only for different lengths of time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with those of other methods on both synthetic data and financial time series shows that RCMWPE not only inherits the advantages of MWPE but also exhibits lower sensitivity to the data locations, more stability, and much less dependence on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on daily price return series from Asian and European stock markets. There are significant differences between Asian and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
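A minimal single-scale sketch of weighted-permutation entropy, the building block named above; the refined composite multiscale variant additionally coarse-grains the series at each scale and averages pattern probabilities over starting offsets, which this sketch omits.

```python
import numpy as np
from math import factorial
from itertools import permutations

def weighted_permutation_entropy(x, m=3, delay=1):
    """Weighted permutation entropy (normalized to [0, 1]) of a 1-D series.

    Each length-m embedding vector contributes its ordinal pattern, weighted
    by the vector's variance so that amplitude information is retained.
    """
    patterns = {p: 0.0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * delay):
        v = x[i:i + m * delay:delay]
        patterns[tuple(np.argsort(v))] += np.var(v)  # amplitude weight
    w = np.array(list(patterns.values()))
    p = w[w > 0] / w.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

rng = np.random.default_rng(6)
noise = rng.normal(size=2000)                      # maximally irregular
wave = np.sin(np.linspace(0, 20 * np.pi, 2000))    # highly regular

print("WPE noise:", weighted_permutation_entropy(noise))  # near 1
print("WPE sine: ", weighted_permutation_entropy(wave))   # well below 1
```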
NASA Astrophysics Data System (ADS)
Fink, G.; Koch, M.
2010-12-01
An important aspect of water resources and hydrological engineering is the assessment of hydrological risk due to the occurrence of extreme events, e.g., droughts or floods. When dealing with the latter, as is the focus here, the classical methods of flood frequency analysis (FFA) are usually used for the proper dimensioning of a hydraulic structure, in order to bring the flood risk down to an acceptable level. FFA is based on extreme value statistics. Despite the progress of methods in this branch of science, the selection and fitting of an appropriate distribution function still remains a challenge, particularly when certain underlying assumptions of the theory are not met in real applications. This is the case, for example, when the stationarity condition for a random flood time series is no longer satisfied, as may happen when long-term hydrological impacts of future climate change are considered. The objective here is to verify the applicability of classical (stationary) FFA to predicted flood time series in the Fulda catchment in central Germany, as they may occur in the wake of climate change during the 21st century. These discharge time series at the outlet of the Fulda basin have been simulated with a distributed hydrological model (SWAT) forced by predicted climate variables from a regional climate model for Germany (REMO). From the simulated future daily time series, annual maximum (extreme) values are computed and analyzed for the purpose of risk evaluation. Although the estimated 21st-century extreme flood series of the Fulda river turn out to be only mildly non-stationary, seemingly alleviating the need for further action and concern, a more detailed analysis of the risk, as quantified, for example, by the return period, shows non-negligible differences in the calculated risk levels. This could be verified by employing a new method, the so-called flood series maximum analysis (FSMA) method, which consists of the stochastic simulation of numerous trajectories of a stochastic process with a given GEV distribution over a certain length of time (larger than the desired return period). The maximum value of each trajectory is then computed, and all of these maxima are used to determine the empirical distribution of the maximum series. Through graphical inversion of this distribution function, the size of the design flood for a given risk (quantile) and a given service life can be inferred. The results of numerous simulations show that for stationary flood series, the new FSMA method results, as expected, in nearly identical risk values to the classical FFA approach. However, once the flood time series becomes slightly non-stationary, for the reasons discussed, and regardless of whether the trend is increasing or decreasing, large differences in the computed risk values for a given design flood occur. In other words, for the same risk, the new FSMA method would lead to different design flood values for a hydraulic structure than the classical FFA method. This, in turn, could lead to cost savings in the realization of a hydraulic project.
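A hedged sketch of the FSMA simulation loop described above; the GEV parameters, design life, and accepted risk are illustrative stand-ins (and note that scipy's shape-parameter sign convention may differ from the hydrological one).

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
shape, loc, scale = -0.1, 300.0, 80.0      # GEV parameters (illustrative)
life, n_traj = 100, 20_000                 # design life (years), no. of trajectories

# One row per simulated trajectory of annual maxima over the design life.
annual_maxima = genextreme.rvs(shape, loc=loc, scale=scale,
                               size=(n_traj, life), random_state=rng)
lifetime_max = annual_maxima.max(axis=1)   # the maximum of each trajectory

risk = 0.10                                # accepted exceedance risk over the life
design_flood = np.quantile(lifetime_max, 1.0 - risk)
print(f"design flood for {risk:.0%} lifetime risk: {design_flood:.0f} m^3/s")
```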
On demand processing of climate station sensor data
NASA Astrophysics Data System (ADS)
Wöllauer, Stephan; Forteva, Spaska; Nauss, Thomas
2015-04-01
Large sets of climate stations with several sensors produce large amounts of fine-grained time series data. To gain value from these data, further processing and aggregation are needed. We present a flexible system to process the raw data on demand. Several aspects need to be considered so that scientists can use the processed data conveniently for their specific research interests. First of all, it is not feasible to pre-process the data in advance because of the great variety of ways it can be processed. Therefore, in this approach only the raw measurement data are archived in a database. When a scientist requires a time series, the system processes the required raw data according to the user-defined request. Depending on the type of measurement sensor, some data validation is needed, because climate station sensors may produce erroneous data. Currently, three validation methods are integrated in the on-demand processing system and are optionally selectable. The most basic validation method checks whether measurement values lie within a predefined range of possible values. For example, an air temperature sensor may be assumed to measure values within a range of -40 °C to +60 °C; values outside this range are considered measurement errors by this validation method and consequently rejected. Another validation method checks for outliers in the stream of measurement values by defining a maximum change rate between subsequent values. The third validation method compares measurement data to the average values of neighboring stations and rejects measurement values with a high variance. These quality checks are optional, because extreme climatic values in particular may be valid yet rejected by some quality-check method. Another important task is the preparation of measurement data in terms of time. The observed stations measure values at intervals of minutes to hours, but scientists often need a coarser temporal resolution (days, months, years). Therefore, the time aggregation interval is selectable for the processing. For some use cases it is desirable that the resulting time series be as continuous as possible. To meet these requirements, the processing system includes techniques to fill gaps of missing values by interpolating measurement values with data from adjacent stations, using available contemporaneous measurements from the respective stations as training datasets. Alongside the processing of sensor values, we created interactive visualization techniques to get a quick overview of a large amount of archived time series data.
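The first two validation methods described above (range check and maximum change rate) are easy to sketch; the limits below are illustrative values for an air-temperature sensor, following the example in the text.

```python
import numpy as np

def range_check(values, lo=-40.0, hi=60.0):
    """Reject physically impossible values (replaced by NaN)."""
    return np.where((values < lo) | (values > hi), np.nan, values)

def spike_check(values, max_step=5.0):
    """Reject values that jump too fast from the previous valid measurement."""
    out = values.copy()
    for i in range(1, len(out)):
        prev = out[i - 1]
        if not np.isnan(prev) and abs(out[i] - prev) > max_step:
            out[i] = np.nan
    return out

raw = np.array([12.1, 12.3, 99.0, 12.6, 25.0, 12.9])  # 99.0 and 25.0 are suspect
print(spike_check(range_check(raw)))
# -> [12.1 12.3  nan 12.6  nan 12.9]
```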
Solving ODE Initial Value Problems With Implicit Taylor Series Methods
NASA Technical Reports Server (NTRS)
Scott, James R.
2000-01-01
In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_(n+1), we expand a series about the intermediate point t_(n+mu) := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
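For the test problem y' = lambda*y all derivatives are known (y^(k) = lambda^k * y), so expanding a degree-k Taylor series about t_(n+mu) and evaluating it at both t_n and t_(n+1) yields a one-step rational update. The sketch below is my illustrative reading of that expansion-about-an-intermediate-point idea, not the paper's general method (which handles arbitrary ODEs and local extrapolation); mu = 1, k = 1 reproduces backward Euler.

```python
import numpy as np
from math import factorial

def taylor_step_factor(z, mu, k):
    """Amplification factor for y' = lambda*y with z = lambda*h.

    A degree-k Taylor expansion of y about t_(n+mu) gives
    y_n ~ Y * sum_j (-mu*z)^j / j!  and  y_(n+1) ~ Y * sum_j ((1-mu)*z)^j / j!,
    so y_(n+1) = (num/den) * y_n.
    """
    num = sum(((1 - mu) * z) ** j / factorial(j) for j in range(k + 1))
    den = sum((-mu * z) ** j / factorial(j) for j in range(k + 1))
    return num / den

lam, h, steps = -5.0, 0.1, 20
for mu, k in [(1.0, 1), (0.5, 2)]:   # (1,1) is backward Euler
    y = 1.0
    for _ in range(steps):
        y *= taylor_step_factor(lam * h, mu, k)
    exact = np.exp(lam * h * steps)
    print(f"mu={mu}, k={k}: y={y:.6f}, exact={exact:.6f}")
```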
NASA Astrophysics Data System (ADS)
Yang, Peng; Xia, Jun; Zhan, Chesheng; Zhang, Yongyong; Hu, Sheng
2018-04-01
In this study, the temporal variations of the standardized precipitation index (SPI) were analyzed at different scales in Northwest China (NWC). Discrete wavelet transform (DWT) was used in conjunction with the Mann-Kendall (MK) test. This study also investigated the relationships between original precipitation and different periodic components of the SPI series, with datasets spanning 55 years (1960-2014). The results showed that, with the exception of the annual and summer SPI in the Inner Mongolia Inland Rivers Basin (IMIRB), the spring SPI in the Qinghai Lake Rivers Basin (QLRB), and the spring SPI in the Central Asia Rivers Basin (CARB), the SPI showed an increasing trend in the other regions and time series. In the spring, summer, and autumn series, although the MK trend test was insignificant in most areas, precipitation showed an increasing trend. Meanwhile, the SPI series in most subbasins of NWC displayed a turning point in 1980-1990, with significant increases after 2000. Additionally, there was a significant difference between the trend of the original SPI series and that of the largest approximations. The annual and seasonal SPI series were composed of short periodicities of less than a decade. The MK value increased as multiple D components (and approximations) were added, and the MK value of the combined series was in harmony with that of the original series. Additionally, the major trend of the annual SPI in NWC was related to four kinds of climate indices (Atlantic Oscillation [AO], North Atlantic Oscillation [NAO], Pacific Decadal Oscillation [PDO], and El Nino-Southern Oscillation index [ENSO/NINO]), especially ENSO.
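A minimal implementation of the Mann-Kendall trend test used above (without the tie correction), applied to a synthetic 55-year SPI-like series:

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns S and the Z statistic."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z

rng = np.random.default_rng(8)
spi = 0.01 * np.arange(55) + rng.normal(0, 0.5, 55)  # weak upward trend + noise
s, z = mann_kendall(spi)
print(f"S = {s:.0f}, Z = {z:.2f}  (|Z| > 1.96 -> significant at the 5% level)")
```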
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-05
...-0993; Directorate Identifier 2010-NE-08-AD] RIN 2120-AA64 Airworthiness Directives; Rolls-Royce plc... Federal holidays. Fax: (202) 493-2251. Contact Rolls-Royce plc, P.O. Box 31, Derby, DE24 8BJ, United... examining the MCAI in the AD docket. Relevant Service Information Rolls-Royce plc has issued Alert Service...
NASA Astrophysics Data System (ADS)
Chakrabarti, R.; Yogesh, V.
2018-01-01
We study the nonclassicality of the evolution of a superposition of an arbitrary number of photon-added squeezed coherent Schrödinger cat states in a nonlinear Kerr medium. The nonlinearity of the medium gives rise to periodicities of quantities such as the Wehrl entropy S_Q and the negativity δ_W of the W-distribution, and a series of local minima of these quantities arises at rational submultiples of the said period. At these local minima the evolving state coincides with transient Yurke-Stoler-type photon-added squeezed kitten states, which, for the choice of phase space variables reflecting their macroscopic nature, show extremely short-lived behavior. Proceeding further, we provide closed-form tomograms, which furnish an alternate description of these short-lived states. The increasing complexity of the kitten formations induces more interference terms, which trigger more quantumness of the corresponding states. The nonclassical depth of the photon-added squeezed kitten states is observed to be of the maximum possible value. Employing the Lindblad master equation approach, we study the amplitude and phase damping models for the initial state considered here. In the phase damping model the nonclassicality is not completely erased even in the long-time limit, when dynamical quantities such as the negativity δ_W and the tomogram assume nontrivial asymptotic values.
Lactic acid production with undefined mixed culture fermentation of potato peel waste.
Liang, Shaobo; McDonald, Armando G; Coats, Erik R
2014-11-01
Potato peel waste (PPW), a zero-value byproduct of food processing plants, contains large quantities of starch, non-starch polysaccharides, lignin, protein, and lipid. As a promising carbon source, PPW can be converted into value-added bioproducts through a simple fermentation process using undefined mixed cultures inoculated from wastewater treatment plant sludge. A series of non-pH-controlled batch fermentations under different conditions, such as pretreatment process, enzymatic hydrolysis, temperature, and solids loading, was studied. Lactic acid (LA) was the major product, followed by acetic acid (AA) and ethanol, under fermentation conditions without added hydrolytic enzymes. The maximum yields of LA, AA, and ethanol were, respectively, 0.22 g g(-1), 0.06 g g(-1), and 0.05 g g(-1). The highest LA concentration of 14.7 g L(-1) was obtained from a bioreactor with an initial solids loading of 60 g L(-1) at 35°C.
Influence of propane additives on the detonation characteristics of H2-air mixtures
NASA Astrophysics Data System (ADS)
Cheng, Guanbing; Bauer, Pascal; Zitoun, Ratiba
2014-03-01
Hydrogen is more and more considered as a potential fuel for propulsion applications. However, due to its low ignition energy and wide flammability limits, H2-air mixtures raise a concern in terms of safety. This aspect can be partly addressed by adding an alkane to these mixtures, which plays the role of an inhibitor. The present paper provides data on such binary fuel-air mixtures, where various amounts of propane are added to hydrogen. The behavior of the corresponding mixtures, in terms of detonation characteristics and other fundamental properties such as the cell size of the detonation front and the induction delay, is presented and discussed for a series of equivalence ratios and propane additions. The experimental detonation velocity is in good agreement with calculated theoretical Chapman-Jouguet values. Based on soot track records, the cell size λ is measured, whereas the induction length L_i is derived using a GRI-Mech kinetic mechanism. These data allow a value of the coefficient K = λ/L_i to be provided.
De Bhowmick, Goldy; Sarmah, Ajit K; Sen, Ramkrishna
2018-01-01
A constant shift of society's dependence from petroleum-based energy resources towards renewable, biomass-based ones has been key to tackling greenhouse gas emissions. Effective use of biomass feedstock, particularly lignocellulosic, has lately gained worldwide attention. Lignocellulosic biomass, however potent a bioresource, cannot be a sustainable alternative if the production cost is too high and/or the availability is limited. Recycling lignocellulosic biomass from various sources into value-added products such as bio-oil, biochar, or other biobased chemicals in a biorefinery model is a sensible idea. A combination of integrated conversion techniques along with process integration is suggested as a sustainable approach. A 'series concept' coupling intermittent dark/photo fermentation with co-cultivation of microalgae is conceptualized. While the cost of downstream processing for a single type of feedstock would be high, combining different feedstocks and integrating them in a biorefinery model would lessen the production cost and reduce CO2 emissions.
Economic implications of current systems
NASA Technical Reports Server (NTRS)
Daniel, R. E.; Aster, R. W.
1983-01-01
The primary goals of this study are to estimate the value of R&D to photovoltaic (PV) metallization systems cost, and to provide a method for selecting an optimal metallization method for any given PV system. The value-added cost and relative electrical performance of 25 state-of-the-art (SOA) and advanced metallization system techniques are compared.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator of whether the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with the Fortran code 'Sequitor'.
METHOD FOR DISSOLVING ZIRCONIUM-URANIUM COMPOSITIONS
Gens, T.A.
1961-07-18
A method is described for treating a zirconium-uranium composition to form a stable solution from which uranium and other values may be extracted, by contacting the composition with at least a 4 molar aqueous solution of ammonium fluoride at a temperature of about 100 deg C, adding a peroxide, in incremental amounts, to the heated solution throughout the period of dissolution until all of the uranium is converted to soluble uranyl salt, adding nitric acid to the resultant solution to form a solvent extraction feed solution and convert the uranyl salt to a solvent-extractable state, and thereafter recovering the uranium and other desired values from the feed solution by solvent extraction.
75 FR 50877 - Airworthiness Directives; Rolls-Royce plc RB211-524C2 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-18
... Airworthiness Directives; Rolls-Royce plc RB211-524C2 Series Turbofan Engines AGENCY: Federal Aviation.... The FAA amends Sec. 39.13 by adding the following new AD: 2010-17-13 Rolls-Royce plc (Formerly Rolls...) None. Applicability (c) This AD applies to Rolls-Royce plc (RR) model RB211-524C2-19 and RB211-524C2-B...
The value of vital sign trends for detecting clinical deterioration on the wards
Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P
2016-01-01
Aim Early detection of clinical deterioration on the wards may improve outcomes, yet most early warning scores utilize only a patient's current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Methods Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). Results A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC −0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Conclusion Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. PMID:26898412
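As an illustration of the kind of comparison described, the sketch below (not the study's code; synthetic data and hypothetical feature names) fits one model on the current value alone and one with slope and minimum features added, then compares validation AUCs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
slope = rng.normal(0, 1, n)        # hypothetical trend over recent observations
minimum = rng.normal(0, 1, n)      # hypothetical minimum of recent observations
current = rng.normal(0, 1, n) + 0.5 * slope
y = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * slope + 0.6 * minimum)))).astype(int)

half = n // 2                      # simple derivation/validation split
for name, X in [("current only", np.c_[current]),
                ("current + trends", np.c_[current, slope, minimum])]:
    model = LogisticRegression().fit(X[:half], y[:half])
    auc = roc_auc_score(y[half:], model.predict_proba(X[half:])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```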
Shang, Songhao
2012-01-01
Crop water requirement is essential for agricultural water management, but it is usually available only for whole crop growing stages. Crop water requirement values at monthly or weekly scales are more useful for water management. A method was proposed to downscale crop coefficient and water requirement from growing-stage to substage scales, based on interpolation of accumulated crop and reference evapotranspiration calculated from their values in growing stages. The proposed method was compared with two straightforward methods, namely, direct interpolation of crop evapotranspiration and of crop coefficient, assuming that stage-average values occur in the middle of the stage. These methods were tested with a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable, the downscaled crop evapotranspiration series being very close to the simulated one. PMID:22619572
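A minimal sketch of the accumulate-then-interpolate idea (my reading of the abstract, not the authors' code; stage boundaries and totals are hypothetical). The paper interpolates accumulated crop and reference evapotranspiration and takes their ratio for the crop coefficient; only the evapotranspiration part is sketched here.

```python
import numpy as np

# Hypothetical stage boundaries (day of season) and per-stage crop ET totals (mm)
stage_ends = np.array([0, 25, 60, 110, 140])
stage_etc = np.array([30.0, 95.0, 210.0, 70.0])

cum_etc = np.concatenate([[0.0], np.cumsum(stage_etc)])  # cumulative at boundaries
days = np.arange(0, 141)
cum_daily = np.interp(days, stage_ends, cum_etc)   # interpolate the cumulative curve
etc_daily = np.diff(cum_daily)                     # difference back to daily values

# Aggregate to any substage scale, e.g. 7-day totals
weekly = etc_daily[:140].reshape(20, 7).sum(axis=1)
print(weekly.round(1))
```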
A method of semi-quantifying β-AP in brain PET-CT 11C-PiB images.
Jiang, Jiehui; Lin, Xiaoman; Wen, Junlin; Huang, Zhemin; Yan, Zhuangzhi
2014-01-01
Alzheimer's disease (AD) is a common health problem in elderly populations. Positron emission tomography-computed tomography (PET-CT) 11C-PiB imaging of amyloid-β peptide (β-AP) is an advanced method for diagnosing AD at an early stage. In practice, however, radiologists lack a standardized value with which to semi-quantify β-AP. This paper proposes such a standardized value, SVβ-AP, which measures the mean ratio between the dimensions of β-AP areas in PET and CT images. A computer-aided diagnosis (CAD) approach is also proposed to compute SVβ-AP. A simulation experiment was carried out to pre-test the technical feasibility of the CAD approach and of SVβ-AP; the results showed that it is technically feasible.
Improved methods of estimating critical indices via fractional calculus
NASA Astrophysics Data System (ADS)
Bandyopadhyay, S. K.; Bhattacharyya, K.
2002-05-01
Efficiencies of certain methods for the determination of critical indices from power-series expansions are shown to be considerably improved by a suitable implementation of fractional differentiation. In the context of the ratio method (RM), the kinship of the modified strategy with the ad hoc `shifted' RM is established and the advantages are demonstrated. Further, in the course of the estimation of critical points, a significant improvement of the convergence properties of diagonal Padé approximants is observed on several occasions by invoking this concept. Test calculations are performed on (i) various Ising spin-1/2 lattice models for susceptibility series attended with a ferromagnetic phase transition, (ii) complex model situations involving confluent and antiferromagnetic singularities and (iii) the chain-generating functions for self-avoiding walks on triangular, square and simple cubic lattices.
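For readers unfamiliar with the ratio method the abstract builds on, here is a minimal sketch of the standard textbook RM (not the paper's fractional-differentiation variant): for f(x) = Σ a_n xⁿ ~ (1 − x/x_c)^(−γ), the coefficient ratios behave as a_n/a_{n−1} ≈ (1/x_c)[1 + (γ−1)/n], so a linear fit of the ratios against 1/n estimates both x_c and γ.

```python
import numpy as np

# Synthetic test series: coefficients of (1 - x/xc)**(-gamma), generated
# via the exact recurrence a_n = a_{n-1} * (n + gamma - 1) / (n * xc)
xc, gamma, N = 0.5, 1.75, 40
a = [1.0]
for n in range(1, N):
    a.append(a[-1] * (n + gamma - 1) / (n * xc))

n = np.arange(1, N)
ratios = np.array(a[1:]) / np.array(a[:-1])        # a_n / a_{n-1}
slope, intercept = np.polyfit(1.0 / n, ratios, 1)  # r_n = (1/xc)(1 + (gamma-1)/n)
xc_est = 1.0 / intercept
gamma_est = 1.0 + slope * xc_est
print(f"x_c ~ {xc_est:.4f}, gamma ~ {gamma_est:.4f}")  # recovers 0.5 and 1.75
```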
NASA Astrophysics Data System (ADS)
Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles
2008-01-01
The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a newer method that can achieve the same objective based on the segmentation of the spectral bands of the image, creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
The assessment of road infrastructure exposure to extreme weather events is of major importance for scientists and practitioners alike. In this study, we compare the different extreme value approaches and fitting methods with respect to their value for assessing the exposure of transport networks to extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series (PDS) over the standardly used annual maxima series (AMS) in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, results question the general assumption of the threshold excess approach (employing PDS) being superior to the block maxima approach (employing AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas an opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may outperform the possible gain of information from including additional extreme events by far. This effect was visible from neither the square-root criterion nor standardly used graphical diagnosis (mean residual life plot) but rather from a direct comparison of AMS and PDS in combined quantile plots. We therefore recommend performing AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, not only in cases where threshold selection and dependency introduces biases to the PDS approach but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of extreme events we recommend the use of conditional performance measures that focus on rare events only in addition to standardly used unconditional indicators. The findings of the study directly address road and traffic management but can be transferred to a range of other environmental variables including meteorological and hydrological quantities.
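A minimal sketch of the two approaches being compared, for orientation (illustrative only: scipy's maximum-likelihood fits rather than the L-moment estimators the study favours; the threshold choice and data are made up):

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(2)
daily = rng.gumbel(20.0, 8.0, size=(30, 365))   # 30 synthetic years of daily values

# Block maxima (AMS) -> GEV fit
ams = daily.max(axis=1)
c, loc, scale = genextreme.fit(ams)
rl100_ams = genextreme.ppf(1 - 1/100, c, loc, scale)    # 100-year return level

# Threshold excesses (PDS) -> GPD fit
u = np.quantile(daily, 0.99)                    # hypothetical threshold choice
exc = daily[daily > u] - u
lam = exc.size / 30                             # mean exceedances per year
cg, locg, scaleg = genpareto.fit(exc, floc=0.0)
rl100_pds = u + genpareto.ppf(1 - 1/(lam * 100), cg, locg, scaleg)

print(f"AMS/GEV 100-yr level: {rl100_ams:.1f}, PDS/GPD 100-yr level: {rl100_pds:.1f}")
```

Comparing the two return-level curves directly, as the authors recommend via combined quantile plots, is what reveals threshold-induced biases that the usual diagnostics miss.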
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
... You may review copies of the referenced service information at the FAA, Engine..., February 26, 2009), for GE CF6-45 and CF6-50 series turbofan engines. That AD requires replacing LFCEN... that this proposed AD would affect 383 GE CF6-45 and CF6-50 series turbofan engines installed on...
Mihailovic, D T; Udovičić, V; Krmar, M; Arsenić, I
2014-02-01
We have suggested a complexity-measure-based method for studying the dependence of measured (222)Rn concentration time series on indoor air temperature and humidity. The method is based on the Kolmogorov complexity (KL). We have introduced (i) the sequence of the KL, (ii) the highest KL value in the sequence (KLM) and (iii) the KL of the product of time series. The observed loss of KLM complexity of the (222)Rn concentration time series can be attributed to the indoor air humidity that keeps the radon daughters in the air. © 2013 Published by Elsevier Ltd.
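The abstract does not give the estimator used; a common proxy for Kolmogorov complexity of a time series (stated here as an assumption, not the authors' implementation) is the compressed size of the series binarized about its median, as in the Lempel-Ziv tradition:

```python
import zlib
import numpy as np

def kolmogorov_complexity_proxy(x):
    """Crude KL proxy: compressed size of the series binarized about its
    median, normalized by sequence length (relative comparisons only)."""
    bits = (np.asarray(x) > np.median(x)).astype(np.uint8)
    payload = np.packbits(bits).tobytes()
    return len(zlib.compress(payload, 9)) / len(bits)

rng = np.random.default_rng(3)
print(kolmogorov_complexity_proxy(rng.normal(size=4096)))        # higher: random
print(kolmogorov_complexity_proxy(np.sin(np.arange(4096) / 8)))  # lower: regular
```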
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10-3, 1.11×10-7, and 5.50×10-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, suggesting a potential use for chaotic system analysis.
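Standard SampEn, for reference (a minimal sketch; the paper's q-generalization replaces the logarithm, and the q-logarithm form shown in the comment is my assumption of how such a rewrite looks, not taken from the paper):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Standard SampEn: -log(A/B), where B and A count template matches of
    lengths m and m+1 within tolerance r (in units of the series SD)."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return ((d <= tol).sum() - len(templates)) / 2   # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

def q_log(y, q):
    """Tsallis q-logarithm; reduces to log as q -> 1."""
    return np.log(y) if q == 1 else (y**(1 - q) - 1) / (1 - q)

# Assumed form of the q-generalization: qSampEn(q) = -ln_q(A/B), with
# qSDiff = qSampEn(original) - qSampEn(surrogate).
rng = np.random.default_rng(4)
print(sample_entropy(rng.normal(size=1000)))   # white noise: high SampEn
```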
Knowledge-based decision support for Space Station assembly sequence planning
NASA Astrophysics Data System (ADS)
1991-04-01
A complete Personal Analysis Assistant (PAA) for Space Station Freedom (SSF) assembly sequence planning consists of three software components: the system infrastructure, intra-flight value added, and inter-flight value added. The system infrastructure is the substrate on which software elements providing inter-flight and intra-flight value-added functionality are built. It provides the capability for building representations of assembly sequence plans and specification of constraints and analysis options. Intra-flight value-added provides functionality that will, given the manifest for each flight, define cargo elements, place them in the National Space Transportation System (NSTS) cargo bay, compute performance measure values, and identify violated constraints. Inter-flight value-added provides functionality that will, given major milestone dates and capability requirements, determine the number and dates of required flights and develop a manifest for each flight. The current project is Phase 1 of a projected two phase program and delivers the system infrastructure. Intra- and inter-flight value-added were to be developed in Phase 2, which has not been funded. Based on experience derived from hundreds of projects conducted over the past seven years, ISX developed an Intelligent Systems Engineering (ISE) methodology that combines the methods of systems engineering and knowledge engineering to meet the special systems development requirements posed by intelligent systems, systems that blend artificial intelligence and other advanced technologies with more conventional computing technologies. The ISE methodology defines a phased program process that begins with an application assessment designed to provide a preliminary determination of the relative technical risks and payoffs associated with a potential application, and then moves through requirements analysis, system design, and development.
Re-engineering the mission life cycle with ABC and IDEF
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Rackley, Michael; Karlin, Jay
1994-01-01
The theory behind re-engineering a business process is to remove the non-value-added activities, thereby lowering the process cost. In order to achieve this, one must be able to identify where the non-value-added elements are located, which is not a trivial task, because non-value-added elements are often hidden in the form of overhead and/or pooled resources. In order to isolate these non-value-added processes from among the other processes, one must first decompose the overall top-level process into lower layers of sub-processes. In addition, costing data must be assigned to each sub-process along with the value the sub-process adds towards the final product. IDEF0 is a Federal Information Processing Standard (FIPS) process-modeling tool that allows for this functional decomposition through structured analysis. In addition, it illustrates the relationship of the process and the value added to the product or service. The value-added portion is further defined in IDEF1X, an entity-relationship diagramming tool. The entity-relationship model is the blueprint of the product as it moves along the 'assembly line' and therefore relates all of the parts to each other and to the final product. It also relates the parts to the tools that produce the product and to all of the paperwork that is used in their acquisition. The use of IDEF therefore facilitates the use of Activity Based Costing (ABC). ABC is an essential method, in a high-variety, product-customizing environment, for facilitating rapid response to externally caused change. This paper describes the work being done in the Mission Operations Division to re-engineer the development and operation life cycle of Mission Operations Centers using these tools.
Functional decline in the elderly with MCI: Cultural adaptation of the ADCS-ADL scale.
Cintra, Fabiana Carla Matos da Cunha; Cintra, Marco Túlio Gualberto; Nicolato, Rodrigo; Bertola, Laiss; Ávila, Rafaela Teixeira; Malloy-Diniz, Leandro Fernandes; Moraes, Edgar Nunes; Bicalho, Maria Aparecida Camargos
2017-07-01
To translate, transculturally adapt and apply to Brazilian Portuguese the Alzheimer's Disease Cooperative Study - Activities of Daily Living (ADCS-ADL) scale as a cognitive screening instrument. We applied the back-translation method together with pretest and bilingual methods. The sample was composed of 95 elderly individuals and their caregivers. Thirty-two (32) participants were diagnosed as mild cognitive impairment (MCI) patients, 33 as Alzheimer's disease (AD) patients and 30 were considered cognitively normal individuals. Only small changes to the scale were required. The Cronbach alpha coefficient was 0.89. The scores were 72.9 for the control group, followed by MCI (65.1) and AD (55.9), with a p-value < 0.001. The area under the ROC curve was 0.89. With a cutoff point of 72, we observed a sensitivity of 86.2%, specificity of 70%, positive predictive value of 86.2%, negative predictive value of 70%, positive likelihood ratio of 2.9 and negative likelihood ratio of 0.2. The ADCS-ADL scale presents satisfactory psychometric properties for discriminating between MCI, AD and normal cognition.
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. In parallel with the consequent increase of convergence angle, the depth of focus has decreased severely and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local sample thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high-angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series applying the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
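A minimal sketch of the evaluation step described, on a synthetic stack (the acquisition and any calibration are outside its scope): compute the intensity standard deviation per defocus value and take the FWHM of the resulting curve as the thickness estimate.

```python
import numpy as np

def thickness_from_defocus_series(stack, defocus):
    """stack: (n_defocus, ny, nx) images; defocus: (n_defocus,) values in nm.
    Returns the FWHM (same units as defocus) of the sharpness curve."""
    s = stack.std(axis=(1, 2))        # image-intensity SD per defocus value
    s = s - s.min()                   # remove the background level
    half = s.max() / 2
    above = np.where(s >= half)[0]
    i0, i1 = above[0], above[-1]
    # linear interpolation of the two half-maximum crossings
    left = np.interp(half, [s[i0 - 1], s[i0]], [defocus[i0 - 1], defocus[i0]])
    right = np.interp(half, [s[i1 + 1], s[i1]], [defocus[i1 + 1], defocus[i1]])
    return right - left

# Synthetic demo: sharpness peaks at optimum defocus with a finite width
defocus = np.linspace(-40, 40, 81)
rng = np.random.default_rng(5)
stack = rng.normal(0, 1, (81, 64, 64)) * (1 + np.exp(-(defocus / 8) ** 2))[:, None, None]
print(f"estimated FWHM ~ {thickness_from_defocus_series(stack, defocus):.1f} nm")
```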
Unsteady load on an oscillating Kaplan turbine runner
NASA Astrophysics Data System (ADS)
Puolakka, O.; Keto-Tokoi, J.; Matusiak, J.
2013-02-01
A Kaplan turbine runner oscillating in turbine waterways is subjected to a varying hydrodynamic load. Numerical simulation of the related unsteady flow is time-consuming and research is very limited. In this study, a simplified method based on unsteady airfoil theory is presented for evaluation of the unsteady load for vibration analyses of the turbine shaft line. The runner is assumed to oscillate as a rigid body in spin and axial heave, and the reaction force is resolved into added masses and dampings. The method is applied on three Kaplan runners at nominal operating conditions. Estimates for added masses and dampings are considered to be of a magnitude significant for shaft line vibration. Moderate variation in the added masses and minor variation in the added dampings is found in the frequency range of interest. Reference results for added masses are derived by solving the boundary value problem for small motions of inviscid fluid using the finite element method. Good correspondence is found in the added mass estimates of the two methods. The unsteady airfoil method is considered accurate enough for design purposes. Experimental results are needed for validation of unsteady load analyses.
Novel platinum black electroplating technique improving mechanical stability.
Kim, Raeyoung; Nam, Yoonkey
2013-01-01
Platinum black microelectrodes are widely used as effective neural signal recording sensors. A simple fabrication process, high-quality signal recording and good biocompatibility are their main advantages. When microelectrodes are exposed to an actual biological system, various physical stimuli are applied; the porous structure of platinum black is vulnerable to such external stimuli and is easily destroyed. The impedance of a damaged microelectrode increases, degrading recording performance. In this study, we developed mechanically stable platinum black microelectrodes by adding polydopamine. The polydopamine layer was added between the platinum black structures by an electrodeposition method. The initial impedance levels of platinum-black-only microelectrodes and polydopamine-added microelectrodes were similar, but after ultrasonication the impedance of the platinum-black-only microelectrodes increased dramatically, whereas the polydopamine-added microelectrodes showed little increase and nearly retained their initial values. Polydopamine-added platinum black microelectrodes are therefore expected to extend the usefulness of such electrodes as neural sensors.
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
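A minimal sketch of the iterative-reuse idea (my paraphrase of the abstract, not the published IKNNimpute code): start from column-mean fills, then repeatedly re-impute each missing entry from its K nearest rows, so that later rounds reuse earlier estimates.

```python
import numpy as np

def iknn_impute(X, k=10, n_iter=5):
    """Iterative KNN imputation sketch. X: 2-D array with np.nan for missing."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # initial fill
    for _ in range(n_iter):
        for i in np.unique(np.where(miss)[0]):        # rows with missing entries
            d = np.sqrt(((X - X[i]) ** 2).sum(axis=1))
            d[i] = np.inf                              # exclude the row itself
            nbrs = np.argsort(d)[:k]
            w = 1.0 / (d[nbrs] + 1e-12)                # distance-weighted mean
            for j in np.where(miss[i])[0]:
                X[i, j] = np.average(X[nbrs, j], weights=w)
    return X

rng = np.random.default_rng(6)
M = rng.normal(size=(100, 20)); M[rng.random(M.shape) < 0.05] = np.nan
print(np.isnan(iknn_impute(M)).any())   # False: all values imputed
```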
Huang, Lu; Wen, Xin; Wang, Yan; Zou, Yongde; Ma, Baohua; Liao, Xindi; Liang, Juanboo; Wu, Yinbao
2014-10-01
Effects of antibiotic residues on methane production in anaerobic digestion are commonly studied using one of two antibiotic addition methods: (1) adding manure from animals that consume a diet containing antibiotics, or (2) adding antibiotic-free animal manure spiked with antibiotics. This study used chlortetracycline (CTC) as a model antibiotic to examine the effect of the antibiotic addition method on methane production in anaerobic digestion at two different swine wastewater concentrations (0.55 and 0.22 mg CTC/g dry manure). The results showed that the CTC degradation rate when CTC was added directly to manure at 0.55 mg CTC/g (HSPIKE treatment) was lower than in the control and the remaining treatment groups. Methane production from the HSPIKE treatment was reduced (p<0.05) by 12% over the whole experimental period and by 15% during the first 7 days. The treatments had no significant effect on the pH and chemical oxygen demand of the digesters, and the total nitrogen of the 0.55 mg CTC/kg manure collected from medicated swine was significantly higher than the other values. The different methane production under different antibiotic addition methods might therefore be explained by microbial activity and by the concentrations of antibiotic intermediate products and metabolites. Because the primary entry route of veterinary antibiotics into an anaerobic digester is contaminated animal manure, the most appropriate method for studying antibiotic residue effects on methane production may be using manure from animals that are given a particular antibiotic, rather than adding the antibiotic directly to the anaerobic digester. Copyright © 2014. Published by Elsevier B.V.
Age and diagnostic performance of Alzheimer disease CSF biomarkers
Rosén, E.; Hansson, O.; Andreasen, N.; Parnetti, L.; Jonsson, M.; Herukka, S.-K.; van der Flier, W.M.; Blankenstein, M.A.; Ewers, M.; Rich, K.; Kaiser, E.; Verbeek, M.M.; Olde Rikkert, M.; Tsolaki, M.; Mulugeta, E.; Aarsland, D.; Visser, P.J.; Schröder, J.; Marcusson, J.; de Leon, M.; Hampel, H.; Scheltens, P.; Wallin, A.; Eriksdotter-Jönhagen, M.; Minthon, L.; Winblad, B.; Blennow, K.; Zetterberg, H.
2012-01-01
Objectives: Core CSF changes in Alzheimer disease (AD) are decreased amyloid β1–42, increased total tau, and increased phospho-tau, probably indicating amyloid plaque accumulation, axonal degeneration, and tangle pathology, respectively. These biomarkers identify AD already at the predementia stage, but their diagnostic performance might be affected by age-dependent increase of AD-type brain pathology in cognitively unaffected elderly. Methods: We investigated effects of age on the diagnostic performance of CSF biomarkers in a uniquely large multicenter study population, including a cross-sectional cohort of 529 patients with AD dementia (median age 71, range 43–89 years) and 304 controls (67, 44–91 years), and a longitudinal cohort of 750 subjects without dementia with mild cognitive impairment (69, 43–89 years) followed for at least 2 years, or until dementia diagnosis. Results: The specificities for subjects without AD and the areas under the receiver operating characteristics curves decreased with age. However, the positive predictive value for a combination of biomarkers remained stable, while the negative predictive value decreased only slightly in old subjects, as an effect of the high AD prevalence in older ages. Conclusion: Although the diagnostic accuracies for AD decreased with age, the predictive values for a combination of biomarkers remained essentially stable. The findings highlight biomarker variability across ages, but support the use of CSF biomarkers for AD even in older populations. PMID:22302554
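The stability of the predictive values despite falling specificity follows from Bayes' rule; a quick check with illustrative numbers (not taken from the paper):

```python
def ppv_npv(sens, spec, prev):
    """Positive/negative predictive values from sensitivity, specificity
    and disease prevalence (Bayes' rule)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
    return ppv, npv

# Hypothetical: specificity drops with age while AD prevalence rises
young = ppv_npv(sens=0.85, spec=0.90, prev=0.10)
old = ppv_npv(sens=0.85, spec=0.75, prev=0.40)
print(f"young: PPV={young[0]:.2f}, NPV={young[1]:.2f}")
print(f"old:   PPV={old[0]:.2f}, NPV={old[1]:.2f}")  # PPV holds up, NPV dips
```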
NASA Astrophysics Data System (ADS)
Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang
2018-04-01
Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have difficulty producing optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which can serve as the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present convergent and stable downward continuation methods that use the first-order vertical derivatives and their upward continuation. By applying one of our methods to both synthetic and real cases, we show that it is stable, convergent and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our method exhibits very little boundary effect and remains stable in the presence of noise. We find that the characteristics of the fading anomalies emerge properly in our downward continuation with respect to the original fields at the lower heights.
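For context, the standard wavenumber-domain continuation operator that such methods improve on: upward continuation multiplies the field's Fourier transform by e^(−|k|h) and is stable, while naive downward continuation multiplies by e^(+|k|h), which amplifies high-wavenumber noise (the divergence the authors address). A minimal sketch, not the paper's algorithm:

```python
import numpy as np

def continue_field(field, dx, h):
    """Wavenumber-domain continuation of a gridded potential field.
    h > 0: upward (stable); h < 0: downward (amplifies noise)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * h)))

rng = np.random.default_rng(7)
g = rng.normal(size=(128, 128))
up = continue_field(g, dx=100.0, h=200.0)      # smooths the field, stable
down = continue_field(g, dx=100.0, h=-200.0)   # blows up on the noise
print(up.std(), down.std())
```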
Schievano, Andrea; D'Imporzano, Giuliana; Salati, Silvia; Adani, Fabrizio
2011-09-01
The mass balance (input/output mass flows) of full-scale anaerobic digestion (AD) processes should be known for a series of purposes, e.g. to understand carbon and nutrient balances, to evaluate the contribution of AD processes to elemental cycles (especially when digestates are applied to agricultural land) and to measure biodegradation yields and process efficiency. In this paper, three alternative methods for determining the mass balance of full-scale processes were studied, and their reliability and applicability are discussed. Through a 1-year survey of three full-scale AD plants and through 38 laboratory-scale batch digesters, the congruency of the considered methods was demonstrated and a linear equation was provided that allows calculating the wet weight losses (WL) from the methane produced (MP) by the plant (WL = 41.949*MP + 20.853, R² = 0.950, p < 0.01). Additionally, this new tool was used to calculate the carbon, nitrogen, phosphorus and potassium balances of the three AD plants observed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Recent trends in the intrinsic water-use efficiency of ringless rainforest trees in Borneo.
Loader, N J; Walsh, R P D; Robertson, I; Bidin, K; Ong, R C; Reynolds, G; McCarroll, D; Gagen, M; Young, G H F
2011-11-27
Stable carbon isotope (δ(13)C) series were developed from analysis of sequential radial wood increments from AD 1850 to AD 2009 for four mature primary rainforest trees from the Danum and Imbak areas of Sabah, Malaysia. The aseasonal equatorial climate meant that conventional dendrochronology was not possible as the tree species investigated do not exhibit clear annual rings or dateable growth bands. Chronology was established using radiocarbon dating to model age-growth relationships and date the carbon isotopic series from which the intrinsic water-use efficiency (IWUE) was calculated. The two Eusideroxylon zwageri trees from Imbak yielded ages of their pith/central wood (±1 sigma) of 670 ± 40 and 759 ± 40 years old; the less dense Shorea johorensis and Shorea superba trees at Danum yielded ages of 240 ± 40 and 330 ± 40 years, respectively. All trees studied exhibit an increase in the IWUE since AD 1960. This reflects, in part, a response of the forest to increasing atmospheric carbon dioxide concentration. Unlike studies of some northern European trees, no clear plateau in this response was observed. A change in the IWUE implies an associated modification of the local carbon and/or hydrological cycles. To resolve these uncertainties, a shift in emphasis away from high-resolution studies towards long, well-replicated time series is proposed to develop the environmental data essential for model evaluation. Identification of old (greater than 700 years) ringless trees demonstrates their potential in assessing the impacts of climatic and atmospheric change. It also shows the scientific and applied value of a conservation policy that ensures the survival of primary forest containing particularly old trees (as in Imbak Canyon and Danum).
Menéndez González, Manuel; Suárez-Sanmartin, Esther; García, Ciara; Martínez-Camblor, Pablo; Westman, Eric; Simmons, Andy
2016-03-26
Though a disproportionate rate of atrophy of the medial temporal lobe (MTL) represents a reliable marker of Alzheimer's disease (AD) pathology, measurement of MTL atrophy is not currently widely used in daily clinical practice. This is mainly because the methods available to date are sophisticated and difficult to implement in clinical practice (volumetric methods), are poorly explored (linear and planimetric methods), or lack objectivity (visual rating). Here, we aimed to compare the results of a manual planimetric measure (the yearly rate of absolute atrophy of the medial temporal lobe, 2D-yrA-MTL) with the results of an automated volumetric measure (the yearly rate of atrophy of the hippocampus, 3D-yrA-H). A series of 1.5T MRI studies on 290 subjects in the age range of 65-85 years, including patients with AD (n = 100), mild cognitive impairment (MCI) (n = 100), and matched controls (n = 90) from the AddNeuroMed study, were examined by two independent subgroups of researchers: one in charge of volumetric measures and the other in charge of planimetric measures. The means of both methods differed significantly between AD and the other two diagnostic groups. In the differential diagnosis of AD against controls, 3D-yrA-H performed significantly better than 2D-yrA-MTL, while differences were not statistically significant in the differential diagnosis of AD against MCI. Automated volumetry of the hippocampus is superior to manual planimetry of the MTL in the diagnosis of AD. Nevertheless, 2D-yrA-MTL is a simpler method that could be easily implemented in clinical practice when volumetry is not available.
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques, including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN) imputation. All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE algorithm. The CMVE software is available upon request from the authors.
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that the proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
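A minimal sketch of the autoregressive idea (an AR(p) least-squares fit and one-step prediction; the published ARLSimpute additionally pools similar genes, which is omitted here):

```python
import numpy as np

def ar_fit_predict(series, p=3):
    """Fit AR(p) by least squares on a complete series and predict
    the value one step past its end."""
    x = np.asarray(series, float)
    # design rows: [x[t-1], ..., x[t-p]] -> target x[t]
    X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef @ x[-1:-p - 1:-1]        # most recent p values, newest first

t = np.arange(60)
clean = np.sin(t / 4.0)                  # a sinusoid satisfies an AR(2) recursion
print(f"predicted: {ar_fit_predict(clean[:-1]):.3f}, actual: {clean[-1]:.3f}")
```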
Complexity and Synchronicity of Resting State BOLD FMRI in Normal Aging and Cognitive Decline
Liu, Collin Y; Krishnan, Anitha P; Yan, Lirong; Smith, Robert X; Kilroy, Emily; Alger, Jeffery R; Ringman, John M; Wang, Danny JJ
2012-01-01
Purpose To explore the use of approximate entropy (ApEn) as an index of the complexity and the synchronicity of resting state BOLD fMRI in normal aging and in the cognitive decline associated with familial Alzheimer's disease (fAD). Materials and Methods Resting state BOLD fMRI data were acquired at 3T from 2 independent cohorts of subjects consisting of healthy young (age 23±2 years, n=8) and aged volunteers (age 66±3 years, n=8), as well as 22 fAD-associated subjects (14 mutation carriers, age 41.2±15.8 years; and 8 non-mutation-carrying family members, age 28.8±5.9 years). Mean ApEn values were compared between the two age groups, and correlated with cognitive performance in the fAD group. Cross-ApEn (C-ApEn) was further calculated to assess the asynchrony between the precuneus and the rest of the brain. Results The complexity of brain activity measured by mean ApEn in gray and white matter decreased with normal aging. In the fAD group, cognitive impairment was associated with decreased mean ApEn in gray matter as well as decreased regional ApEn in the right precuneus, right lateral parietal regions, left precentral gyrus, and right paracentral gyrus. A pattern of asynchrony between BOLD fMRI series emerged from the C-ApEn analysis, with significant regional anti-correlation with the cross-correlation coefficients of functional connectivity analysis. Conclusion ApEn and C-ApEn may be useful for assessing the complexity and synchronicity of brain activity in normal aging and in cognitive decline associated with neurodegenerative diseases. PMID:23225622
Numerical simulation of VAWT on the effects of rotation cylinder
NASA Astrophysics Data System (ADS)
Xing, Shuda; Cao, Yang; Ren, Fuji
2017-06-01
Based on the finite element analysis method, we study a vertical axis wind turbine (VAWT) with rotating cylinders added in front of its airfoils, focusing in particular on the variation of the lift-to-drag ratio of NACA6-series airfoils and on choosing the most suitable blade with a rotary cylinder added at the leading edge. The analysis indicates that the front rotating cylinders on the VAWT are beneficial, raising lift and reducing drag. The most suitable airfoil has a design lift coefficient of 0.8 and a relative blade thickness of 20%, and the optimum tip speed ratio is about 7.
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
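A minimal sketch of computing such detection performance measures from simulated weekly alarms (hypothetical arrays; the R package surveillance referenced by the authors is not used here):

```python
import numpy as np

def detection_metrics(alarm, outbreak):
    """alarm, outbreak: boolean arrays over weeks.
    Returns FPR, sensitivity, PPV and F1-measure."""
    alarm, outbreak = np.asarray(alarm, bool), np.asarray(outbreak, bool)
    tp = (alarm & outbreak).sum()
    fp = (alarm & ~outbreak).sum()
    fn = (~alarm & outbreak).sum()
    tn = (~alarm & ~outbreak).sum()
    fpr = fp / (fp + tn)
    sens = tp / (tp + fn)
    ppv = tp / (tp + fp)
    f1 = 2 * ppv * sens / (ppv + sens)
    return fpr, sens, ppv, f1

rng = np.random.default_rng(8)
outbreak = rng.random(520) < 0.05                  # 10 years of weeks
alarm = outbreak & (rng.random(520) < 0.7)         # imperfect detector
alarm |= (~outbreak) & (rng.random(520) < 0.02)    # occasional false alarms
print(detection_metrics(alarm, outbreak))
```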
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-23
... Airworthiness Directives; Rolls-Royce plc (RR) RB211-22B and RB211-524 Series Turbofan Engines AGENCY: Federal... Rolls-Royce plc: Amendment 39-16402. Docket No. FAA-2009- 1157; Directorate Identifier 2009-NE-26-AD...) None. Applicability (c) This AD applies to Rolls-Royce plc RB211-22B series and RB211-524B4-D-02, RB211...
78 FR 78694 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... all Airbus Model A330-200 and -300 series airplanes, and Model A340-200 and -300 series airplanes. AD..., and corrective actions if needed. This new AD expands the applicability, reduces the compliance time... the comment received. Request To Change Compliance Time US Airways requested that we change the...
78 FR 61171 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... (RR) RB211-535E4-B-37 series turbofan engines. This AD requires removal of affected parts using a...-B-37 series turbofan engines. (d) Unsafe Condition This AD was prompted by recalculating the lives...
77 FR 65799 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-31
... Airworthiness Directives; Airbus Airplanes AGENCY: Federal Aviation Administration (FAA), Department of... Airbus Model A330-200 freighter series airplanes, Model A330-200 and - 300 series airplanes, and Model... [Amended] 0 2. The FAA amends Sec. 39.13 by adding the following new AD: 2012-21-20 Airbus: Amendment 39...
76 FR 77934 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
...-1321; Directorate Identifier 2011-NM-045-AD] RIN 2120-AA64 Airworthiness Directives; Airbus Airplanes...: We propose to adopt a new airworthiness directive (AD) for certain Airbus Model A319 series airplanes, Model A320-211, -212, -214, -231, -232, and -233 airplanes, and Model A321 series airplanes that would...
Mattle, Eveline; Weiger, Markus; Schmidig, Daniel; Boesiger, Peter; Fey, Michael
2009-06-01
Hair care for humans is a major world industry with specialised tools, chemicals and techniques. Studying the effect of hair care products has become a considerable field of research, and besides mechanical and optical testing, numerous advanced analytical techniques have been employed in this area. In the present work, another means of studying the properties of hair is added by demonstrating the feasibility of magnetic resonance imaging (MRI) of the human hair. Established dedicated nuclear magnetic resonance microscopy hardware (solenoidal radiofrequency microcoils and planar field gradients) and methods (constant time imaging) were adapted to the specific needs of hair MRI. Images were produced at a spatial resolution high enough to resolve the inner structure of the hair, showing contrast between cortex and medulla. Quantitative evaluation of a scan series with different echo times provided a T2* value of 2.6 ms for the cortex and a water content of about 90% for hairs saturated with water. The demonstration of the feasibility of hair MRI potentially adds a new tool to the large variety of analytical methods used nowadays in the development of hair care products.
Application of Taylor's series to trajectory propagation
NASA Technical Reports Server (NTRS)
Stanford, R. H.; Berryman, K. W.; Breckheimer, P. J.
1986-01-01
This paper describes the propagation of trajectories by the application of the preprocessor ATOMCC, which uses Taylor series to solve initial value problems in ordinary differential equations. A comparison of the results obtained with those from other methods is presented. The current studies indicate that the ATOMCC preprocessor is an easy, yet fast and accurate, method for generating trajectories.
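ATOMCC itself generates the Taylor recurrences automatically from the ODE source; as a hand-worked illustration of the underlying method (not ATOMCC output), here is a Taylor-series step for the test problem y'' = −y, whose series coefficients obey c[k+2] = −c[k]/((k+1)(k+2)):

```python
import math

def taylor_step(y, v, h, order=20):
    """One Taylor-series step for y'' = -y, given y(t) and v = y'(t).
    Coefficients follow the recurrence c[k+2] = -c[k] / ((k+1)(k+2))."""
    c = [y, v]
    for k in range(order - 1):
        c.append(-c[k] / ((k + 1) * (k + 2)))
    y_new = sum(ck * h**k for k, ck in enumerate(c))
    v_new = sum(k * ck * h**(k - 1) for k, ck in enumerate(c) if k >= 1)
    return y_new, v_new

y, v, t, h = 1.0, 0.0, 0.0, 0.5
for _ in range(20):                      # propagate to t = 10
    y, v = taylor_step(y, v, h)
    t += h
print(f"y({t:.1f}) = {y:.12f}, exact = {math.cos(t):.12f}")
```

The attraction of the approach, as the abstract notes, is that high series orders give very small local error per step while allowing comparatively large step sizes.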
Methodology for adding glycemic index and glycemic load values to 24-hour dietary recall database.
Olendzki, Barbara C; Ma, Yunsheng; Culver, Annie L; Ockene, Ira S; Griffith, Jennifer A; Hafner, Andrea R; Hebert, James R
2006-01-01
We describe a method of adding the glycemic index (GI) and glycemic load (GL) values to the nutrient database of the 24-hour dietary recall interview (24HR), a widely used dietary assessment. We also calculated daily GI and GL values from the 24HR. Subjects were 641 healthy adults from central Massachusetts who completed 9067 24HRs. The 24HR-derived food data were matched to the International Table of Glycemic Index and Glycemic Load Values. The GI values for specific foods not in the table were estimated against similar foods according to physical and chemical factors that determine GI. Mixed foods were disaggregated into individual ingredients. Of 1261 carbohydrate-containing foods in the database, GI values of 602 foods were obtained from a direct match (47.7%), accounting for 22.36% of dietary carbohydrate. GI values from 656 foods (52.1%) were estimated, contributing to 77.64% of dietary carbohydrate. The GI values from three unknown foods (0.2%) could not be assigned. The average daily GI was 84 (SD 5.1, white bread as referent) and the average GL was 196 (SD 63). Using this methodology for adding GI and GL values to nutrient databases, it is possible to assess associations between GI and/or GL and body weight and chronic disease outcomes (diabetes, cancer, heart disease). This method can be used in clinical and survey research settings where 24HRs are a practical means for assessing diet. The implications for using this methodology compel a broader evaluation of diet with disease outcomes.
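The arithmetic this methodology enables, as a short sketch (standard GI/GL formulas on the glucose scale, whereas the paper reports GI with white bread as referent; the food entries are hypothetical): each food contributes GL_i = GI_i × carbohydrate_i / 100, and the daily GI is the carbohydrate-weighted mean of the food GIs.

```python
# (food, GI, available carbohydrate in g) -- hypothetical entries from a 24HR
foods = [("white bread", 75, 28.0), ("apple", 36, 15.0), ("lentils", 32, 20.0)]

daily_gl = sum(gi * carb / 100 for _, gi, carb in foods)
daily_gi = sum(gi * carb for _, gi, carb in foods) / sum(c for _, _, c in foods)
print(f"daily GL = {daily_gl:.1f}, daily GI = {daily_gi:.1f}")
```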
NASA Astrophysics Data System (ADS)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. The numerical scheme is verified on a number of difficult benchmark problems.
Further analyses of laminar flow heat transfer in circular sector ducts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Q.M.; Trupp, A.C.
1989-11-01
Heat transfer in circular sector ducts is often encountered in multipassage tubes. Certain flow characteristics of circular sector ducts for apex angles up to π have been determined, as documented by Shah and London (1978). Recently, Lei and Trupp (1989) have more completely analyzed the flow characteristics of fully developed laminar flow for apex angles up to 2π, including the location of the maximum velocity. Heat transfer results for fully developed laminar flow in circular sector ducts are also available for certain boundary conditions. Trupp and Lau (1984) numerically determined the average Nusselt number (Nu_T) for isothermal walls. Eckert et al. (1958) initially derived an analytical expression for the temperature profile for the case of H1. Sparrow and Haji-[...] angles up to π. However, the above work required numerical integration (or equivalent) to obtain a value for Nu_H1. Regarding the H1,ad boundary condition, Date (1974) numerically obtained a limiting value of Nu_H1,ad for the semicircular duct from the prediction for circular tubes containing a twisted tape (straight and nonconducting tape). Hong and Bergles (1976) also reported an asymptotic value of Nu_H1,ad for the semicircular duct from their entrance region solution. Otherwise it appears that there are no published analytical results of Nu_H1,ad for circular sector ducts. The purpose of this technical note is to communicate these results. In addition, a novel series expression for Nu_H1 is presented together with results for apex angles up to 2π.
Silva, Maria Inês Barreto; Lemos, Carla Cavalheiro da Silva; Torres, Márcia Regina Simas Gonçalves; Bregman, Rachel
2014-03-01
Chronic kidney disease (CKD) is associated with metabolic disorders, including insulin resistance (IR), mainly when associated with obesity and characterized by high abdominal adiposity (AbAd). Anthropometric measures are recommended for assessing AbAd in clinical settings, but their accuracy needs to be evaluated. The aim of this study was to evaluate the precision of different anthropometric measures of AbAd in patients with CKD. We also sought to determine the association of AbAd with high homeostasis model assessment index of insulin resistance (HOMA-IR) values and the cutoff point of the AbAd index for predicting high HOMA-IR values. A subset of clinically stable nondialyzed patients with CKD followed at a multidisciplinary outpatient clinic was enrolled in this cross-sectional study. The accuracy of the following anthropometric indices for assessing AbAd was evaluated using trunk fat, by dual x-ray absorptiometry (DXA), as a reference method: waist circumference, waist-to-hip ratio, conicity index and waist-to-height ratio (WheiR). HOMA-IR was estimated to stratify patients into high and low HOMA-IR groups. The total area under the receiver-operating characteristic curve (AUC-ROC), with sensitivity/specificity and 95% confidence intervals (CI), was calculated for AbAd as a predictor of high HOMA-IR values. We studied 134 patients (55% males; 54% overweight/obese, body mass index ≥ 25 kg/m²; age 64.9 ± 12.5 y; estimated glomerular filtration rate 29.0 ± 12.7 mL/min). Among the studied AbAd indices, WheiR was the only one to show a correlation with DXA trunk fat after adjusting for confounders (P < 0.0001). Thus, WheiR was used to evaluate the association between AbAd and HOMA-IR values (r = 0.47; P < 0.0001). The cutoff point for WheiR as a predictor of high HOMA-IR values was 0.55 (AUC-ROC = 0.69 ± 0.05; 95% CI, 0.60-0.77; sensitivity/specificity, 68.9/61.9). WheiR is recommended as an effective and precise anthropometric index for assessing AbAd and predicting high HOMA-IR values in nondialyzed patients with CKD. Copyright © 2014 Elsevier Inc. All rights reserved.
A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series
NASA Astrophysics Data System (ADS)
Ghanbarzadeh, Mitra; Aminghafari, Mina
2016-02-01
Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine, and its results are very sensitive to their values. Moreover, since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. We therefore introduce a novel method to handle these problems, based on the prediction of non-decimated wavelet (NDW) signals by SSA followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and the accounting for the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.
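For reference, the basic SSA decomposition the hybrid builds on (a minimal sketch of embedding, SVD and diagonal averaging; the NDW and wavelet-regression stages of the paper's method are not shown). The two sensitive parameters the authors mention are the window length L and the number of retained components r:

```python
import numpy as np

def ssa_reconstruct(x, L=30, r=3):
    """Basic SSA: embed the series in an L-lagged trajectory matrix, truncate
    its SVD to r components, and recover a series by diagonal averaging."""
    x = np.asarray(x, float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    rec = np.zeros(N)                                     # diagonal averaging
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

t = np.arange(300)
noisy = np.sin(t / 12.0) + 0.4 * np.random.default_rng(9).normal(size=300)
smooth = ssa_reconstruct(noisy, L=40, r=2)
print(np.abs(smooth - np.sin(t / 12.0)).mean())   # closer to the clean signal
```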
RankExplorer: Visualization of Ranking Changes in Large Time Series Data.
Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin
2012-12-01
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.
Masalski, Marcin; Kręcicki, Tomasz
2013-04-12
Potential methods of application of self-administered Web-based pure-tone audiometry, conducted at home on a PC with a sound card and ordinary headphones, depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB, with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and, additionally at the frequency of 250 Hz, the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error can broaden the scope of Web-based pure-tone audiometry applications.
Darlenski, Razvigor; Kazandjieva, Jana; Tsankov, Nikolai; Fluhr, Joachim W
2013-11-01
The aim of the study was to disclose interactions between the epidermal barrier, skin irritation and sensitization in healthy and diseased skin. Transepidermal water loss (TEWL) and stratum corneum hydration (SCH) were assessed in adult patients with atopic dermatitis (AD), rosacea and healthy controls. A 4-h patch test with seven concentrations of sodium lauryl sulphate was performed to determine the irritant threshold (IT). The contact sensitization pattern was revealed by patch testing with the European baseline series. Subjects with a lower IT had higher TEWL values and lower SCH. Subjects with positive allergic reactions had a significantly lower IT. In AD, epidermal barrier deterioration was detected on both the volar forearm and the nasolabial fold, while in rosacea, impaired skin physiology parameters were observed on the facial skin only, suggesting that barrier impairment is restricted to the face in rosacea, in contrast with AD, where the abnormal skin physiology is generalized. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Liu, Genyan; Ozoe, Fumiyo; Furuta, Kenjiro; Ozoe, Yoshihisa
2015-07-22
The insect GABA receptor (GABAR), which is composed of five RDL subunits, represents an important target for insecticides. A series of 4,5-disubstituted 3-isoxazolols, including muscimol analogues, were synthesized and examined for their activities against four splice variants (ac, ad, bc, and bd) of housefly GABARs expressed in Xenopus oocytes. Muscimol was a more potent agonist than GABA in all four splice variants, whereas synthesized analogues did not exhibit agonism but rather antagonism in housefly GABARs. The introduction of bicyclic aromatic groups at the 4-position of muscimol and the simultaneous replacement of the aminomethyl group with a carbamoyl group at the 5-position to afford six 4-aryl-5-carbamoyl-3-isoxazolols resulted in compounds that exhibited significantly enhanced antagonism with IC50 values in the low micromolar range in the ac variant. The inhibition of GABA-induced currents by 100 μM analogues was approximately 1.5-4-fold greater in the ac and bc variants than in the ad and bd variants. 4-(3-Biphenylyl)-5-carbamoyl-3-isoxazolol displayed competitive antagonism, with IC50 values of 30, 34, 107, and 96 μM in the ac, bc, ad, and bd variants, respectively, and exhibited moderate insecticidal activity against houseflies, with an LD50 value of 5.6 nmol/fly. These findings suggest that these 3-isoxazolol analogues are novel lead compounds for the design and development of insecticides that target the orthosteric site of housefly GABARs.
Snoeckx, Ramses; Ozkan, Alp; Reniers, Francois; Bogaerts, Annemie
2017-01-20
Recycling of carbon dioxide by its conversion into value-added products has gained significant interest owing to the role it can play in an anthropogenic carbon cycle. The combined conversion with H2O could even mimic the natural photosynthesis process. An interesting gas conversion technique currently being considered in the field of CO2 conversion is plasma technology. To investigate whether it is also promising for this combined conversion, we performed a series of experiments and developed a chemical kinetics plasma chemistry model for a deeper understanding of the process. The main products formed were the syngas components CO and H2, as well as O2 and H2O2, whereas methanol formation was only observed in the parts-per-billion to parts-per-million range. The syngas ratio, on the other hand, could easily be controlled by varying the water content and/or energy input. On the basis of the model, which was validated with experimental results, a chemical kinetics analysis was performed, which allowed the construction and investigation of the different pathways leading to the observed experimental results and which helped to clarify these results. This approach allowed us to evaluate this technology on the basis of its underlying chemistry and to propose solutions on how to further improve the formation of value-added products by using plasma technology. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Statistical methods for change-point detection in surface temperature records
NASA Astrophysics Data System (ADS)
Pintar, A. L.; Possolo, A.; Zhang, N. F.
2013-09-01
We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
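As an illustration of the simplest of these ideas, the sketch below applies a CUSUM-type statistic to a synthetic station record with a step bias and gauges significance by permutation. It deliberately ignores autocorrelation and seasonality, which the methods above are designed to handle, so it is a toy version only.

```python
# Sketch of a CUSUM-type change-point statistic on a mean-shifted series;
# the station data, shift size, and permutation count are illustrative,
# and autocorrelation is ignored here for brevity.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "temperature anomaly" record with a step bias after month 120.
x = np.concatenate([rng.normal(0.0, 0.5, 120), rng.normal(0.6, 0.5, 120)])

z = x - x.mean()
cusum = np.cumsum(z)                      # drifts when the mean shifts
k_hat = int(np.argmax(np.abs(cusum)))     # most likely change point

# Crude significance check: compare the CUSUM range to its permutation null.
obs_range = cusum.max() - cusum.min()
perm = [np.ptp(np.cumsum(rng.permutation(z))) for _ in range(999)]
p_value = (1 + sum(r >= obs_range for r in perm)) / 1000

print(f"estimated change point at index {k_hat}, p ≈ {p_value:.3f}")
```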
Stratum corneum hydration and skin surface pH in patients with atopic dermatitis.
Knor, Tanja; Meholjić-Fetahović, Ajša; Mehmedagić, Aida
2011-01-01
Atopic dermatitis (AD) is a chronically relapsing skin disease with a genetic predisposition, which occurs most frequently in preschool children. It is considered that the dryness and pruritus that are always present in AD correlate with degradation of the skin barrier function. Measurement of the hydration and pH value of the stratum corneum is one of the noninvasive methods for evaluation of skin barrier function. The aim of the study was to assess skin barrier function by measuring stratum corneum hydration and skin surface pH of lesional skin, perilesional skin and uninvolved skin in AD patients, and of the skin of a healthy control group. Forty-two subjects were included in the study: 21 young and adult AD patients and 21 age-matched healthy controls. Capacitance, which is correlated with hydration of the stratum corneum, and skin surface pH were measured on the forearm in the above areas by SM810/CM820/pH900 combined units (Courage & Khazaka, Germany). The mean value of water capacitance measured in AD patients was 44.1 ± 11.6 AU (arbitrary units) on the lesions, 60.2 ± 12.4 AU on perilesional skin and 67.2 ± 8.8 AU on uninvolved skin. In healthy controls, the mean value was 74.1 ± 9.2 AU. The mean pH value measured in AD patients was 6.13 ± 0.52 on the lesions, 5.80 ± 0.41 on perilesional skin, and 5.54 ± 0.49 on uninvolved skin. In the control group, the mean pH of the skin surface was 5.24 ± 0.40. The values of both parameters measured on lesional skin were significantly different (capacitance decreased and pH increased) from the values recorded on perilesional skin and uninvolved skin. The same held for the relation between perilesional and uninvolved skin. According to the study results, the uninvolved skin of AD patients had significantly worse values of the measured parameters compared with the control group. The results of this study suggest that skin barrier function is degraded in AD patients, and that this is most pronounced in lesional skin.
NASA Astrophysics Data System (ADS)
Kopeć, Jacek M.; Kwiatkowski, Kamil; de Haan, Siebren; Malinowski, Szymon P.
2016-05-01
Navigational information broadcast by commercial aircraft in the form of Mode-S EHS (Mode-S Enhanced Surveillance) and ADS-B (Automatic Dependent Surveillance-Broadcast) messages can be considered a new source of upper tropospheric and lower stratospheric turbulence estimates. A set of three processing methods is proposed and analysed using a quality record of turbulence encounters made by a research aircraft. The proposed methods are based on processing the vertical acceleration or the background wind into the eddy dissipation rate. Turbulence intensity can be estimated using the standard content of the Mode-S EHS/ADS-B. The results are based on a Mode-S EHS/ADS-B data set generated synthetically from the transmissions of the research aircraft. This data set was validated using the overlapping record of the Mode-S EHS/ADS-B received from the same research aircraft. The turbulence intensity, i.e., the eddy dissipation rate, obtained from the proposed methods based on the Mode-S EHS/ADS-B is compared with the value obtained using an on-board accelerometer. The results of the comparison indicate the potential of the methods. The advantages and limitations of the presented approaches are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Qing, E-mail: hqng@163.com; Mao, Xinhua, E-mail: 30400414@qq.com; Chu, Dongliang, E-mail: 569256386@qq.com
This study proposes an optimized frequency adjustment method that uses a micro-cantilever beam-based piezoelectric vibration generator based on a combination of added mass and capacitance. The most important concept of the proposed method is that the frequency adjustment process is divided into two steps: the first is a rough adjustment step that changes the size of the mass added at the end of the cantilever to adjust the frequency in a large-scale and discontinuous manner; the second step is a continuous but short-range frequency adjustment via the adjustable added capacitance. Experimental results show that when the initial natural frequency of a micro piezoelectric vibration generator is 69.8 Hz, this natural frequency can be adjusted to any value in the range from 54.2 Hz to 42.1 Hz using the combination of the added mass and the capacitance. This method simply and effectively matches a piezoelectric vibration generator's natural frequency to the vibration source frequency.
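The rough-adjustment step can be pictured with the single-degree-of-freedom relation f = (1/2π)·sqrt(k/(m_eff + m_added)). The sketch below evaluates it for hypothetical stiffness and mass values chosen only to land near the reported 69.8 Hz starting point; the actual device parameters are not given in the abstract.

```python
# Back-of-envelope sketch of the rough-adjustment step for a cantilever
# modeled as a single degree of freedom: the natural frequency falls as
# tip mass is added. Stiffness and mass values are assumptions, not the
# device's measured parameters.
import numpy as np

k = 50.0          # effective stiffness, N/m (assumed)
m_eff = 2.6e-4    # effective beam + tip mass, kg (assumed; gives ~69.8 Hz)

def natural_frequency(m_added):
    return np.sqrt(k / (m_eff + m_added)) / (2 * np.pi)

print(f"no added mass: {natural_frequency(0.0):.1f} Hz")
for m in (1e-4, 2e-4, 4e-4):
    print(f"added mass {m * 1e3:.1f} g -> {natural_frequency(m):.1f} Hz")
```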
The Impact of Alzheimer's Disease on the Chinese Economy
Keogh-Brown, Marcus R.; Jensen, Henning Tarp; Arrighi, H. Michael; Smith, Richard D.
2015-01-01
Background: Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. Methods: AD-related prevalence, morbidity and mortality for 2011–2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Findings: Simulated Chinese AD prevalence quadrupled during 2011–50, from 6 million to 28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011 prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI estimates of 56%. Interpretation: Health and macroeconomic models predict an unfolding 2011–2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted, and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies. PMID:26981556
78 FR 20509 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-05
... Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of proposed rulemaking...) RB211-535E4-B-37 series turbofan engines. This proposed AD was prompted by recalculating the life of.... (c) Applicability This AD applies to Rolls-Royce plc (RR) RB211-535E4-B-37 series turbofan engines...
77 FR 76977 - Airworthiness Directives; General Electric Company Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... Company Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Supplemental notice... proposed airworthiness directive (AD) for certain General Electric Company (GE) CF6-80C2 series turbofan... part 39 to include an AD that would apply to certain GE CF6-80C2 series turbofan engines. That NPRM...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... Airworthiness Directives; Various Aircraft Equipped With Rotax Aircraft Engines 912 A Series Engines AGENCY...: This Airworthiness Directive (AD) results from reports of cracks in the engine crankcase. Austro... crankcase assembly has permitted to reduce applicability of the new AD, when based on engines' serial...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-29
... Airworthiness Directives; British Aerospace Regional Aircraft Model Jetstream Series 3101 and Jetstream Model... INFORMATION CONTACT: Taylor Martin, Aerospace Engineer, FAA, Small Airplane Directorate, 901 Locust, Room 301... [Amended] 0 2. The FAA amends Sec. 39.13 by adding the following new AD: 2010-09-02 British Aerospace...
77 FR 40485 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
... Airworthiness Directives; Airbus Airplanes AGENCY: Federal Aviation Administration (FAA), Department of... (AD) for all Airbus Model A300 series airplanes; all Model A300 B4-600, B4-600R, and F4-600R series... new AD: 2012-13-06 Airbus: Amendment 39-17108. Docket No. FAA-2012-0040; Directorate Identifier 2011...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... section of the published AD, we incorrectly included Cessna 188 series airplanes. In the Unsafe Condition... sections of the AD incorrectly included Cessna 188 series airplanes. The Unsafe Condition section is... the second column, on line 10, under the heading DEPARTMENT OF TRANSPORTATION, remove 188 from...
78 FR 78701 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... the time given in AD 2011-12-09. (i) Ground Fault Interrupt (GFI) Relay Position Change For airplanes... Company Model 737-300, -400, and -500 series airplanes. This AD was prompted by fuel system reviews... Model 737-300, -400, and - 500 series airplanes; certificated in any category; identified as Groups 5, 6...
Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li
2015-01-01
Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectrums, following which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis is constructed, termed by its elements HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results demonstrate three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500–800 and an m range of 50–300 for the REB defect dataset with Gaussian white noise added at a signal-to-noise ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good performance under Gaussian white noise. PMID:26540059
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-04-01
In this paper, a number of classical and intelligent methods, including interquartile, autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and support vector machine (SVM) approaches, have been proposed to quantify potential thermal anomalies around the time of the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4). The duration of the data set, which consists of Aqua-MODIS land surface temperature (LST) night-time snapshot images, is 62 days. In order to quantify variations of the LST data obtained from satellite images, the air temperature (AT) data derived from the meteorological station close to the earthquake epicenter have been taken into account. For the models examined here, the results indicate the following: (i) ARIMA models, which are the most widely used in the time series community for short-term forecasting, are quickly and easily implemented, and can efficiently act through linear solutions. (ii) A multilayer perceptron (MLP) feed-forward neural network can be a suitable non-parametric method to detect the anomalous changes of a non-linear time series such as variations of LST. (iii) Since SVMs are often used due to their many advantages for classification and regression tasks, it can be shown that, if the difference between the predicted value using the SVM method and the observed value exceeds the pre-defined threshold value, then the observed value could be regarded as an anomaly. (iv) ANN and SVM methods can be powerful tools in modeling complex phenomena such as earthquake precursor time series, where we may not know what the underlying data-generating process is. There is good agreement in the results obtained from the different methods for quantifying potential anomalies in a given LST time series. This paper indicates that the detection of potential thermal anomalies derives credibility from the overall efficiency and potential of the four integrated methods.
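The residual-thresholding idea in point (iii) can be sketched generically: fit a short predictor to the series and flag observations whose prediction error exceeds a chosen multiple of the residual spread. The fragment below uses a least-squares AR(3) predictor in place of an SVM; the series, the AR order, and the 2-sigma threshold are illustrative choices.

```python
# Sketch of residual-threshold anomaly flagging: fit a short autoregressive
# predictor to an LST-like series, then flag days where the prediction error
# exceeds 2x the residual spread. All values are synthetic.
import numpy as np

rng = np.random.default_rng(3)
x = np.sin(np.arange(62) / 5.0) + rng.normal(0, 0.1, 62)
x[50] += 1.2                                 # injected "thermal anomaly"

p = 3                                        # AR order (illustrative)
X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
y = x[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None) # least-squares AR(3) fit

resid = y - X @ coef
threshold = 2 * resid.std()
anomalies = np.where(np.abs(resid) > threshold)[0] + p
print("flagged days:", anomalies)
```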
Which homogenisation method is appropriate for daily time series of relative humidity?
NASA Astrophysics Data System (ADS)
Chimani, Barbara; Nemec, Johanna; Auer, Ingeborg; Venema, Victor
2014-05-01
Data homogenisation is an essential part of reliable climate data analyses. Different tools for detecting and adjusting breaks in daily extreme temperatures (Tmin, Tmax) and daily precipitation sums were developed in the last years. Due to its influence on health, plants and construction, relative humidity is another parameter of great importance. On the basis of 6 networks of measured (and homogenized with respect to the monthly means) relative humidity data, which cover different climatic areas in Austria, a synthetic data set for testing and validating homogenisation methods was built. Each network consists of 4 to 6 station time series with a minimum length of 5 years. The so-called surrogate networks resemble the statistical properties (e.g. distribution of the parameter, auto- and cross-correlation within the network) of the measured time series, but are extended to 100-year-long time series, which are in a first step assumed to be homogeneous. For creating the best possible surrogate dataset of relative humidity, detailed statistical information on potential inhomogeneities is decisive. Information on the potential breaks was taken from parallel measurements available for some Austrian locations, mostly representing changes in instrumentation and/or station relocation. Besides changes in the distribution of the parameter, the analysis includes an estimation of changes in the number of missing data and in global and local biases, both on a seasonal and annual basis. An additional break is to be expected in the Austrian time series due to a change in observation time in 1970/1971. Since this change occurred simultaneously at all Austrian climate stations, standard homogenisation methods, which rely on a comparison with reference stations, are not able to detect or correct this shift. Therefore an independent correction method for this type of break, to be applied before homogenisation, was developed. This type of change point was not included in the surrogate network. Artificial inhomogeneities were introduced to the dataset in three steps: (1) deterministic change points: within one homogeneous sub-period (HSP) a constant perturbation is added to each relative humidity value; (2) deterministic + random changes: random changes do not change the mean of the HSP but can affect the distribution of the parameter; (3) in addition, realistic changes in break frequency and missing data. In order to test the efficiency of homogenisation methods, the procedure was separated into break detection and adjustment of inhomogeneities. The methods MASH (Szentimrey, 1999), ACMANT (Domonkos, 2011), PRODIGE (Caussinus and Mestre, 2004), SNHT (Alexandersson, 1986), Vincent (Vincent, 1998), the E-P method (Easterling and Peterson, 1995) and the Bivariate test (Maronna and Yohai, 1978) were selected for break detection. Break detection is in all methods restricted to monthly, seasonal or annual data. Since we are dealing with daily data, the number of methods for break correction is reduced, and we concentrate on the following methods: MASH, Vincent, SPLIDHOM (Mestre et al., 2011) and the percentile method (Stepanek, 2009). Information on the statistical characteristics of breaks in relative humidity series, the correction method concerning the changed observation times and first results concerning break detection will be presented.
Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.
Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian
2015-11-01
This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
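A toy version of the linear-programming step might look like the following: choose ingredient amounts per 100 g so that the implied nutrient totals match the label as closely as possible, minimizing the sum of absolute deviations via slack variables. The ingredient compositions and label values are invented for illustration; the study used real ingredient databases and a richer nutrient set.

```python
# Toy LP sketch: estimate ingredient amounts per 100 g so the implied
# nutrient totals match a nutrition facts label. Compositions and label
# values are invented, not the study's data.
import numpy as np
from scipy.optimize import linprog

# Columns: [added sugar, water, juice concentrate]; rows: contribution of
# 1 g of each ingredient to (grams of total sugar, kcal).
A = np.array([[1.00, 0.00, 0.50],    # total sugars
              [3.87, 0.00, 2.00]])   # calories
label = np.array([11.0, 45.0])       # per 100 g, hypothetical label values

n = A.shape[1]
m = len(label)
# Minimize sum of absolute deviations |A x - label| using slack variables s.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([label, -label])
A_eq = np.concatenate([np.ones(n), np.zeros(m)]).reshape(1, -1)
b_eq = [100.0]                       # ingredients must sum to 100 g

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + m))
print("estimated grams per 100 g:", np.round(res.x[:n], 1))
```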
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
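The AR-to-MA transformation mentioned above amounts to computing the impulse response of the fitted autoregression. A minimal sketch, assuming illustrative AR(2) coefficients:

```python
# Sketch of the AR -> MA conversion: an AR(p) model's moving-average
# representation is its impulse response, obtained by driving the recursion
# with a unit pulse. Coefficients are illustrative, not fitted values.
import numpy as np

phi = np.array([0.7, -0.2])        # assumed AR(2) coefficients
n_terms = 10

# psi[k]: MA weights in x_t = sum_k psi[k] * e_{t-k}
psi = np.zeros(n_terms)
psi[0] = 1.0
for k in range(1, n_terms):
    for j, coeff in enumerate(phi, start=1):
        if k - j >= 0:
            psi[k] += coeff * psi[k - j]

print(np.round(psi, 4))            # decaying pulse shape of the process
```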
The value of vital sign trends for detecting clinical deterioration on the wards.
Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P
2016-05-01
Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores only utilize a patient's current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods were compared using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC -0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
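As a sketch of the two trend representations the study found most useful, the fragment below computes the least-squares slope and the running minimum over a patient's recent window and bundles them with the current value; the vital-sign series is illustrative.

```python
# Sketch of trend features for an early-warning model: the least-squares
# slope and the minimum over a recent window, alongside the current value.
# The systolic blood pressure series is illustrative.
import numpy as np

def trend_features(values, hours):
    slope = np.polyfit(hours, values, 1)[0]   # change per hour
    return {
        "current": values[-1],
        "slope": slope,
        "minimum": float(np.min(values)),
    }

sbp_hours = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
sbp = np.array([128.0, 122.0, 117.0, 110.0, 104.0])  # falling systolic BP
print(trend_features(sbp, sbp_hours))
```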
Predicting missing values in a home care database using an adaptive uncertainty rule method.
Konias, S; Gogou, G; Bamidis, P D; Vlahavas, I; Maglaveras, N
2005-01-01
Contemporary literature illustrates an abundance of adaptive algorithms for mining association rules. However, most literature is unable to deal with the peculiarities, such as missing values and dynamic data creation, that are frequently encountered in fields like medicine. This paper proposes an uncertainty rule method that uses an adaptive threshold for filling missing values in newly added records. A new approach for mining uncertainty rules and filling missing values is proposed, which is in turn particularly suitable for dynamic databases, like the ones used in home care systems. In this study, a new data mining method named FiMV (Filling Missing Values) is illustrated based on the mined uncertainty rules. Uncertainty rules have quite a similar structure to association rules and are extracted by an algorithm proposed in previous work, namely AURG (Adaptive Uncertainty Rule Generation). The main target was to implement an appropriate method for recovering missing values in a dynamic database, where new records are continuously added, without needing to specify any kind of thresholds beforehand. The method was applied to a home care monitoring system database. Randomly, multiple missing values for each record's attributes (rate 5-20% by 5% increments) were introduced in the initial dataset. FiMV demonstrated 100% completion rates with over 90% success in each case, while usual approaches, where all records with missing values are ignored or thresholds are required, experienced significantly reduced completion and success rates. It is concluded that the proposed method is appropriate for the data-cleaning step of the Knowledge Discovery process in databases. The latter, containing much significance for the output efficiency of any data mining technique, can improve the quality of the mined information.
The change of CO2 emission on manufacturing sectors in Indonesia: An input-output analysis
NASA Astrophysics Data System (ADS)
Putranti, Titi Muswati; Imansyah, Muhammad Handry
2017-12-01
The objective of this paper is to evaluate the change of CO2 emission in manufacturing sectors in Indonesia using input-output analysis. The method, which takes a supply-side perspective, can measure the impact of an increase in the value added of different productive manufacturing sectors on total CO2 emission and can identify the productive sectors responsible for the increase in CO2 emission when the value added of the economy increases. The data used are based on the Input-Output Energy Tables for 1990, 1995 and 2010. The method applies the elasticity of CO2 emission with respect to value added. Using the elasticity approach, one can identify the manufacturing sectors with the highest elasticity, i.e., those where a change in value added produces a strong response in CO2 emission. Policy makers can therefore concentrate on the manufacturing sectors whose CO2 emission responds most strongly to increases in value added. The approach shows the contribution of the various sectors that deserve more consideration for mitigation policy. The five manufacturing sectors with the highest elasticity of CO2 emission in 1990 were Spinning & Weaving, Other foods, Tobacco, Wearing apparel, and Other fabricated textile products. In 1995, the most sensitive sectors were Petroleum refinery products, Other chemical products, Timber & Wooden Products, Iron & Steel Products and Other non-metallic mineral products; two sectors from 1990, Spinning & weaving and Other foods, remained among the ten most sensitive. Six sectors from the 1995 top ten, namely Plastic products, Other chemical products, Other fabricated metal products, Cement, Iron & steel products, and Iron & steel, still appeared in 2010. The results of this research show that the most elastic CO2 emission manufacturing sectors have tended to shift from simple, light manufacturing toward more complex, heavier manufacturing. Consequently, CO2 emission jumped significantly.
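A stylized numpy sketch of the input-output mechanics may help: the total CO2 attributable to a shock in one sector is e^T (I - A)^{-1} d, with A the technical coefficient matrix and e the vector of emission intensities. The 3-sector numbers below are invented; the paper works from the Indonesian input-output energy tables and a value-added, supply-side formulation rather than this demand-side toy.

```python
# Stylized input-output sketch: emissions response to a one-sector shock
# via the Leontief inverse. Coefficients and emission intensities are
# invented for a 3-sector toy economy.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],   # technical coefficients (assumed)
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.20]])
e = np.array([0.8, 2.5, 1.2])       # tonnes CO2 per unit output (assumed)

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse

def emission_response(sector, delta=1.0):
    d = np.zeros(3)
    d[sector] = delta               # shock to one sector
    return e @ (L @ d)

for s, name in enumerate(["light mfg", "heavy mfg", "services"]):
    print(f"{name}: {emission_response(s):.2f} t CO2 per unit increase")
```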
Skills Methods to Prevent Smoking.
ERIC Educational Resources Information Center
Schinke, Steven Paul; And Others
1986-01-01
Describes an evaluation of the added value of skills methods for preventing smoking with sixth-grade students from two schools. Skills conditions subjects learned problem-solving, self-instruction, and interpersonal communication methods. The article discusses the strengths, limits, and implications of the study for other smoking prevention…
[Value-Added--Adding Economic Value in the Food Industry].
ERIC Educational Resources Information Center
Welch, Mary A., Ed.
1989-01-01
This booklet focuses on the economic concept of "value added" to goods and services. A student activity worksheet illustrates how the steps involved in processing food are examples of the concept of value added. The booklet further links food processing to the idea of value added to the Gross National Product (GNP). Discussion questions,…
Myths & Facts about Value-Added Analysis
ERIC Educational Resources Information Center
TNTP, 2011
2011-01-01
This paper presents myths as well as facts about value-added analysis. These myths include: (1) "Value-added isn't fair to teachers who work in high-need schools, where students tend to lag far behind academically"; (2) "Value-added scores are too volatile from year-to-year to be trusted"; (3) "There's no research behind value-added"; (4) "Using…
NASA Astrophysics Data System (ADS)
Helama, S.; Lindholm, M.; Timonen, M.; Eronen, M.
2004-12-01
Tree-ring standardization methods were compared. Traditional methods along with the recently introduced approaches of regional curve standardization (RCS) and power-transformation (PT) were included. The difficulty in removing non-climatic variation (noise) while simultaneously preserving the low-frequency variability in the tree-ring series was emphasized. The potential risk of obtaining inflated index values was analysed by comparing methods to extract tree-ring indices from the standardization curve. The material for the tree-ring series, previously used in several palaeoclimate predictions, came from living and dead wood of high-latitude Scots pine in northernmost Europe. This material provided a useful example of a long composite tree-ring chronology with the typical strengths and weaknesses of such data, particularly in the context of standardization. PT stabilized the heteroscedastic variation in the original tree-ring series more efficiently than any other standardization practice expected to preserve the low-frequency variability. RCS showed great potential in preserving variability in tree-ring series at centennial time scales; however, this method requires a homogeneous sample for reliable signal estimation. It is not recommended to derive indices by subtraction without first stabilizing the variance in the case of series of forest-limit tree-ring data. Index calculation by division did not seem to produce inflated chronology values for the past one and a half centuries of the chronology (where mean sample cambial age is high). On the other hand, potential bias of high RCS chronology values was observed during the period of anomalously low mean sample cambial age. An alternative technique for chronology construction was proposed based on series age decomposition, where indices in the young vigorously behaving part of each series are extracted from the curve by division and in the mature part by subtraction. Because of their specific nature, the dendrochronological data here should not be generalized to all tree-ring records. The examples presented should be used as guidelines for detecting potential sources of bias and as illustrations of the usefulness of tree-ring records as palaeoclimate indicators.
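The division-versus-subtraction contrast is easy to demonstrate. The sketch below extracts indices from a synthetic ring-width series both ways, with a log transform standing in for the power transformation as the variance stabilizer; the series and the fitted age curve are stand-ins, not dendrochronological data.

```python
# Sketch of the two index-extraction options: divide the ring widths by the
# standardization curve, or subtract the curve after (log) variance
# stabilization. Series and age curve are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(4)
age = np.arange(1, 201)
curve = 2.0 * np.exp(-age / 80) + 0.5            # idealized age trend
rings = curve * np.exp(rng.normal(0, 0.2, 200))  # multiplicative noise

idx_div = rings / curve                  # ratio indices (can inflate when
                                         # the curve approaches zero)
idx_sub = np.log(rings) - np.log(curve)  # subtraction after log
                                         # (power-transform style) scaling

print(f"division:    mean {idx_div.mean():.2f}, sd {idx_div.std():.2f}")
print(f"subtraction: mean {idx_sub.mean():.2f}, sd {idx_sub.std():.2f}")
```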
NASA Astrophysics Data System (ADS)
Arimura, Hidetaka; Yoshiura, Takashi; Kumazawa, Seiji; Tanaka, Kazuhiro; Koga, Hiroshi; Mihara, Futoshi; Honda, Hiroshi; Sakai, Shuji; Toyofuku, Fukai; Higashida, Yoshiharu
2008-03-01
Our goal in this study was to develop a computer-aided diagnostic (CAD) method for classification of Alzheimer's disease (AD) based on atrophic image features derived from specific anatomical regions in three-dimensional (3-D) T1-weighted magnetic resonance (MR) images. The specific regions related to the cerebral atrophy of AD considered in this study were white matter and gray matter regions, and CSF regions. Cerebral cortical gray matter regions were determined by extracting the brain and white matter regions with a level-set-based method, whose speed function depended on gradient vectors in the original image and pixel values in grown regions. The CSF regions in cerebral sulci and lateral ventricles were extracted by wrapping the brain tightly with a zero level set determined from a level set function. Volumes of the specific regions and the cortical thickness were determined as atrophic image features. Average cortical thickness was calculated in 32 subregions, which were obtained by dividing each brain region. Finally, AD patients were classified by using a support vector machine, which was trained on the image features of AD and non-AD cases. We applied our CAD method to MR images of whole brains obtained from 29 clinically diagnosed AD cases and 25 non-AD cases. As a result, the area under the receiver operating characteristic (ROC) curve obtained by our computerized method was 0.901, based on a leave-one-out test, in the identification of AD cases among 54 cases including 8 AD patients at early stages. The accuracy for discrimination between the 29 AD patients and 25 non-AD subjects was 0.840, determined at the point where the sensitivity equals the specificity on the ROC curve. This result shows that our CAD method based on atrophic image features may be promising for detecting AD patients using 3-D MR images.
Gueto, Carlos; Ruiz, José L; Torres, Juan E; Méndez, Jefferson; Vivas-Reyes, Ricardo
2008-03-01
Comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on a series of benzotriazine derivatives acting as Src inhibitors. Ligand molecular superimposition on the template structure was performed by the database alignment method. A statistically significant model was established from 72 molecules and validated with a test set of six compounds. The CoMFA model yielded a q(2)=0.526, a non-cross-validated R(2) of 0.781, an F value of 88.132, a bootstrapped R(2) of 0.831, a standard error of prediction of 0.587, and a standard error of estimate of 0.351, while the CoMSIA model yielded the best predictive model, with a q(2)=0.647, a non-cross-validated R(2) of 0.895, an F value of 115.906, a bootstrapped R(2) of 0.953, a standard error of prediction of 0.519, and a standard error of estimate of 0.178. The contour maps obtained from the 3D-QSAR studies were appraised for activity trends in the molecules analyzed. Results indicate that small steric volumes in the hydrophobic region, electron-withdrawing groups next to the aryl linker region, and atoms close to the solvent-accessible region increase the Src inhibitory activity of the compounds. In fact, by adding substituents at positions 5, 6, and 8 of the benzotriazine nucleus, new compounds with higher predicted activity were generated. The data generated from the present study will further help to design novel, potent, and selective Src inhibitors as anticancer therapeutic agents.
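The q(2) statistic quoted above is a leave-one-out cross-validated measure, q(2) = 1 - PRESS/SS_total. A minimal sketch with placeholder descriptors, using ordinary least squares in place of the partial least squares regression typically used in CoMFA/CoMSIA:

```python
# Sketch of cross-validated q^2 = 1 - PRESS / SS_total via leave-one-out.
# Descriptors and activities are random placeholders, not field data, and
# OLS stands in for the PLS regression usual in CoMFA/CoMSIA.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
X = rng.normal(size=(72, 6))                 # stand-in field descriptors
y = X @ rng.normal(size=6) + rng.normal(0, 0.5, 72)

press = 0.0
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], y[train])
    press += ((y[test] - model.predict(X[test])) ** 2).sum()

q2 = 1 - press / ((y - y.mean()) ** 2).sum()
print(f"q2 = {q2:.3f}")
```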
Ballentine, Mark L; Ariyarathna, Thivanka; Smith, Richard W; Cooper, Christopher; Vlahos, Penny; Fallis, Stephen; Groshens, Thomas J; Tobias, Craig
2016-06-01
Hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) is globally one of the most commonly used military explosives and a widespread environmental contaminant. (15)N-labeled RDX was added to a mesocosm containing 9 different coastal marine species in a time series experiment to quantify the uptake of RDX and assess the retention of RDX-derived (15)N in biota tissue. The (15)N attributed to munitions compounds reached steady-state concentrations ranging from 0.04 to 0.67 μg (15)N g dw(-1); the bulk (15)N tissue concentration for all species was 1-2 orders of magnitude higher, suggesting a common mechanism or pathway of RDX biotransformation and retention of (15)N. A toxicokinetic model was created that described the (15)N uptake, elimination, and transformation rates. While modeled uptake rates were within previously published values, elimination rates were several orders of magnitude smaller than in previous studies, ranging from 0.05 to 0.7 days(-1). These small elimination rates were offset by high rates of retention of (15)N not previously measured. Bioconcentration factors and related aqueous:organism ratios of compounds and tracer calculated using different tracer and non-tracer methods yielded a broad range of values (0.35-101.6 mL g(-1)) that were largely method dependent. Despite the method-derived variability, all values were generally low and consistent with little bioaccumulation potential. The use of (15)N-labeled RDX in this study indicates four possible explanations for the observed distribution of compounds and tracer, each with unique potential implications for possible toxicological impacts in the coastal marine environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
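A one-compartment caricature of such a toxicokinetic model is dC/dt = ku·Cw - ke·C, with uptake clearance ku and first-order elimination rate ke. The sketch below evaluates its analytic solution; the rate constants are placeholders, not the study's fitted values.

```python
# Minimal one-compartment toxicokinetic sketch: uptake from water at rate
# ku and first-order elimination at rate ke. Rate constants and water
# concentration are assumed placeholders, not fitted values.
import numpy as np

ku = 0.5    # uptake clearance, mL g^-1 day^-1 (assumed)
ke = 0.1    # elimination rate, day^-1 (assumed)
Cw = 1.0    # constant water concentration, ug mL^-1 (assumed)

t = np.linspace(0, 30, 301)
# Analytic solution with C(0) = 0: C(t) = (ku*Cw/ke) * (1 - exp(-ke*t))
C = (ku * Cw / ke) * (1 - np.exp(-ke * t))

bcf = ku / ke   # steady-state bioconcentration factor for this model
print(f"tissue level at day 30 = {C[-1]:.2f} ug/g, BCF = {bcf:.1f} mL/g")
```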
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture, because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
Perrone, Lorena; Grant, William B
2015-01-01
Considerable evidence indicates that diet is an important risk-modifying factor for Alzheimer's disease (AD). Evidence is also mounting that dietary advanced glycation end products (AGEs) are important risk factors for AD. This study strives to determine whether dietary AGEs estimated from national diets and epidemiological studies are associated with increased AD incidence. We estimated dietary AGE values using values from a published paper. We estimated intake of dietary AGEs from the Washington Heights-Inwood Community Aging Project (WHICAP) 1992 and 1999 cohort studies, which investigated how the Mediterranean diet (MeDi) affected AD incidence. Further, AD prevalence data came from three ecological studies and included data from 11 countries for 1977-1993, seven developing countries for 1995-2005, and Japan for 1985-2008. The analysis used dietary AGE values from 20 years before the AD prevalence data. Meat was always the food with the largest amount of AGEs. Other foods with significant AGEs included fish, cheese, vegetables, and vegetable oil. High MeDi adherence results in lower meat and dairy intake, which have high AGE content. By using two different models to extrapolate dietary AGE intake in the WHICAP 1992 and 1999 cohort studies, we showed that reduced dietary AGE intake significantly correlates with reduced AD incidence. For the ecological studies, estimates of dietary AGEs in the national diets corresponded well with AD prevalence data even though the cooking methods were not well known. Dietary AGEs appear to be important risk factors for AD.
76 FR 59013 - Airworthiness Directives; Rolls-Royce plc (RR) RB211-Trent 800 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 39 [Docket No. FAA-2010-0821; Directorate Identifier 2010-NE-30-AD; Amendment 39-16657; AD 2011-08-07] RIN 2120-AA64 Airworthiness Directives; Rolls-Royce plc (RR) RB211-Trent 800 Series Turbofan Engines AGENCY: Federal Aviation...
77 FR 10355 - Airworthiness Directives; Rolls-Royce plc (RR) RB211-Trent 800 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-22
... Airworthiness Directives; Rolls-Royce plc (RR) RB211-Trent 800 Series Turbofan Engines AGENCY: Federal Aviation... effective March 28, 2012. ADDRESSES: For service information identified in this AD, contact Rolls-Royce plc... Rolls-Royce plc: Amendment 39-16956; Docket No. FAA-2010- 0755; Directorate Identifier 2010-NE-12-AD. (a...
Project Physics Programmed Instruction, Vectors 2.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
This is the second of a series of three programmed instruction booklets on vectors developed by Harvard Project Physics. It covers adding two or more vectors together, and finding a third vector that could be added to two given vectors to make a sum of zero. For other booklets in this series, see SE 015 549 and SE 015 551. (DT)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-21
... Airworthiness Directives; Various Aircraft Equipped With Rotax Aircraft Engines 912 A Series Engines AGENCY... reports of cracks in the engine crankcase. Austro Control GmbH (ACG) addressed the problem by issuing AD... applicability of the new AD, when based on engines' serial numbers (s/n). On the other hand, applicability is...
Coutinho, Artur M N; Porto, Fábio H G; Zampieri, Poliana F; Otaduy, Maria C; Perroco, Tíbor R; Oliveira, Maira O; Nunes, Rafael F; Pinheiro, Toulouse Leusin; Bottino, Cassio M C; Leite, Claudia C; Buchpiguel, Carlos A
2015-01-01
Reduction of regional brain glucose metabolism (rBGM) measured by [18F]FDG-PET in the posterior cingulate cortex (PCC) has been associated with a higher conversion rate from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Magnetic resonance spectroscopy (MRS) is a potential biomarker that has disclosed NAA/mI reductions within the PCC in both MCI and AD. Studies investigating the relationships between the two modalities are scarce. The aim was to evaluate differences and possible correlations between the findings of rBGM and NAA/mI in the PCC of individuals with AD, individuals with MCI, and cognitively normal volunteers. Patients diagnosed with AD (N=32) or MCI (N=27) and cognitively normal older adults (CG, N=28) were submitted to [18F]FDG-PET and MRS to analyze the PCC. The two methods were compared and possible correlations between the modalities were investigated. The AD group exhibited rBGM reduction in the PCC when compared to the CG, but the MCI group did not. MRS revealed lower NAA/mI values in the AD group compared to the CG but not in the MCI group. A positive correlation between rBGM and NAA/mI in the PCC was found. NAA/mI reduction in the PCC differentiated AD patients from control subjects with an area under the ROC curve of 0.70, while [18F]FDG-PET yielded a value of 0.93. rBGM and NAA/mI in the PCC were positively correlated in patients with MCI and AD. [18F]FDG-PET had greater accuracy than MRS for discriminating AD patients from controls.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C → C^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N × N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory suggests at the same time a new mode of usage for these Krylov subspace methods that were observed to possess computational advantages over their common mode of usage.
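As a point of reference for these generalizations, classical power iteration for the dominant eigenpair can be sketched in a few lines; the matrix here is an arbitrary example.

```python
# Classical power iteration, the starting point the paper generalizes:
# repeated multiplication converges to the dominant eigenpair when one
# eigenvalue strictly dominates in modulus. Example matrix is arbitrary.
import numpy as np

def power_method(A, iters=200, seed=0):
    v = np.random.default_rng(seed).normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    lam = v @ (A @ v)          # Rayleigh quotient estimate
    return lam, v

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = power_method(A)
print(f"dominant eigenvalue ≈ {lam:.4f}")   # compare: np.linalg.eigvals(A)
```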
Quantification of the Relationship between Surrogate Fuel Structure and Performance
2012-07-31
...order to account for known deficiencies [18]. The frequencies are then used to calculate the zero-point energy (ZPE). In the G3 theory, HF/6-31G* was used...for the ZPE, and the new procedure is likely to be more reliable. Also, in contrast to previous G-series composite methods, the Hartree–Fock energy...The total energy is obtained by adding the previously calculated ZPE. Durant and Rohlfing [38] reported that B3LYP density functional methods provide...
2011-01-01
Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol. PMID:21205303
26 CFR 1.148-3 - General arbitrage rebate rules.
Code of Federal Regulations, 2013 CFR
2013-04-01
... at the end of any period is determined using the economic accrual method and equals the value of that..., when added to the future value, as of the computation date, of previous rebate payments made for the... any date, the rebate amount for an issue is the excess of the future value, as of that date, of all...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
This paper intends to reveal the ability of the linear interpolation method to predict missing values in solar radiation time series. A reliable dataset requires a complete observed time series: the absence or presence of radiation data alters the long-term variation of solar radiation measurement values, and such gaps increase the chance of biased outputs in modelling and validation. The completeness of the observed dataset is therefore significantly important for data analysis. Gaps in continuous, reliable solar radiation time series are widespread and have become a main problematic issue, yet only a limited number of studies have given full attention to estimating missing values in solar radiation datasets.
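For illustration, a minimal sketch of gap-filling by linear interpolation using pandas; the series values and timestamps are invented stand-ins for an observed solar radiation record.

```python
import numpy as np
import pandas as pd

# Hourly solar radiation series with gaps (NaN marks missing observations).
idx = pd.date_range("2020-01-01", periods=8, freq="h")
radiation = pd.Series([120.0, 135.0, np.nan, np.nan, 180.0, 175.0, np.nan, 150.0],
                      index=idx)

# Linear interpolation fills each gap along a straight line between
# the nearest observed neighbours.
filled = radiation.interpolate(method="linear")
print(filled)
```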
Ertekin-Taner, Nilüfer; Allen, Mariet; Fadale, Daniel; Scanlin, Leah; Younkin, Linda; Petersen, Ronald C; Graff-Radford, Neill; Younkin, Steven G
2004-04-01
Risk for late onset Alzheimer disease (LOAD) and plasma amyloid beta levels (Abeta42; encoded by APP), an intermediate phenotype for LOAD, show linkage to chromosome 10q. Several strong candidate genes (VR22, PLAU, IDE) lie within the 1-LOD support interval for linkage. Others have independently identified haplotypes in the chromosome 10q region harboring IDE that show highly significant association with intermediate AD phenotypes and with risk for AD. To pursue these associations, we analyzed the same haplotypes for association with plasma Abeta42 in 24 extended LOAD families and for association with LOAD in two independent case-control series. One series (MCR, 188 age-matched case-control pairs) did not show association (p=0.64) with the six haplotypes in the 276-kb region spanning three genes (IDE, KNSL1, and HHEX) previously shown to associate with LOAD. The other series (MCJ, 109 age-matched case-control pairs) showed significant (p=0.003) association with these haplotypes. In the MCJ series, the H4 (odds ratio [OR]=5.1, p=0.003) and H2(H7) (OR=0.60, p=0.04) haplotypes had the same effects previously reported. In this series, the H8 haplotype (OR=2.7, p=0.098) also had an effect similar to that in one previous case-control series but not in others. In the extended families, the H8 haplotype was associated with significantly elevated plasma Abeta42 (p=0.02). In addition, the H5(H10) haplotype, which was associated with reduced risk for AD in the other study, was associated with reduced plasma Abeta42 (p=0.007) in our family series. These results provide strong evidence for pathogenic variant(s) in the 276-kb region harboring IDE that influence intermediate AD phenotypes and risk for AD. Copyright 2004 Wiley-Liss, Inc.
Morales, Inelia; Guzmán-Martínez, Leonardo; Cerda-Troncoso, Cristóbal; Farías, Gonzalo A.; Maccioni, Ricardo B.
2014-01-01
Alzheimer disease (AD) is the most common cause of dementia in people over 60 years old. The molecular and cellular alterations that trigger this disease are still unclear, which is one of the reasons for the delay in finding an effective treatment. In the search for new targets and novel therapeutic avenues, clinical studies of patients who used anti-inflammatory drugs, indicating a lower incidence of AD, have been of value in supporting the neuroinflammatory hypothesis of the neurodegenerative processes and the role of innate immunity in this disease. Neuroinflammation appears to occur as a consequence of a series of damage signals, including trauma, infection, oxidative agents, redox iron, and oligomers of τ and β-amyloid. In this context, our theory of neuroimmunomodulation focuses on the link between neuronal damage and the brain inflammatory process, mediated by the progressive activation of astrocytes and microglial cells with the consequent overproduction of proinflammatory agents. Here, we discuss the role of microglial and astrocytic cells, the principal agents in the neuroinflammatory process, in the development of neurodegenerative diseases such as AD. In this context, we also evaluate the potential relevance of natural anti-inflammatory components, including curcumin and the novel Andean Compound, as agents for AD prevention and as coadjuvants for AD treatments. PMID:24795567
Model-based Clustering of Categorical Time Series with Multinomial Logit Classification
NASA Astrophysics Data System (ADS)
Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea
2010-09-01
A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we extend the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme, a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented, which leads to an interesting segmentation of the Austrian labour market.
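The paper estimates this mixture within a Bayesian MCMC framework; as a compact illustration of the same mixture-of-Markov-chains structure, the following sketch uses EM (a maximum-likelihood stand-in, not the authors' Gibbs sampler), with all settings illustrative.

```python
import numpy as np

def em_markov_mixture(seqs, n_states, n_clusters, n_iter=100, seed=0):
    """EM for a finite mixture of first-order Markov chains (ML stand-in
    for the Bayesian MCMC estimation used in the paper)."""
    rng = np.random.default_rng(seed)
    seqs = [np.asarray(s) for s in seqs]
    w = np.full(n_clusters, 1.0 / n_clusters)            # mixture weights
    P = rng.dirichlet(np.ones(n_states), size=(n_clusters, n_states))
    for _ in range(n_iter):
        # E-step: posterior cluster probabilities for each sequence.
        loglik = np.array([[np.log(P[k, s[:-1], s[1:]]).sum()
                            for k in range(n_clusters)] for s in seqs])
        logpost = np.log(w) + loglik
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update weights and transition matrices from weighted counts.
        w = post.mean(axis=0)
        for k in range(n_clusters):
            C = np.full((n_states, n_states), 1e-6)      # small smoothing
            for s, r in zip(seqs, post[:, k]):
                np.add.at(C, (s[:-1], s[1:]), r)
            P[k] = C / C.sum(axis=1, keepdims=True)
    return w, P, post

# Usage on synthetic data: 40 sequences over 3 states, 2 clusters.
rng = np.random.default_rng(1)
w, P, post = em_markov_mixture([rng.integers(0, 3, 60) for _ in range(40)],
                               n_states=3, n_clusters=2)
```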
Evaluation of algorithms for geological thermal-inertia mapping
NASA Technical Reports Server (NTRS)
Miller, S. H.; Watson, K.
1977-01-01
The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).
Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances
NASA Astrophysics Data System (ADS)
Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.
2018-04-01
Outliers often lurk in many datasets, especially real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects, so handling the occurrence of outliers requires special attention. It is therefore important to determine suitable ways of treating outliers to ensure that the quality of the analyzed data is high. This paper discusses an alternative method of treating outliers via linear interpolation. Treating an outlier as a missing value in the dataset allows the interpolation method to be applied to the outliers, enabling comparison of forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the improved time series produced by the linear interpolation method yielded better forecasts than the original time series data under both Box-Jenkins and neural network approaches.
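A minimal sketch of the treat-outliers-as-missing idea: flag anomalous points (here by a simple z-score rule, which the abstract does not specify and is assumed for illustration), mark them as NaN, and linearly interpolate.

```python
import pandas as pd

def treat_outliers(series, z_thresh=2.0):
    """Flag outliers by z-score (threshold assumed for illustration), mark
    them as missing, then fill the gaps by linear interpolation."""
    z = (series - series.mean()) / series.std()
    return series.mask(z.abs() > z_thresh).interpolate(method="linear")

idx = pd.date_range("1998-01-01", periods=6, freq="MS")
arrivals = pd.Series([450.0, 470.0, 2500.0, 480.0, 495.0, 510.0], index=idx)
print(treat_outliers(arrivals))   # 2500 is replaced by the 470-480 midpoint, 475
```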
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
NASA Astrophysics Data System (ADS)
Lychak, Oleh V.; Holyns'kiy, Ivan S.
2016-03-01
The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of the random errors of the Williams' series parameters obtained from measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, so that the parameters are derived with minimal errors, is also proposed. The method was applied to the evaluation of the Williams' parameters obtained from data measured by the digital image correlation technique in a three-point bending specimen test.
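The abstract does not give the estimator, but for an ordinary linear least-squares fit of a truncated series the standard deviations of the coefficients follow from the residual variance and the design matrix; a generic sketch, assuming i.i.d. Gaussian noise:

```python
import numpy as np

def fit_truncated_series(X, y):
    """Linear least squares y ~ X @ a for a truncated series.
    Columns of X are the basis terms evaluated at the measurement points;
    returns coefficients and their standard deviations."""
    a, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = max(X.shape[0] - rank, 1)
    sigma2 = np.sum((y - X @ a) ** 2) / dof          # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)            # parameter covariance
    return a, np.sqrt(np.diag(cov))

# To pick a truncation order, refit with a growing number of columns and
# compare the returned coefficient standard deviations.
```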
GRRATS: A New Approach to Inland Altimetry Processing for Major World Rivers
NASA Astrophysics Data System (ADS)
Coss, S. P.
2016-12-01
Here we present work-in-progress results aimed at generating a new radar altimetry dataset, GRRATS (Global River Radar Altimetry Time Series), extracted over global ocean-draining rivers wider than 900 m. GRRATS was developed as a component of the NASA MEaSUREs project (PI: Dennis Lettenmaier, UCLA) to generate pre-SWOT data products for decadal or longer global river elevation changes from multi-mission satellite radar altimetry data. The dataset at present includes 909 time series from 39 rivers. A new method of filtering VS (virtual station) height time series is presented, in which DEM-based heights are used to establish limits for the ice1-retracked Jason-2 and Envisat heights. While GRRATS follows in the footsteps of several predecessors, it contributes to one of the critical climate data records by generating validated and comprehensive hydrologic observations of river height. The current data product includes VSs in North and South America, Africa, and Eurasia, with the most comprehensive set of Jason-2 and Envisat time series available for North America and Eurasia. We present a semi-automated procedure to process returns from river locations, identified with Landsat images and an updated water mask extent. Consistent methodologies for flagging ice cover are presented. DEM heights used in height filtering were retained and can be used as river height profiles. All non-validated VSs have been assigned a letter grade A-D to aid end users in data selection. Validated VSs are accompanied by a suite of fit statistics. Due to the inclusiveness of the dataset, not all VSs could undergo validation (415 of 909), but those that did demonstrate that confidence in the data product is warranted. Validation was accomplished using records from 45 in situ gauges on 12 rivers. Meta-analysis was performed to compare each gauge with each VS by relative height. Preliminary validation results are as follows: 89.3% of the data have positive Nash-Sutcliffe efficiency (NSE) values, the median NSE value is 0.73, and the median standard deviation of error (STDE) is 0.92 m. GRRATS will soon be publicly available in NetCDF format with CF-compliant metadata.
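Nash-Sutcliffe efficiency, the validation score quoted above, can be computed as follows; the gauge and altimetry values are invented for illustration.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

gauge = [2.1, 2.4, 3.0, 2.8, 2.2]     # in situ stage heights (m), illustrative
altim = [2.0, 2.5, 2.9, 2.9, 2.3]     # virtual-station altimetry heights (m)
print(nash_sutcliffe(gauge, altim))   # ~0.92
```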
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; ...
2017-01-20
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.
Lignin from Micro- to Nanosize: Production Methods
Beisl, Stefan; Miltner, Angela; Friedl, Anton
2017-01-01
Lignin is the second most abundant biopolymer after cellulose. It has long been obtained as a by-product of cellulose production in pulp and paper making, but has had rather low added-value applications. A changing paper market and the emergence of biorefinery projects should generate vast amounts of lignin with the potential for value addition. Nanomaterials offer unique properties, and the preparation of lignin nanoparticles and other nanostructures has therefore gained interest as a promising route to value-added lignin products. Due to lignin's high structural and chemical heterogeneity, formation methods must be adapted to the different lignin types. This review focuses on the ability of different formation methods to cope with the huge variety of lignin types and points out which particle characteristics can be achieved by which method. The current research's main focus is on pH- and solvent-shifting methods, the latter of which can yield solid and hollow particles. Solvent shifting has also shown the capability to cope with different lignin types and with different solvents and antisolvents. However, process conditions have to be adapted to each type of lignin, and reducing solvent demand and integrating the process into a biorefinery chain remain key challenges. PMID:28604584
Methods to recover value-added co-products from dry grind processing of grains into fuel ethanol
USDA-ARS?s Scientific Manuscript database
Three methods were described to fractionate condensed distillers solubles (CDS) into several new co-products, including a protein-mineral fraction and a glycerol fraction by a chemical method; a protein fraction, an oil fraction and a glycerol-mineral fraction by a physical method; or a protein frac...
What's the Value in Value-Added?
ERIC Educational Resources Information Center
Duffrin, Elizabeth
2011-01-01
A growing number of school districts are adopting "value-added" measures of teaching quality to award bonuses or even tenure. And two competitive federal grants are spurring them on. Districts using value-added data are encouraged by the results. But researchers who support value-added measures advise caution. The ratings, which use a…
Preparation and Enhanced Thermoelectric Performance of Cu2Se-SnSe Composite Materials
NASA Astrophysics Data System (ADS)
Peng, Zhi; He, Danqi; Mu, Xin; Zhou, Hongyu; Li, Cuncheng; Ma, Shifang; Ji, Pengxia; Hou, Weikang; Wei, Ping; Zhu, Wanting; Nie, Xiaolei; Zhao, Wenyu
2018-03-01
A series of p-type xCu2Se-SnSe (x = 0%, 0.10%, 0.15%, 0.20%, and 0.25%) composite thermoelectric materials has been prepared by a combination of ultrasonic dispersion and spark plasma sintering. The effects of the secondary phase Cu2Se on the phase composition, microstructure, and thermoelectric properties of the composites were investigated. Microstructure characterization and elemental maps indicated Cu2Se grains uniformly distributed on the boundaries of the matrix. Transport measurements demonstrated that enhancement of the power factor and reduction of the thermal conductivity can be realized simultaneously by optimizing the amount of Cu2Se added. The highest ZT value, 0.51 at 773 K, was achieved for the sample with x = 0.15%, an increase of 24% over that of the SnSe matrix. These results demonstrate that optimizing the Cu2Se content can improve the thermoelectric performance of p-type SnSe polycrystalline materials.
Value added medicines: what value repurposed medicines might bring to society?
Toumi, Mondher; Rémuzat, Cécile
2017-01-01
Background & objectives: Despite the wide interest surrounding drug repurposing, no common terminology has yet been agreed for these products, and their full potential value is not always recognised and rewarded, creating a disincentive for further development. The objectives of the present study were to assess from a wide perspective the value drug repurposing might bring to society, to identify key obstacles to adoption of these medicines, and to discuss policy recommendations. Methods: A preliminary comprehensive search was conducted to assess how the concept of drug repurposing is described in the literature. Following completion of the literature review, primary research was conducted to obtain the perspectives of various stakeholders across EU member states on drug repurposing (healthcare professionals, regulatory authorities and Health Technology Assessment (HTA) bodies/payers, patients, and representatives of the pharmaceutical industry developing medicines in this field). An ad hoc literature review was performed to illustrate, where appropriate, the statements of the various stakeholders. Results: Various nomenclatures have been used to describe the concept of drug repurposing in the literature, with more or less broad definitions based on outcomes, on processes, or on a mix of both. In this context, Medicines for Europe (http://www.medicinesforeurope.com/value-added-medicines/) established a single terminology for these medicines, known as value added medicines, defined as 'medicines based on known molecules that address healthcare needs and deliver relevant improvements for patients, healthcare professionals and/or payers'. Stakeholder interviews highlighted three main potential benefits of value added medicines: (1) addressing a number of medicine-related healthcare inefficiencies related to irrational use of medicines, non-availability of appropriate treatment options, shortage of mature products, and geographical inequity in medicine access; (2) improving healthcare system efficiency; and (3) contributing to the sustainability of healthcare systems through economic advantages. The current HTA framework, generic stigma, and pricing rules, such as internal reference pricing or tendering processes in place in some countries, were reported as the key hurdles preventing full recognition of value added medicines' benefits, discouraging manufacturers from bringing such products to the market. Discussion & conclusions: There is currently a gap between increasing regulatory authority interest in capturing value added medicines' benefits and the resistance of HTA bodies/payers, who tend to ignore this important segment of the pharmaceutical field. This situation calls for policy changes to foster appropriate incentives, enhance value recognition of value added medicines, and deliver the expected benefit to society. Policy changes from the HTA perspective should include: absence of any legislative barriers preventing companies from pursuing HTA; HTA requirements proportionate to potential reward; an HTA decision-making framework taking into account the specific characteristics of value added medicines; and eligibility for early HTA dialogues. Policy changes from the pricing perspective should encompass: tender/procurement policies allowing differentiation from generic medicines; eligibility for early entry agreements; non-systematic implementation of external and internal reference pricing policies; and recognition of indication-specific pricing.
At the same time, the pharmaceutical industry should engage all the stakeholders (patients, healthcare providers, HTA bodies/payers) in early dialogues to identify their expectations and to ensure the developed value added medicines address their needs. PMID:28265347
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-27
...-1011 series airplanes. AD 2005-15-01 required repetitive inspections to detect corrosion or fatigue... threshold required by the AD 2005-15-01. We are issuing this AD to prevent corrosion or fatigue cracking of... threshold required by AD 2005-15-01. We are issuing this AD to prevent corrosion or fatigue cracking of...
75 FR 15321 - Airworthiness Directives; Rolls-Royce plc RB211-Trent 800 Series Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-29
... Directives; Rolls-Royce plc RB211-Trent 800 Series Turbofan Engines AGENCY: Federal Aviation Administration... Rolls-Royce plc: Amendment 39-16239. Docket No. FAA-2009- 1004; Directorate Identifier 2009-NE-36-AD.... Applicability (c) This AD applies to Rolls-Royce plc models RB211-Trent 875- 17, Trent 877-17, Trent 884-17...
77 FR 15939 - Airworthiness Directives; Pratt & Whitney Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-19
... Airworthiness Directives; Pratt & Whitney Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... & Whitney (PW) JT9D series turbofan engines. That AD currently requires revisions to the Airworthiness..., -7R4E, - 7R4E1, -7R4E4, -7R4G2, and -7R4H1 series turbofan engines. (d) Unsafe Condition This AD results...
Tree-ring-width-based PDSI reconstruction for central Inner Mongolia, China over the past 333 years
NASA Astrophysics Data System (ADS)
Liu, Yu; Zhang, Xinjia; Song, Huiming; Cai, Qiufang; Li, Qiang; Zhao, Boyang; Liu, Han; Mei, Ruochen
2017-02-01
A tree-ring-width chronology was developed from Pinus tabulaeformis aged up to 333 years from central Inner Mongolia, China. The chronology was significantly correlated with the local Palmer Drought Severity Index (PDSI). We therefore developed the first reconstruction of March-June PDSI for the region based on the local tree-ring data, covering 1680 to 2012 AD. The reconstruction explained 40.7% of the variance (39.7% after adjusting for degrees of freedom) in the actual PDSI during the calibration period (1951-2012 AD). The reconstructed PDSI series captured the severe drought event of the late 1920s, which occurred extensively across northern China. Running variance analyses indicated that drought variability increased sharply after 1960, indicating more drought years, which may reflect anthropogenic global warming effects in the region. The entire reconstruction contains five dry periods: 1730-1814 AD, 1849-1869 AD, 1886-1942 AD (including the severe drought of the late 1920s), 1963-1978 AD and 2004-2007 AD; and five wet periods: 1685-1729 AD, 1815-1848 AD, 1870-1885 AD, 1943-1962 AD and 1979-2003 AD. Conditions turned dry after 2003 AD, and the March-June PDSI (PDSI36) captured many interannual extreme drought events since then, such as 2005-2008 AD. The reconstruction is comparable to other tree-ring-width-based PDSI series from neighboring regions, indicating that it has good regional representativeness. Significant relationships were found between our PDSI reconstruction and the solar radiation cycle, the sunspot cycle, the North Atlantic Oscillation, the El Niño-Southern Oscillation, and the Pacific Decadal Oscillation. Power spectral analyses detected 147.0-, 128.2-, 46.5-, 6.5-, 6.3-, 2.6-, 2.2- and 2.0-year quasi-cycles in the reconstructed series.
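A sketch of the kind of linear calibration behind such a reconstruction: regress observed PDSI on the ring-width index over the overlap period and report the explained variance. All numbers are invented for illustration; this is not the authors' data.

```python
import numpy as np

# Illustrative calibration of a tree-ring-width chronology against observed
# March-June PDSI over an overlap period (values are made up).
ring_index = np.array([0.82, 1.10, 0.95, 1.30, 0.70, 1.05, 0.88, 1.21])
pdsi_obs   = np.array([-1.2, 0.8, -0.3, 1.9, -2.1, 0.5, -0.8, 1.4])

slope, intercept = np.polyfit(ring_index, pdsi_obs, 1)
pdsi_fit = slope * ring_index + intercept

# Explained variance (R^2) of the linear transfer function.
r2 = 1 - np.sum((pdsi_obs - pdsi_fit) ** 2) / np.sum((pdsi_obs - pdsi_obs.mean()) ** 2)
print(round(r2, 3))
```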
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
78 FR 79333 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
...We propose to supersede airworthiness directive (AD) 2000-12-12, for certain Airbus Model A300, A300-600, and A310 series airplanes. AD 2000-12-12 currently requires inspecting to detect cracks in the lower spar axis of the nacelle pylon between ribs 9 and 10, and repair if necessary. AD 2000-12-12 also provides for optional modification of the pylon, which terminates the inspections for Model A300 series airplanes. Since we issued AD 2000-12-12, we have received reports of cracking of the lower pylon spar after accomplishing the existing modification and have determined that shorter initial and repetitive inspection compliance times are necessary to address the identified unsafe condition. This proposed AD would reduce the initial and repetitive inspection compliance times. We are proposing this AD to detect and correct fatigue cracking, which could result in reduced structural integrity of the lower spar of the nacelle pylon.
Relations between elliptic multiple zeta values and a special derivation algebra
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Matthes, Nils; Schlotterer, Oliver
2016-04-01
We investigate relations between elliptic multiple zeta values (eMZVs) and describe a method to derive the number of indecomposable elements of given weight and length. Our method is based on representing eMZVs as iterated integrals over Eisenstein series and exploiting the connection with a special derivation algebra. Its commutator relations give rise to constraints on the iterated integrals over Eisenstein series relevant for eMZVs and thereby allow us to count the indecomposable representatives. Conversely, the above connection suggests apparently new relations in the derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations for eMZVs over a wide range of weights and lengths.
Fractal dynamics of heartbeat time series of young persons with metabolic syndrome
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Alonso-Martínez, A.; Ramírez-Hernández, L.; Martínez-Hernández, G.
2012-10-01
In recent years, many physiological systems have been quantitatively characterized using fractal analysis. We applied it to study the heart rate variability of young subjects with metabolic syndrome (MS): we examined the RR time series (the time between two R waves in the ECG) with the detrended fluctuation analysis (DFA) method, Higuchi's fractal dimension method, and multifractal analysis to detect the possible presence of heart problems. The results show that although these young persons have MS, the majority do not present alterations in heart dynamics. However, there were cases where the fractal parameter values differed significantly from those of healthy people.
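A compact sketch of the DFA method mentioned above, computing the scaling exponent from the log-log slope of fluctuation versus window size; the scale list and the surrogate RR series are illustrative.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: scaling exponent alpha from a
    log-log fit of fluctuation F(n) versus window size n."""
    y = np.cumsum(x - np.mean(x))            # integrated (profile) series
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rr = np.random.default_rng(1).standard_normal(4096)  # surrogate RR series
print(dfa_alpha(rr))   # ~0.5 for uncorrelated noise
```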
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-01
... determine that a series has become active intraday if (i) The series trades at any options exchange; (ii... customer in that series. If a series becomes active intraday, the Exchange will immediately disseminate...
Computation of iodine species concentrations in water
NASA Technical Reports Server (NTRS)
Schultz, John R.; Mudgett, Paul D.; Flanagan, David T.; Sauer, Richard L.
1994-01-01
During an evaluation of the use of iodine as a water disinfectant and the development of methods for measuring various iodine species in water onboard Space Station Freedom, it became necessary to compute the concentrations of the various species based on equilibrium principles alone. Of particular concern was the case when various amounts of iodine, iodide, strong acid, and strong base are added to water. Such solutions can be used to evaluate the performance of the monitoring methods being considered. The authors present an overview of aqueous iodine chemistry, a set of nonlinear equations that model the above case, and a computer program for solving this system of equations using the Newton-Raphson method. The program was validated by comparing results over a range of concentrations and pH values with those previously presented by Gottardi for a given pH. Use of this program indicated that many cases have multiple roots, so selecting an appropriate initial guess is important. Comparison of program results with laboratory results for the case when only iodine is added to water indicates the program gives high pH values for the iodine concentrations normally used for water disinfection. Extending the model to include the effects of iodate formation brings the computed pH values closer to those observed, but the model with iodate does not agree well for the case in which base is added in addition to iodine to raise the pH. Potential explanations include failure to reach equilibrium conditions in the lab, inaccuracies in published values for the equilibrium constants, an inadequate model of iodine chemistry, and/or the lack of adequate analytical methods for measuring the various iodine species in water.
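A sketch of the equilibrium computation for the simplest case (only I2 added to pure water), using SciPy's hybrid Newton solver in place of a hand-rolled Newton-Raphson. The system omits triiodide and iodate, the equilibrium constants are illustrative textbook-order values rather than the ones used in the paper, and the equations are crudely rescaled to comparable magnitudes for the solver.

```python
import numpy as np
from scipy.optimize import fsolve

Kh = 5.4e-13      # I2 + H2O <=> HOI + H+ + I-   (illustrative, 25 C)
Kw = 1.0e-14      # H2O <=> H+ + OH-
C  = 2.0e-4       # total iodine added as I2, mol/L (~50 mg/L)

def equations(v):
    i2, iodide, hoi, h = v
    return [
        (hoi * h * iodide - Kh * i2) / (Kh * C),   # hydrolysis equilibrium
        (2 * i2 + iodide + hoi) / (2 * C) - 1.0,   # iodine atom balance
        (h - Kw / h - iodide) * 1e6,               # charge balance (HOI neutral)
        (hoi - iodide) * 1e6,                      # hydrolysis stoichiometry
    ]

i2, iodide, hoi, h = fsolve(equations, [C, 5e-6, 5e-6, 5e-6])
print(f"pH = {-np.log10(h):.2f}, [I2] = {i2:.3e} M")   # slightly acidic, pH ~5.3
```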
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for quantitative periodicity analysis of non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the sunspot number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
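A sketch of iterative sine extraction in the spirit of the method: fit the dominant sine by least squares, subtract it, and repeat on the residual. This stand-in works on the raw series rather than on principal components, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase):
    return amp * np.sin(2 * np.pi * freq * t + phase)

def extract_sines(t, y, n_components):
    """Iteratively fit and subtract the dominant sine, most significant first."""
    resid, found = y - np.mean(y), []
    for _ in range(n_components):
        # FFT peak gives the starting frequency; least squares refines all three.
        freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
        spec = np.abs(np.fft.rfft(resid))
        f0 = freqs[1:][np.argmax(spec[1:])]
        p, _ = curve_fit(sine, t, resid, p0=[np.sqrt(2) * resid.std(), f0, 0.0])
        found.append(p)
        resid = resid - sine(t, *p)
    return found, resid

t = np.arange(255.0)   # one sample per year, illustrative of the 1750-2004 span
rng = np.random.default_rng(2)
y = (60 * np.sin(2 * np.pi * t / 11 + 0.3) + 15 * np.sin(2 * np.pi * t / 90)
     + rng.normal(0, 5, t.size))
for amp, freq, phase in extract_sines(t, y, 2)[0]:
    print(f"period ~ {1/freq:.1f} yr, amplitude ~ {abs(amp):.1f}")
```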
2012-10-29
modules and their interfaces are designed such that the ship can be made longer by adding an additional mid section. Radio Frequency Identification (RFID): This established technology is being considered for use to... A Taxonomy of Methods for Valuing Flexibility...
10 CFR 430.24 - Units to be tested.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the method includes an ARM/simulation adjustment factor(s), determine the value(s) of the factors(s... process. (v) If request for approval is for an updated ARM, manufacturers must identify modifications made to the ARM since the last submittal, including any ARM/simulation adjustment factor(s) added since...
Analysis of alterations in white matter integrity of adult patients with comitant exotropia.
Li, Dan; Li, Shenghong; Zeng, Xianjun
2018-05-01
Objective: This study was performed to investigate structural abnormalities of the white matter in patients with comitant exotropia using the tract-based spatial statistics (TBSS) method. Methods: Diffusion tensor imaging data from magnetic resonance images of the brain were collected from 20 patients with comitant exotropia and 20 age- and sex-matched healthy controls. The FMRIB Software Library was used to compute the diffusion measures, including fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD). These measures were obtained using voxel-wise statistics with threshold-free cluster enhancement. Results: The FA values in the right inferior fronto-occipital fasciculus (IFO) and right inferior longitudinal fasciculus were significantly higher and the RD values in the bilateral IFO, forceps minor, left anterior corona radiata, and left anterior thalamic radiation were significantly lower in the comitant exotropia group than in the healthy controls. No significant differences in the MD or AD values were found between the two groups. Conclusions: Alterations in FA and RD values may indicate the underlying neuropathologic mechanism of comitant exotropia. The TBSS method can be a useful tool to investigate neuronal tract participation in patients with this disease.
Yuan, Xing
2016-06-22
This is the second paper of a two-part series introducing an experimental seasonal hydrological forecasting system over the Yellow River basin in northern China. While the natural hydrological predictability in terms of initial hydrological conditions (ICs) is investigated in a companion paper, the added value from eight North American Multimodel Ensemble (NMME) climate forecast models, with a grand ensemble of 99 members, is assessed in this paper, with an implicit consideration of human-induced uncertainty in the hydrological models through a post-processing procedure. The forecast skill in terms of anomaly correlation (AC) for 2 m air temperature and precipitation does not necessarily decrease over leads but is dependent on the target month, due to a strong seasonality of the climate over the Yellow River basin. As there is more diversity in model performance for the temperature forecasts than for the precipitation forecasts, the grand NMME ensemble mean forecast has consistently higher skill than the best single model up to 6 months for temperature but only up to 2 months for precipitation. The NMME climate predictions are downscaled to drive the variable infiltration capacity (VIC) land surface hydrological model and a global routing model regionalized over the Yellow River basin to produce forecasts of soil moisture, runoff and streamflow. The NMME/VIC forecasts are compared with the Ensemble Streamflow Prediction method (ESP/VIC) through 6-month hindcast experiments for each calendar month during 1982-2010. As verified against the VIC offline simulations, NMME/VIC is comparable to ESP/VIC for the soil moisture forecasts, and the former has higher skill than the latter only for forecasts at long leads and for those initialized in the rainy season. The forecast skill for runoff is lower for both forecast approaches, but the added value from NMME/VIC is more obvious, with an increase of the average AC by 0.08-0.2. To compare with the observed streamflow, the hindcasts from both NMME/VIC and ESP/VIC are post-processed through a linear regression model fitted using VIC offline-simulated streamflow. The post-processed NMME/VIC reduces the root mean squared error (RMSE) of the post-processed ESP/VIC by 5-15%, with the reduction occurring mostly during the transition from wet to dry seasons. As a result, when the uncertainty in the hydrological models is considered, the added value from climate forecast models decreases, especially at short leads, suggesting the necessity of improving large-scale hydrological models in human-intervened river basins.
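The anomaly correlation skill measure used above is simply the Pearson correlation of forecast and observed anomalies about a common climatology; a minimal sketch with invented values:

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation (AC): correlation of forecast and observed
    anomalies relative to a common climatology."""
    fa = np.asarray(forecast) - np.asarray(climatology)
    oa = np.asarray(observed) - np.asarray(climatology)
    return np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))

clim = np.array([10.0, 14.0, 19.0, 23.0, 26.0])  # monthly T2m climatology, illustrative
obs  = np.array([9.2, 14.8, 19.5, 24.1, 25.3])
fcst = np.array([9.6, 14.2, 20.0, 23.8, 25.9])
print(anomaly_correlation(fcst, obs, clim))      # ~0.79
```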
School system evaluation by value added analysis under endogeneity.
Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien
2014-01-01
Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression for the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large dataset of individual scores of about 200,000 students in Chile.
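A toy simulation of the instrumental-variable (two-stage least squares) idea: when a covariate is correlated with the random effect, OLS is biased, but 2SLS with a valid instrument recovers the true coefficient. The data-generating numbers are illustrative, not the paper's model or its Chilean data.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: project the regressors X on the instruments Z (stage 1),
    then regress y on the fitted values (stage 2)."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

rng = np.random.default_rng(3)
n = 5000
z = rng.standard_normal(n)                        # instrument: drives x, not u
u = rng.standard_normal(n)                        # unobserved school-level effect
x = 0.8 * z + 0.6 * u + rng.standard_normal(n)    # endogenous prior score
y = 1.5 * x + u + rng.standard_normal(n)          # outcome; true effect is 1.5

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0][1])   # biased upward (~1.8)
print("2SLS:", two_stage_least_squares(y, X, Z)[1])       # ~1.5
```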
Comparison of time-series registration methods in breast dynamic infrared imaging
NASA Astrophysics Data System (ADS)
Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.
2015-03-01
Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, originating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for breast dynamic infrared imaging and to make it user-independent. We implemented and evaluated three different 3D time-series registration methods: (1) linear affine, (2) non-linear B-spline, and (3) Demons, applied to 12 datasets of healthy breast thermal images. The results were evaluated through normalized mutual information, with average values of 0.70 ±0.03, 0.74 ±0.03 and 0.81 ±0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as through breast boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis of the results showed that the symmetric diffeomorphic Demons registration method outperforms the others, with the best breast alignment and non-negative Jacobian values, which guarantee image similarity and anatomical consistency of the transformation; its homologous forces shorten the pixel geometric disparities across all frames. We propose Demons registration as an effective technique for dynamic infrared time-series registration, to stabilize the local temperature oscillation.
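One common variant of normalized mutual information, the similarity measure used in the evaluation above, can be computed from a joint histogram; the images here are random stand-ins, and the binning choice is illustrative.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """One common NMI variant, 2*I(A;B)/(H(A)+H(B)): equals 1 for identical
    images and approaches 0 for independent ones."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))   # joint entropy
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return 2 * (hx + hy - hxy) / (hx + hy)

img = np.random.default_rng(4).random((128, 128))    # stand-in thermal frame
shifted = np.roll(img, 3, axis=0)                    # misaligned copy
print(normalized_mutual_information(img, img))       # 1.0
print(normalized_mutual_information(img, shifted))   # lower
```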
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-01
... states, in part, ``The Exchange may determine that a series has become active intraday if (i) the series... receives a request for quote from a customer in that series. If a series becomes active intraday, the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
KL Gaustad; DD Turner
2007-09-30
This report provides a short description of the Atmospheric Radiation Measurement (ARM) microwave radiometer (MWR) RETrieval (MWRRET) Value-Added Product (VAP) algorithm. This algorithm uses complementary physical and statistical retrieval methods and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
Identification and modification of dominant noise sources in diesel engines
NASA Astrophysics Data System (ADS)
Hayward, Michael D.
Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
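A sketch of the virtual-source step only: build the cross-spectral density matrix of the input channels and take its (Hermitian) eigendecomposition per frequency bin, whose eigenvalues play the role of the virtual source power spectral densities (for a Hermitian positive semi-definite CSD matrix, the singular values coincide with the eigenvalues). This omits the transfer-path and contribution-plotting machinery; signals and settings are synthetic.

```python
import numpy as np
from scipy.signal import csd

def virtual_sources(signals, fs, nperseg=256):
    """Eigen-decompose the cross-spectral density matrix of the near-field
    inputs at each frequency; eigenvalues are virtual-source PSDs."""
    n = signals.shape[0]
    f, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
    S = np.zeros((len(f), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, S[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=nperseg)
    # Hermitian eigendecomposition per frequency bin, sorted descending.
    return f, np.linalg.eigvalsh(S)[:, ::-1]

rng = np.random.default_rng(5)
src = rng.standard_normal(8192)                       # one true source
mics = np.vstack([src + 0.1 * rng.standard_normal(8192) for _ in range(3)])
f, vals = virtual_sources(mics, fs=1024.0)
print(vals[10])   # first eigenvalue dominates: effectively one source
```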
Xavier, Amália Luísa Pedrosa; Adarme, Oscar Fernando Herrera; Furtado, Laís Milagres; Ferreira, Gabriel Max Dias; da Silva, Luis Henrique Mendes; Gil, Laurent Frédéric; Gurgel, Leandro Vinícius Alves
2018-04-15
In the second part of this series of studies, the monocomponent adsorption of Cu2+, Co2+ and Ni2+ onto the STA adsorbent in a fixed-bed column was investigated and optimized using a 2² central composite design. The process variables studied were initial metal ion concentration and spatial time, and the optimized responses were adsorption capacity of the bed (Qmax), efficiency of the adsorption process (EAP), and effective use of the bed (H). The highest Qmax values for Cu2+, Co2+ and Ni2+ were 1.060, 0.800 and 1.029 mmol/g, respectively. The breakthrough curves were modeled by the original Thomas and Bohart-Adams models. The enthalpy changes (ΔadsH°) of adsorption of the metal ions onto STA were determined by isothermal titration calorimetry (ITC). The values of ΔadsH° were in the range of 3.0-6.8 kJ/mol, suggesting that the adsorption process involved physisorption. Desorption (Edes) and re-adsorption (Ere-ads) of metal ions from the STA adsorbent were also investigated in batch mode, and the optimum conditions were applied for three cycles of adsorption/desorption in a fixed-bed column. For these cycles, the lowest values of Edes and Ere-ads were 95 and 92.3%, respectively, showing that STA is a promising candidate for real applications on a large scale. Copyright © 2018 Elsevier Inc. All rights reserved.
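A sketch of fitting the Thomas model to a breakthrough curve by nonlinear least squares; the data points and run conditions (C0, m, Q) are invented for illustration, and the parametrization is the standard textbook form, not necessarily the exact one used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

C0, m, Q = 1.0, 1.0, 0.01   # feed conc. [mmol/L], adsorbent mass [g], flow [L/min]

def thomas(t, k_th, q0):
    """Thomas model for the effluent breakthrough curve C/C0(t)."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

# Illustrative breakthrough data: time [min] vs effluent C/C0.
t = np.array([10, 30, 60, 90, 120, 150, 180, 240], float)
c_ratio = np.array([0.02, 0.05, 0.15, 0.35, 0.58, 0.76, 0.88, 0.97])

(k_th, q0), _ = curve_fit(thomas, t, c_ratio, p0=[0.02, 1.0])
print(f"k_Th ~ {k_th:.3f} L/(mmol*min), q0 ~ {q0:.2f} mmol/g")
```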
Hinson, Jeremiah S.; Mistry, Binoy; Hsieh, Yu-Hsiang; Risko, Nicholas; Scordino, David; Paziana, Karolina; Peterson, Susan; Omron, Rodney
2017-01-01
Introduction: Our goal was to reduce ordering of coagulation studies in the emergency department (ED) that have no added value for patients presenting with chest pain. We hypothesized this could be achieved via implementation of a stopgap measure in the electronic medical record (EMR). Methods: We used a pre and post quasi-experimental study design to evaluate the impact of an EMR-based intervention on coagulation study ordering for patients with chest pain. A simple interactive prompt was incorporated into the EMR of our ED that required clinicians to indicate whether patients were on anticoagulation therapy prior to completion of orders for coagulation studies. Coagulation order frequency was measured via detailed review of randomly sampled encounters during two-month periods before and after intervention. We classified existing orders as clinically indicated or non-value added. Order frequencies were calculated as percentages, and we assessed differences between groups by chi-square analysis. Results: Pre-intervention, 73.8% (76/103) of patients with chest pain had coagulation studies ordered, of which 67.1% (51/76) were non-value added. Post-intervention, 38.5% (40/104) of patients with chest pain had coagulation studies ordered, of which 60% (24/40) were non-value added. There was an absolute reduction of 35.3% (95% confidence interval [CI]: 22.7%, 48.0%) in the total ordering of coagulation studies and 26.4% (95% CI: 13.8%, 39.0%) in non-value added order placement. Conclusion: Simple EMR-based interactive prompts can serve as effective deterrents to indiscriminate ordering of diagnostic studies. PMID:28210363
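The pre/post comparison above can be reproduced from the quoted counts with a chi-square test and a normal-approximation confidence interval; a sketch, with SciPy assumed available:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table: coagulation studies ordered vs not, before and after the prompt
# (counts taken from the abstract: 76/103 pre, 40/104 post).
table = np.array([[76, 103 - 76],
                  [40, 104 - 40]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")

# Absolute reduction in ordering with a normal-approximation 95% CI.
p1, p2 = 76 / 103, 40 / 104
diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / 103 + p2 * (1 - p2) / 104)
print(f"reduction = {diff:.1%} "
      f"(95% CI {diff - 1.96 * se:.1%} to {diff + 1.96 * se:.1%})")
# Matches the abstract: 35.3% reduction, CI roughly 22.7% to 48.0%.
```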
Chemical and neuropathological analyses of an Alzheimer’s disease patient treated with solanezumab
Roher, Alex E; Maarouf, Chera L; Kokjohn, Tyler A; Belden, Christine; Serrano, Geidy; Sabbagh, Marwan S; Beach, Thomas G
2016-01-01
Introduction: Based on the amyloid cascade hypothesis of Alzheimer’s disease (AD) pathogenesis, a series of clinical trials involving immunotherapies have been undertaken including infusion with the IgG1 monoclonal anti-Aβ antibody solanezumab directed against the middle of the soluble Aβ peptide. In this report, we give an account of the clinical history, psychometric testing, gross and microscopic neuropathology as well as immunochemical quantitation of soluble and insoluble Aβ peptides and other proteins of interest related to AD pathophysiology in a patient treated with solanezumab. Materials and Methods: The solanezumab-treated AD case (SOLA-AD) was compared to non-demented control (NDC, n = 5) and non-immunized AD (NI-AD, n = 5) subjects. Brain sections were stained with H&E, Thioflavine-S, Campbell-Switzer and Gallyas methods. ELISA and Western blots were used for quantification of proteins of interest. Results: The SOLA-AD subject’s neuropathology and biochemistry differed sharply from the NDC and NI-AD groups. The SOLA-AD case had copious numbers of amyloid laden blood vessels in all areas of the cerebral cortex, from leptomeningeal perforating arteries to arteriolar deposits which attained the cerebral amyloid angiopathy (CAA) maximum score of 12. In contrast, the maximum CAA for the NI-AD cases averaged a total of 3.6, while the NDC cases only reached 0.75. The SOLA-AD subject had 4.4-fold more soluble Aβ40 and 5.6-fold more insoluble Aβ40 in the frontal lobe compared to NI-AD cases. In the temporal lobe of the SOLA-AD case, the soluble Aβ40 was 80-fold increased, and the insoluble Aβ40 was 13-fold more abundant compared to the non-immunized AD cases. Both soluble and insoluble Aβ42 levels were not dramatically different between the SOLA-AD and NI-AD cohort. Discussion: Solanezumab immunotherapy provided no apparent relief in the clinical evolution of dementia in this particular AD patient, since there was a continuous cognitive deterioration and full expression of amyloid deposition and neuropathology. PMID:27725918
Lutz, Michael W.; Saul, Robert; Linnertz, Colton; Glenn, Omolara-Chinue; Roses, Allen D.; Chiba-Falek, Ornit
2015-01-01
INTRODUCTION: We recently showed that tagging SNPs across the SNCA locus were significantly associated with increased risk for LB pathology in AD cases. However, the actual genetic variant(s) that underlie the observed associations remain elusive. METHODS: We used a bioinformatics algorithm to catalogue structural variants in a region of SNCA intron 4, followed by phased sequencing. We performed a genetic association analysis in an autopsy series of LBV/AD cases compared with AD-only controls. We investigated the biological functions by expression analysis using temporal cortex samples. RESULTS: We identified four distinct haplotypes within a highly polymorphic, low-complexity CT-rich region. We showed that a specific haplotype conferred risk to develop LBV/AD. We demonstrated that the CT-rich site acts as an enhancer element, where the risk haplotype was significantly associated with elevated levels of SNCA mRNA. DISCUSSION: We have discovered a novel haplotype in a CT-rich region in SNCA that contributes to LB pathology in AD patients, possibly via cis-regulation of the gene expression. PMID:26079410
NASA Astrophysics Data System (ADS)
Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping
2016-10-01
Traditional age estimation methods are based on the same idea of using the real age as the training label. However, these methods ignore that there is a deviation between the real age and the brain age due to accelerated brain aging. This paper considers this deviation and searches for it by maximizing a separability distance value rather than by minimizing the difference between the estimated brain age and the real age. First, the search range of the deviation is set as the deviation candidates according to prior knowledge. Second, support vector regression (SVR) is used as the age estimation model, minimizing the difference between the estimated age and the real age plus the deviation, rather than the real age itself. Third, the fitness function is designed based on the separability distance criterion. Fourth, age estimation is conducted on the validation dataset using the trained model, the estimated ages are passed to the fitness function, and the fitness value of the deviation candidate is obtained. Fifth, the iteration is repeated until all deviation candidates have been evaluated, and the optimal deviation with the maximum fitness value is selected. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that the separability was clearly improved. For normal control-Alzheimer's disease (NC-AD), normal control-mild cognitive impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age is not only more helpful for the classification of AD but also more precisely reflects accelerated brain aging. In conclusion, this paper offers a new method for brain age estimation that can distinguish different states of AD and better reflect the extent of accelerated aging.
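The candidate-search loop can be sketched compactly. This is a minimal illustration assuming scikit-learn, synthetic features, an invented two-group labeling, and a stand-in separability measure (distance between group means over pooled spread); the paper's actual fitness criterion and label-shifting details may differ.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Hypothetical data: X are brain-imaging features, y the real (chronological) ages.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = rng.uniform(55, 85, 200)
labels = rng.integers(0, 2, 200)          # e.g. 0 = NC, 1 = AD (stand-in groups)
X_tr, X_va, y_tr, y_va, g_tr, g_va = train_test_split(X, y, labels, random_state=0)

def separability(est_age, groups):
    # Stand-in fitness: distance between group means of estimated age,
    # scaled by pooled spread (the paper's criterion may differ in detail).
    a, b = est_age[groups == 0], est_age[groups == 1]
    return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-9)

best = None
for dev in np.linspace(-10, 10, 41):      # deviation candidates from prior knowledge
    model = SVR().fit(X_tr, y_tr + dev * g_tr)   # shift patient labels by dev
    fit = separability(model.predict(X_va), g_va)
    if best is None or fit > best[1]:
        best = (dev, fit)
print("optimal deviation:", best[0], "fitness:", round(best[1], 3))
```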
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-24
... Airworthiness Directives; Rolls-Royce plc RB211 Trent 700 and Trent 800 Series Turbofan Engines AGENCY: Federal...-20-11 Rolls-Royce plc: Amendment 39-16446. Docket No. FAA- 2010-0364; Directorate Identifier 2009-NE.... Affected ADs (b) None. Applicability (c) This AD applies to Rolls-Royce plc model (RR) RB211 Trent 768-60...
77 FR 55163 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... directive (AD) for certain Airbus Model A330-200, A330-300, A340-200, and A340- 300 series airplanes; and... Model A330-200, A330- 200 Freighter, A330-300, A340-200, and A340-300 series airplanes; and Model A340... measure, EASA issued AD 2011-0040 to require a one-time [detailed] inspection of the MLG (all types of...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
... Airworthiness Directives; The Boeing Company Model 737-100, -200, -200C, -300, -400, and -500 Series Airplanes..., -200, -200C, -300, -400, and - 500 series airplanes. That AD currently requires a one-time inspection... 16211, March 31, 2006). The existing AD applies to all Model 737-100, -200, -200C, -300, -400, and -500...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... is separated from the engine. This AD was prompted by seven reports of uncontained failures of LPT... engine failure and damage to the airplane. DATES: This AD is effective September 26, 2011. ADDRESSES: You... reports of uncontained failures of LPT rotor stage 3 disks and eight reports of cracked LPT rotor stage 3...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sailer, Anna M., E-mail: anni.sailer@mumc.nl; Haan, Michiel W. de, E-mail: m.de.haan@mumc.nl; Graaf, Rick de, E-mail: r.de.graaf@mumc.nl
Purpose: This study was designed to evaluate the feasibility of endovascular guidance by means of live fluoroscopy fusion with magnetic resonance angiography (MRA) and computed tomography angiography (CTA). Methods: Fusion guidance was evaluated in 20 endovascular peripheral artery interventions in 17 patients. Fifteen patients had received preinterventional diagnostic MRA and two patients had undergone CTA. Time for fluoroscopy with MRA/CTA coregistration was recorded. Feasibility of fusion guidance was evaluated according to the following criteria: for every procedure the executing interventional radiologists recorded whether 3D road-mapping provided added value (yes vs. no) and whether PTA and/or stenting could be performed relying on the fusion road-map without need for diagnostic contrast-enhanced angiogram series (CEAS) (yes vs. no). Precision of the fusion road-map was evaluated by recording maximum differences between the position of the vasculature on the virtual CTA/MRA images and conventional angiography. Results: Average time needed for image coregistration was 5 ± 2 min. Three-dimensional road-map added value was experienced in 15 procedures in 12 patients. In half of the patients (8/17), the intervention was performed relying on the fusion road-map only, without diagnostic CEAS. In two patients, the MRA road-map showed a false-positive lesion. Excluding three patients with inordinate movements, the mean difference in position of the vasculature on angiography and the MRA/CTA road-map was 1.86 ± 0.95 mm, implying that approximately 95% of differences were between 0 and 3.72 mm (mean ± 1.96 standard deviations). Conclusions: Fluoroscopy with MRA/CTA fusion guidance for peripheral artery interventions is feasible. By reducing the number of CEAS, this technology may contribute to enhancing procedural safety.
Tan, Christine L.; Hassali, Mohamed A.; Saleem, Fahad; Shafie, Asrul A.; Aljadhey, Hisham; Gan, Vincent B.
2015-01-01
Objective: (i) To develop the Pharmacy Value-Added Services Questionnaire (PVASQ) using emerging themes generated from interviews. (ii) To establish the reliability and validity of the questionnaire instrument. Methods: Using an extended Theory of Planned Behavior as the theoretical model, face-to-face interviews generated salient beliefs about pharmacy value-added services. The PVASQ was constructed initially in English, incorporating important themes, and later translated into the Malay language with forward and backward translation. Intention (INT) to adopt pharmacy value-added services is predicted by attitudes (ATT), subjective norms (SN), perceived behavioral control (PBC), knowledge and expectations. Using a 7-point Likert-type scale and a dichotomous scale, test-retest reliability (N=25) was assessed by administering the questionnaire instrument twice at an interval of one week. Internal consistency was measured by Cronbach's alpha, and agreement between the two administrations was assessed using the kappa statistic and the intraclass correlation coefficient (ICC). Confirmatory Factor Analysis (CFA, N=410) was conducted to assess the construct validity of the PVASQ. Results: The kappa coefficients indicate a moderate to almost perfect strength of agreement between test and retest. The ICC for all scales tested for intra-rater (test-retest) reliability was good. The overall Cronbach's alpha (N=25) was 0.912 and 0.908 for the two time points. The results of CFA (N=410) showed that most items loaded strongly and correctly onto the corresponding factors. Only one item was eliminated. Conclusions: This study is the first to develop and establish the reliability and validity of the Pharmacy Value-Added Services Questionnaire instrument using the Theory of Planned Behavior as the theoretical model. The translated Malay-language version of the PVASQ is reliable and valid to predict Malaysian patients' intention to adopt pharmacy value-added services to collect partial medicine supply. PMID:26445622
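Cronbach's alpha is computed from the item and total-score variances. A minimal sketch with invented 7-point Likert responses (the PVASQ itself has far more items and respondents):

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses (5 respondents x 4 items).
scores = [[6, 5, 6, 7], [4, 4, 5, 4], [7, 6, 6, 6], [3, 4, 3, 4], [5, 5, 6, 5]]
print(round(cronbach_alpha(scores), 3))
```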
PDF added value of a high resolution climate simulation for precipitation
NASA Astrophysics Data System (ADS)
Soares, Pedro M. M.; Cardoso, Rita M.
2015-04-01
General Circulation Models (GCMs) are suitable for studying the global atmospheric system, its evolution and its response to changes in external forcing, namely increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer-scale features of the atmospheric flow related to complex topography, coastal processes and boundary-layer processes, and higher-resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs); they are widely used to downscale GCM results for many regions of the globe and are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on comparing their results with observations, either from weather station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties, while most of the time assuming their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs when compared with the results of the driving models is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies for different applications and processes to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, showing that RCMs may add value to GCM results, but that improvements depend on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing precipitation PDFs when compared to observations. Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high-resolution simulation with the WRF model. Results come from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used cover rain-gauge precipitation records as well as regular gridded datasets of daily precipitation. Two gridded precipitation datasets are used: a Portuguese precipitation grid at 0.2° × 0.2°, developed from observed rain-gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher-resolution simulation, regarding both the full PDF and the extremes. The method has high potential to be applied to other simulation exercises and to the evaluation of other variables.
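The abstract does not spell out its skill score; a common choice in this literature is the Perkins score, the overlap between two binned PDFs. A minimal sketch under that assumption, with invented gamma-distributed daily precipitation samples:

```python
import numpy as np

def pdf_skill(model, obs, bins=50):
    """Perkins-style skill score: overlap of two empirical PDFs (0..1)."""
    lo = min(model.min(), obs.min())
    hi = max(model.max(), obs.max())
    edges = np.linspace(lo, hi, bins + 1)
    pm, _ = np.histogram(model, bins=edges)
    po, _ = np.histogram(obs, bins=edges)
    pm = pm / pm.sum()
    po = po / po.sum()
    return np.minimum(pm, po).sum()

# Hypothetical daily precipitation samples (mm/day) from a model and observations.
rng = np.random.default_rng(2)
obs = rng.gamma(0.4, 6.0, 10000)
mod = rng.gamma(0.5, 5.0, 10000)
print(round(pdf_skill(mod, obs), 3))
# The flexible tail weighting mentioned above could be added by multiplying
# each bin's overlap by a weight that grows with precipitation intensity.
```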
Workshop: Valuing and Managing Ecosystems: Economic Research Sponsored by NSF/EPA (1998)
Materials from first workshop in series of Environmental Policy and Economics Workshops. Focus on valuing and managing ecosystems, with papers on use of stated preference methods, examining markets for diverse biologic resources and conservation measures.
Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes
2016-12-01
The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically, in 1987, 2002, 2008 and 2012, the time series of emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions seem constant due to the application of periodical average emission factors. This does not reflect inter-annual variability in the time series, which would be expected as the drivers of carbon stock change fluctuate between years. Therefore additional data, available on an annual basis, should be introduced into the calculation of the emission inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than periodical approach to calculating emission factors with the given data, thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally incorporates growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in the German greenhouse gas reporting in order to meet the request for annually adjusted values.
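The abstract names the logging factor method but does not give its formula; one plausible reading is to distribute a periodic stock change over the years of the period in proportion to annual logging volumes. A minimal sketch under that assumption, with entirely invented numbers:

```python
import numpy as np

# Hypothetical reading of the "logging factor" idea: spread a periodic
# biomass stock change across years in proportion to annual harvest,
# instead of applying a constant periodic average.
period_stock_change = -12.0          # total C stock change 2002-2008 (Mt C), invented
logging = np.array([54., 60., 58., 75., 66., 62.])  # annual harvest (Mm^3), invented
annual_factor = logging / logging.sum()
annual_change = period_stock_change * annual_factor
print(np.round(annual_change, 2))    # an annual series replaces six equal values
print(round(annual_change.sum(), 2)) # still sums to the periodic total
```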
Implementing Value-Added Measures of School Effectiveness: Getting the Incentives Right.
ERIC Educational Resources Information Center
Ladd, Helen F.; Walsh, Randall P.
2002-01-01
Evaluates value-added approach to measuring school effectiveness in North and South Carolina. Finds that value-added approach favors high-achievement schools, with large percentage of students from high-SES backgrounds. Discusses statistical problems in measuring value added. Concludes teachers' and administrators' avoidance of low-achievement,…
Svelle, Stian; Tuma, Christian; Rozanska, Xavier; Kerber, Torsten; Sauer, Joachim
2009-01-21
The methylation of ethene, propene, and t-2-butene by methanol over the acidic microporous H-ZSM-5 catalyst has been investigated by a range of computational methods. Density functional theory (DFT) with periodic boundary conditions (PBE functional) fails to describe the experimentally determined decrease of apparent energy barriers with the alkene size due to inadequate description of dispersion forces. Adding a damped dispersion term expressed as a parametrized sum over atom-pair C6 contributions leads to uniformly underestimated barriers due to self-interaction errors. A hybrid MP2:DFT scheme is presented that combines MP2 energy calculations on a series of cluster models of increasing size with periodic DFT calculations, which allows extrapolation to the periodic MP2 limit. Additionally, errors caused by the use of finite basis sets, contributions of higher order correlation effects, zero-point vibrational energy, and thermal contributions to the enthalpy were evaluated and added to the "periodic" MP2 estimate. This multistep approach leads to enthalpy barriers at 623 K of 104, 77, and 48 kJ/mol for ethene, propene, and t-2-butene, respectively, which deviate from the experimentally measured values by 0, +13, and +8 kJ/mol. Hence, enthalpy barriers can be calculated with near chemical accuracy, which constitutes significant progress in the quantum chemical modeling of reactions in heterogeneous catalysis in general and microporous zeolites in particular.
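The hybrid scheme corrects a periodic DFT barrier with the MP2-minus-DFT difference computed on clusters of increasing size and extrapolated to the infinite-cluster limit. A minimal numerical sketch of that extrapolation step, with invented cluster sizes, invented energies, and a simple 1/n fit (the paper's extrapolation form may differ):

```python
import numpy as np

# Hypothetical cluster-size series with invented correction energies.
n = np.array([8, 18, 28, 38])                   # cluster sizes (e.g. T-atoms)
corr = np.array([21.5, 17.9, 16.2, 15.3])       # E_MP2(cluster) - E_DFT(cluster), kJ/mol

# Fit the high-level correction against 1/n and read off the n -> infinity
# limit, then add it to the periodic DFT barrier.
a, b = np.polyfit(1.0 / n, corr, 1)             # corr ~ a/n + b
E_dft_periodic = 90.0                            # invented periodic DFT barrier, kJ/mol
print("extrapolated correction:", round(b, 1), "kJ/mol")
print("hybrid MP2:DFT estimate:", round(E_dft_periodic + b, 1), "kJ/mol")
```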
NASA Astrophysics Data System (ADS)
Roul, Pradip; Warbhe, Ujwal
2017-08-01
The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).
An algorithm of Saxena-Easo on fuzzy time series forecasting
NASA Astrophysics Data System (ADS)
Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.
2018-04-01
This paper presents a Saxena-Easo fuzzy time series forecast model to study the prediction of the Indonesian inflation rate over 1970-2016. We use MATLAB software to implement the method. Unlike conventional forecasting methods, the Saxena-Easo fuzzy time series algorithm does not require stationarity; it is capable of dealing with time series values that are linguistic, and it has the advantage of reducing calculation time and simplifying the calculation process. Generally it focuses on percentage change as the universe of discourse, interval partitioning and defuzzification. The results indicate that the actual data and the forecast data are close, with a Root Mean Square Error (RMSE) of 1.5289.
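As a rough illustration of the shared machinery of fuzzy time-series forecasting (percentage-change universe of discourse, interval partition, defuzzification to interval midpoints), here is a bare-bones Chen-style step in Python; it omits the Saxena-Easo refinements, and the short inflation series is invented.

```python
import numpy as np

def fuzzy_forecast(series, n_intervals=7):
    """Bare-bones fuzzy time-series forecast on percentage changes."""
    pct = 100 * np.diff(series) / series[:-1]          # universe of discourse
    edges = np.linspace(pct.min(), pct.max(), n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    states = np.clip(np.digitize(pct, edges) - 1, 0, n_intervals - 1)
    # Fuzzy logical relationships: state -> midpoints of successor states.
    trans = {}
    for a, b in zip(states[:-1], states[1:]):
        trans.setdefault(a, []).append(mids[b])
    next_pct = np.mean(trans.get(states[-1], [mids[states[-1]]]))
    return series[-1] * (1 + next_pct / 100)

inflation = np.array([12.3, 19.1, 40.6, 17.0, 8.6, 9.8, 11.8, 6.2, 5.8])  # invented
print(round(fuzzy_forecast(inflation), 2))
```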
Application of lean six sigma to waste minimization in cigarette paper industry
NASA Astrophysics Data System (ADS)
Syahputri, K.; Sari, R. M.; Anizar; Tarigan, I. R.; Siregar, I.
2018-02-01
The cigarette paper industry is one of the industries that is always experiencing increasing demand from consumers. Consumer expectations for the products have also increased, in terms of both quality and quantity. The company continuously improves the quality of its products by trying to minimize nonconformity and waste and to improve the efficiency of the company's whole production process. In this cigarette paper plant, the defect rate is above the company's defect tolerance of 10% of the production amount per month. Another problem is that production time is too long due to the many non-value-added activities on the production floor. To overcome these problems, it is necessary to improve the production process of cigarette paper and minimize production time by reducing non-value-added activities. The improvement was carried out with Lean Six Sigma. Lean Six Sigma is a combination of the Lean and Six Sigma concepts with the DMAIC method (Define, Measure, Analyze, Improve, Control). With this Lean approach, a proposed total production time of 1479.13 minutes was obtained, with process cycle efficiency increased by 12.64%.
Akamatsu, Fumikazu; Oe, Takaaki; Hashiguchi, Tomokazu; Hisatsune, Yuri; Kawao, Takafumi; Fujii, Tsutomu
2017-08-01
Japanese apricot liqueur manufacturers are required to control the quality and authenticity of their liqueur products. Citric acid made from corn is the main acidulant used in commercial liqueurs. In this study, we conducted spiking experiments and carbon and hydrogen stable isotope analyses to detect exogenous citric acid used as an acidulant in Japanese apricot liqueurs. Our results showed that the δ13C values detected exogenous citric acid originating from C4 plants but not from C3 plants. The δ2H values of citric acid decreased as the amount of citric acid added increased, whether the citric acid originated from C3 or C4 plants. Commercial liqueurs with declared added acidulant provided higher δ13C values and lower δ2H values than did authentic liqueurs and commercial liqueurs with no declared added acidulant. Carbon and hydrogen stable isotope analyses are suitable as routine methods for detecting exogenous citric acid in Japanese apricot liqueur. Copyright © 2017 Elsevier Ltd. All rights reserved.
State Taxation of Mineral Deposits and Production. Rural Development Research Report No. 2.
ERIC Educational Resources Information Center
Stinson, Thomas F.
Alternative methods for taxing the mineral industry at the State level include four types of taxes: the ad valorem tax, severance tax, gross production tax, and net production tax. An ad valorem tax is a property tax levied on a mineral deposit's assessed value and due whether the deposit is being worked or not. The severance tax is usually an…
Memory persistency and nonlinearity in daily mean dew point across India
NASA Astrophysics Data System (ADS)
Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar
2016-04-01
This work estimates the persistence in memory of the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones of India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and the corresponding results compared to arrive at a more reliable conclusion. The present analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
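Several scaling estimators can produce the Hurst exponent; the rescaled-range (R/S) method is one common choice and serves as a sketch here, run on synthetic white noise for which H ≈ 0.5 (the anti-persistence reported above corresponds to H < 0.5):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range estimate of the Hurst exponent (one of several methods)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes = np.unique(np.logspace(np.log10(min_chunk), np.log10(N // 2), 10).astype(int))
    rs = []
    for n in sizes:
        chunks = x[: (N // n) * n].reshape(-1, n)
        dev = chunks - chunks.mean(axis=1, keepdims=True)
        z = np.cumsum(dev, axis=1)               # cumulative deviation per chunk
        R = z.max(axis=1) - z.min(axis=1)        # range
        S = chunks.std(axis=1, ddof=1)           # standard deviation
        rs.append(np.mean(R[S > 0] / S[S > 0]))
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)  # slope of log R/S vs log n
    return H

rng = np.random.default_rng(3)
print(round(hurst_rs(rng.standard_normal(5000)), 2))  # ~0.5 for white noise
```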
Hirbod, Kimia; Jalili-baleh, Leili; Nadri, Hamid; ebrahimi, Seyed esmaeil Sadat; Moradi, Alireza; Pakseresht, Bahar; Foroumadi, Alireza; Shafiee, Abbas; Khoobi, Mehdi
2017-01-01
Objective(s): To investigate the efficiency of a novel series of coumarin derivatives bearing a benzoheterocycle moiety as novel cholinesterase inhibitors. Materials and Methods: Different 7-hydroxycoumarin derivatives were synthesized via Pechmann or Knoevenagel condensation and conjugated to different benzoheterocycles (8-hydroxyquinoline, 2-mercaptobenzoxazole or 2-mercaptobenzimidazole) using dibromoalkanes to give compounds 3a-m. The final compounds were evaluated against acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE) by Ellman's method. A kinetic study of AChE inhibition and a ligand-protein docking simulation were also carried out for the most potent compound, 3b. Results: Some of the compounds revealed potent and selective activity against AChE. Compound 3b, containing the quinoline group, showed the best activity, with an IC50 value of 8.80 μM against AChE. The kinetic study of AChE inhibition revealed mixed-type inhibition of the enzyme by compound 3b. The ligand-protein docking simulation also showed that the flexibility of the hydrophobic five-carbon linker allows the quinoline ring to form a π-π interaction with Trp279 in the PAS. Conclusion: We suggest these synthesized compounds could become potential leads for AChE inhibition and the prevention of AD symptoms. PMID:28868119
Effective numerical method of spectral analysis of quantum graphs
NASA Astrophysics Data System (ADS)
Barrera-Figueroa, Víctor; Rabinovich, Vladimir S.
2017-05-01
We present in this paper an effective numerical method for the determination of the spectra of periodic metric graphs equipped with Schrödinger operators having real-valued periodic electric potentials as Hamiltonians, and with Kirchhoff and Neumann conditions at the vertices. Our method is based on the spectral parameter power series method, which leads to a series representation of the dispersion equation that is suitable for both analytical and numerical calculations. Several important examples demonstrate the effectiveness of our method for some periodic graphs of interest that possess potentials usually found in quantum mechanics.
Muir-Hunter, Susan W; Graham, Laura; Montero Odasso, Manuel
2015-08-01
To measure the test-retest and interrater reliability of the Berg Balance Scale (BBS) in community-dwelling adults with mild to moderate Alzheimer disease (AD). Method: A sample of 15 adults (mean age 80.20 [SD 5.03] years) with AD performed three balance tests: the BBS, timed up-and-go test (TUG), and Functional Reach Test (FRT). Both relative reliability, using the intra-class correlation coefficient (ICC), and absolute reliability, using standard error of measurement (SEM) and minimal detectable change (MDC95) values, were calculated; Bland-Altman plots were constructed to evaluate inter-tester agreement. The test-retest interval was 1 week. Results: For the BBS, relative reliability values were 0.95 (95% CI, 0.85-0.98) for test-retest reliability and 0.72 (95% CI, 0.31-0.91) for interrater reliability; the SEM was 6.01 points and the MDC95 was 16.66 points; interrater agreement was 16.62 points. The BBS performed better in test-retest reliability than the TUG and FRT, tests with established reliability in AD. Between 33% and 50% of participants required cueing beyond the standardized instructions because they were unable to remember the test instructions. Conclusions: The BBS achieved relative reliability values that support its clinical utility, but the MDC95 and agreement values indicate the scale has performance limitations in AD. Further research to optimize balance assessment for people with AD is required.
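The absolute-reliability quantities follow from two standard formulas, SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch that reproduces the abstract's BBS numbers; the baseline SD of about 26.9 points is back-calculated from the reported SEM and ICC, not reported directly:

```python
import numpy as np

def sem_mdc95(sd, icc):
    sem = sd * np.sqrt(1 - icc)            # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2) * sem        # minimal detectable change, 95%
    return sem, mdc95

# ICC = 0.95 and SEM = 6.01 points imply a baseline SD of
# about 6.01 / sqrt(1 - 0.95) = 26.9 points.
sem, mdc = sem_mdc95(sd=26.88, icc=0.95)
print(round(sem, 2), round(mdc, 2))        # ~6.01 and ~16.66 points
```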
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values, followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated into an integrated dataset, ordered in time, as the research dataset. The proposed time-series forecasting model has three main steps. First, the study uses five imputation methods to estimate missing values rather than deleting them directly. Second, key variables are identified via factor analysis, and unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, for comparison with the listing methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the listing model with full variables. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
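A minimal sketch of the three-step pipeline (impute, select variables, forecast with a Random Forest), assuming scikit-learn and an entirely synthetic stand-in for the reservoir dataset; the correlation threshold below is an invented stand-in for the paper's factor-analysis step.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

# Hypothetical daily dataset: six atmospheric predictors and a water level
# that depends on three of them; ~5% of predictor values are then masked.
rng = np.random.default_rng(4)
X = rng.standard_normal((2000, 6))
y = 0.5 * X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(2000)
X[rng.random(X.shape) < 0.05] = np.nan

# Step 1: impute missing values instead of deleting records (mean imputation
# is one simple choice among the five the paper compares).
X_imp = SimpleImputer(strategy="mean").fit_transform(X)

# Step 2: crude variable selection - keep features correlated with the target.
keep = [j for j in range(X_imp.shape[1])
        if abs(np.corrcoef(X_imp[:, j], y)[0, 1]) > 0.1]

# Step 3: Random Forest forecast, scored on a held-out tail of the series.
split = 1600
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_imp[:split][:, keep], y[:split])
pred = rf.predict(X_imp[split:][:, keep])
print("RMSE:", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 3))
```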
Spectral methods in general relativity and large Randall-Sundrum II black holes
NASA Astrophysics Data System (ADS)
Abdolrahimi, Shohreh; Cattoën, Céline; Page, Don N.; Yaghoobpour-Tari, Shima
2013-06-01
Using a novel numerical spectral method, we have found solutions for large static Randall-Sundrum II (RSII) black holes by perturbing a numerical AdS5-CFT4 solution to the Einstein equation with a negative cosmological constant Λ that is asymptotically conformal to the Schwarzschild metric. We used a numerical spectral method independent of the Ricci-DeTurck-flow method used by Figueras, Lucietti, and Wiseman for a similar numerical solution. We have compared our black-hole solution to the one Figueras and Wiseman have derived by perturbing their numerical AdS5-CFT4 solution, showing that our solution agrees closely with theirs. We have obtained a closed-form approximation to the metric of the black hole on the brane. We have also deduced the new results that to first order in 1/(-ΛM2), the Hawking temperature and entropy of an RSII static black hole have the same values as the Schwarzschild metric with the same mass, but the horizon area is increased by about 4.7/(-Λ).
Estimating the probability for major gene Alzheimer disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrer, L.A.; Cupples, L.A.
1994-02-01
Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age of onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank [r] = .92). When either method is used, the numerical outcomes are sensitive to assumptions about the gene frequency and the cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.
Ambulatory blood pressure and heart rate during shuttle flight, entry and landing
NASA Technical Reports Server (NTRS)
Thornton, W.; Moore, T. P.; Uri, J.
1993-01-01
Ambulatory blood pressures (BP) and heart rates (HR) were recorded on a series of early Shuttle flights during preflight and pre-entry, entry, landing and egress. There were no significant differences between flight and preflight values during routine activity. Systolic blood pressure was slightly elevated in the deorbit period, and systolic and diastolic blood pressure and heart rates were all elevated with the onset of gravitoinertial loads and remained so through egress. Two of seven subjects had orthostatic problems during egress, but their data did not show significant differences from the others except in heart rate. Comparison of these data with those from recent studies shows even larger increases in HR/BP values during current deorbit and entry phases, which is consistent with the increased heat and weight loads imposed by added survival gear. Both the value and the limitations of ambulatory heart rate/blood pressure data in this situation are demonstrated.
Mahurin, Shannon M.; Fulvio, Pasquale F.; Hillesheim, Patrick C.; ...
2014-07-31
Postcombustion CO2 capture has become a key component of greenhouse-gas reduction as anthropogenic emissions continue to impact the environment. In this paper, we report a one-step synthesis of porous carbon materials using a series of task-specific ionic liquids for the adsorption of CO2. By varying the structure of the ionic liquid precursor, we were able to control the pore architecture and surface functional groups of the carbon materials in this one-step synthesis process, leading to adsorbents with high CO2 sorption capacities (up to 4.067 mmol g⁻¹) at 0 °C and 1 bar. Finally, added nitrogen functional groups led to high CO2/N2 adsorption-selectivity values ranging from 20 to 37, while the interaction energy was simultaneously enhanced relative to carbon materials with no added nitrogen.
Unveiling signatures of interdecadal climate changes by Hilbert analysis
NASA Astrophysics Data System (ADS)
Zappalà, Dario; Barreiro, Marcelo; Masoller, Cristina
2017-04-01
A recent study demonstrated that, in a class of networks of oscillators, the optimal network reconstruction from dynamics is obtained when the similarity analysis is performed not on the original dynamical time series, but on transformed series obtained by Hilbert transform. [1] That motivated us to use the Hilbert transform to study another kind of (in a broad sense) "oscillating" series, such as temperature series. Indeed, we found that Hilbert analysis of SAT (Surface Air Temperature) time series uncovers meaningful information about climate and is therefore a promising tool for the study of other climatological variables. [2] In this work we analysed a large dataset of SAT series, performing the Hilbert transform and further analysis with the goal of finding signs of climate change during the analysed period. We used the publicly available ERA-Interim reanalysis dataset. [3] In particular, we worked on daily SAT time series, from 1979 to 2015, at 16380 points arranged over a regular grid on the Earth's surface. From each SAT time series we calculate the anomaly series and also, by using the Hilbert transform, the instantaneous amplitude and instantaneous frequency series. Our first approach is to calculate the relative variation: the difference between the average value over the last 10 years and the average value over the first 10 years, divided by the average value over the whole analysed period. We performed these calculations on the transformed series, frequency and amplitude, for both average and standard deviation values. Furthermore, for comparison with an established analysis method, we performed the same calculations on the anomaly series. We plotted the results as maps, where the colour of each site indicates the value of its relative variation. Finally, to gain insight into the interpretation of our results on real SAT data, we generated synthetic sinusoidal series with various levels of additive noise. By applying Hilbert analysis to the synthetic data, we uncovered a clear trend between mean amplitude and mean frequency: as the noise level grows, the amplitude increases while the frequency decreases. Research funded in part by AGAUR (Generalitat de Catalunya), EU LINC project (Grant No. 289447) and Spanish MINECO (FIS2015-66503-C3-2-P).
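The instantaneous amplitude and frequency used here come straight from the analytic signal. A minimal sketch with SciPy on a synthetic SAT-like series; the series length and the 10-year relative-variation helper mirror the description above, but all data are invented:

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical daily SAT series: a seasonal-like cycle plus noise.
rng = np.random.default_rng(5)
days = np.arange(365 * 37)                       # 1979-2015, roughly
x = 10 * np.sin(2 * np.pi * days / 365.25) + rng.standard_normal(days.size)

analytic = hilbert(x - x.mean())
amplitude = np.abs(analytic)                     # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
frequency = np.diff(phase) / (2 * np.pi)         # instantaneous frequency, cycles/day

# Relative variation: last-decade mean minus first-decade mean,
# divided by the full-period mean.
def rel_var(s, yrs=10):
    n = 365 * yrs
    return (s[-n:].mean() - s[:n].mean()) / s.mean()

print(round(rel_var(amplitude), 4), round(rel_var(frequency), 4))
```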
The Effect of Sub-Aperture in DRIA Framework Applied on Multi-Aspect PolSAR Data
NASA Astrophysics Data System (ADS)
Xue, Feiteng; Yin, Qiang; Lin, Yun; Hong, Wen
2016-08-01
Multi-aspect SAR is a new remote sensing technology that acquires consecutive data over a large span of look angles as the platform moves. Multi-aspect observation brings higher resolution and SNR to SAR imagery. Multi-aspect PolSAR data can increase the accuracy of target identification and classification because they contain the 3-D polarimetric scattering properties. The DRIA (detecting-removing-incoherent-adding) framework is a multi-aspect PolSAR data processing method. In this method, the anisotropic and isotropic scattering is separated by a maximum-likelihood ratio test. The anisotropic scattering is removed to obtain a removal series, and the isotropic scattering is incoherently added to obtain a high-resolution image. The removal series describes the anisotropic scattering properties and is used in feature extraction and classification. This article focuses on the effect of the number of sub-apertures on anisotropic scattering detection and removal. The more sub-apertures there are, the smaller the look angle spanned by each. Artificial targets exhibit anisotropic scattering because of Bragg resonances. Increasing the number of sub-apertures brings more accurate observation in azimuth, though the quality of each single image may degrade. The accuracy of classification in agricultural fields is affected by the anisotropic scattering brought about by Bragg resonances, and the size of the sub-aperture has a significant effect on the removal of Bragg resonances.
Evaluating Teachers: The Important Role of Value-Added
ERIC Educational Resources Information Center
Glazerman, Steven; Loeb, Susanna; Goldhaber, Dan; Staiger, Douglas; Raudenbush, Stephen; Whitehurst, Grover
2010-01-01
The evaluation of teachers based on the contribution they make to the learning of their students, "value-added", is an increasingly popular but controversial education reform policy. In this report, the authors highlight and try to clarify four areas of confusion about value-added. The first is between value-added information and the…
Selecting Value-Added Models for Postsecondary Institutional Assessment
ERIC Educational Resources Information Center
Steedle, Jeffrey T.
2012-01-01
Value-added scores from tests of college learning indicate how score gains compare to those expected from students of similar entering academic ability. Unfortunately, the choice of value-added model can impact results, and this makes it difficult to determine which results to trust. The research presented here demonstrates how value-added models…
Value Added in English Schools
ERIC Educational Resources Information Center
Ray, Andrew; McCormack, Tanya; Evans, Helen
2009-01-01
Value-added indicators are now a central part of school accountability in England, and value-added information is routinely used in school improvement at both the national and the local levels. This article describes the value-added models that are being used in the academic year 2007-8 by schools, parents, school inspectors, and other…
48 CFR 252.229-7006 - Value added tax exclusion (United Kingdom).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Value added tax exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value added tax exclusion (United Kingdom). As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (JUN 1997...
48 CFR 252.229-7006 - Value added tax exclusion (United Kingdom).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Value added tax exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value added tax exclusion (United Kingdom). As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (JUN 1997...
2 CFR 200.470 - Taxes (including Value Added Tax).
Code of Federal Regulations, 2014 CFR
2014-01-01
... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Taxes (including Value Added Tax). 200.470... Cost § 200.470 Taxes (including Value Added Tax). (a) For states, local governments and Indian tribes... Federal government for the taxes, interest, and penalties. (c) Value Added Tax (VAT) Foreign taxes charged...
7 CFR 766.202 - Determining the shared appreciation due.
Code of Federal Regulations, 2012 CFR
2012-01-01
... resulting from capital improvements added during the term of the SAA (contributory value). The market value... contributory value of capital improvements added during the term of the SAA will be deducted from the market... value added to the real property by the new or expanded portion of the original residence (if it added...
48 CFR 252.229-7006 - Value Added Tax Exclusion (United Kingdom)
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Value Added Tax Exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value Added Tax Exclusion (United Kingdom) As prescribed in 229.402-70(f), use the follow clause: Value Added Tax Exclusion (United Kingdom) (DEC 2011) The...
48 CFR 252.229-7006 - Value Added Tax Exclusion (United Kingdom)
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Value Added Tax Exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value Added Tax Exclusion (United Kingdom) As prescribed in 229.402-70(f), use the follow clause: Value Added Tax Exclusion (United Kingdom) (DEC 2011) The...
48 CFR 252.229-7006 - Value Added Tax Exclusion (United Kingdom)
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Value Added Tax Exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value Added Tax Exclusion (United Kingdom) As prescribed in 229.402-70(f), use the follow clause: Value Added Tax Exclusion (United Kingdom) (DEC 2011) The...
The Learning Outcomes Race: The Value of Self-Reported Gains in Large Research Universities
ERIC Educational Resources Information Center
Douglass, John Aubrey; Thomson, Gregg; Zhao, Chun-Mei
2012-01-01
Throughout the world, measuring "learning outcomes" is viewed by many stakeholders as a relatively new method to judge the "value added" of colleges and universities. The potential to accurately measure learning gains is also a diagnostic tool for institutional self-improvement. This essay discussed the marketisation of…
NASA Astrophysics Data System (ADS)
Dobrovolný, P.; Brázdil, R.; Moberg, A.; Wilson, R.
2009-09-01
Various types of documentary evidence from Germany, Switzerland and the Czech Republic have been used to create temperature indices for the period 1500-1854. Homogenized temperature series from 11 Central European stations covering the period 1760-2007 served as target values to reconstruct monthly, seasonal and annual temperatures in Central Europe since AD 1500. Spatial coherency of the compiled Central European Temperature (CEuT) series is presented. The CEuT series is further used to define extremely cold/warm months and seasons and the spatial and temporal distribution of such extremes are presented in context of existing knowledge of climate variability within Europe. The CEuT extremes are compared to corresponding documentary based chronologies from other European countries or regions as well as reconstructions from other proxies (e.g. tree rings). The most pronounced cold/warm seasons are analyzed with respect to potential causes and also with respect to recent warming trends. We discuss the potential of documentary evidence to study weather and climate extremes and show that such data provide valuable information for studying past human response to climatic extremes.
Properties of Asymmetric Detrended Fluctuation Analysis in the time series of RR intervals
NASA Astrophysics Data System (ADS)
Piskorski, J.; Kosmider, M.; Mieszkowski, D.; Krauze, T.; Wykretowicz, A.; Guzik, P.
2018-02-01
Heart rate asymmetry is a phenomenon by which the accelerations and decelerations of heart rate behave differently, and this difference is consistent and unidirectional, i.e. in most of the analyzed recordings the inequalities have the same direction. So far, it has been established for variance- and runs-based descriptors of RR interval time series. In this paper we apply the newly developed method of Asymmetric Detrended Fluctuation Analysis, which so far has mainly been used with economic time series, to a set of 420 stationary 30 min time series of RR intervals from young, healthy individuals aged between 20 and 40. This asymmetric approach introduces separate scaling exponents for rising and falling trends. We systematically study the presence of asymmetry in both the global and local versions of this method. In this study, global means "applying to the whole time series" and local means "applying to windows jumping along the recording". It is found that the correlation structure of the fluctuations left over after detrending in physiological time series shows strong asymmetric features both in magnitude, with α+ < α−, where α+ relates to heart rate decelerations and α− to heart rate accelerations, and in the proportion of the signal in which the above inequality holds. A very similar effect is observed if asymmetric noise is added to a symmetric self-affine function. No such phenomena are observed in the same physiological data after shuffling, or in a group of symmetric synthetic time series.
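A minimal sketch of the asymmetric idea, assuming the common convention of signing each window by the local linear trend of the raw series; the RR series here is synthetic, and the published method differs in detrending details.

```python
import numpy as np

def admfa(x, scales, trend_sign=+1):
    """Sketch of asymmetric DFA: scaling exponent using only windows whose
    local linear trend of the original series has the requested sign."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated profile
    used, F = [], []
    for n in scales:
        fl = []
        for k in range(len(x) // n):
            seg = slice(k * n, (k + 1) * n)
            t = np.arange(n)
            slope = np.polyfit(t, x[seg], 1)[0]  # trend of the raw series
            if np.sign(slope) != trend_sign:
                continue
            res = y[seg] - np.polyval(np.polyfit(t, y[seg], 1), t)
            fl.append(np.mean(res ** 2))
        if fl:
            used.append(n)
            F.append(np.sqrt(np.mean(fl)))
    return np.polyfit(np.log(used), np.log(F), 1)[0]

rng = np.random.default_rng(6)
rr = 800 + 2 * np.cumsum(rng.standard_normal(3000)) + rng.standard_normal(3000)
scales = [16, 32, 64, 128, 256]
print("alpha+ :", round(admfa(rr, scales, +1), 2),
      " alpha- :", round(admfa(rr, scales, -1), 2))
```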
Geometric Methods for ATR: Shape Spaces, Metrics, Object/Image Relations, and Shapelets
2007-09-30
our techniques as a tool for adding depth information to existing video content. In addition, we learned that researchers at the University of... [The remainder of this excerpt is a garbled incidence-relation formula and citation fragment referring to Theorem I, §5, Chapter VII of W.V.D. Hodge and D. Pedoe, Methods of Algebraic Geometry, nos. 1, 2, and 3, Mathematical Library Series, Cambridge; and Springer-Verlag, 1992.]
Daily rainfall forecasting for one year in a single run using Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Unnikrishnan, Poornima; Jothiprakash, V.
2018-06-01
Effective modelling and prediction of rainfall at smaller time steps is reported to be very difficult owing to its highly erratic nature. Accurate forecasts of daily rainfall over longer durations (multiple time steps) may be exceptionally helpful in the efficient planning and management of water resources systems. Identification of the inherent patterns in a rainfall time series is also important for an effective water resources planning and management system. In the present study, Singular Spectrum Analysis (SSA) is used to forecast the daily rainfall time series of the Koyna watershed in Maharashtra, India, for 365 days, after extracting the various components of the rainfall time series such as trend, periodic component, noise and cyclic component. In order to forecast the time series over a longer horizon (365 days, one window length), the signal and noise components of the time series are forecasted separately and then added together. The results of the study show that the method of SSA could extract the various components of the time series effectively and could also forecast the daily rainfall time series for a duration as long as one year in a single run with reasonable accuracy.
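A basic SSA decompose-and-forecast cycle fits in a short function: embed the series in a trajectory matrix, truncate its SVD, diagonally average back, and forecast with the linear recurrent formula. This is a textbook variant, not the authors' exact pipeline, and the rainfall series below is synthetic.

```python
import numpy as np

def ssa_forecast(x, L, r, steps):
    """Basic SSA: embed, SVD, reconstruct r leading components, then apply
    the linear recurrent formula to forecast (a textbook variant)."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    rec = np.zeros(N); cnt = np.zeros(N)                  # diagonal averaging
    for j in range(K):
        rec[j:j + L] += Xr[:, j]; cnt[j:j + L] += 1
    rec /= cnt
    P = U[:L - 1, :r]; pi = U[L - 1, :r]                  # recurrence coefficients
    R = P @ pi / (1 - pi @ pi)
    out = list(rec)
    for _ in range(steps):
        out.append(R @ np.array(out[-(L - 1):]))
    return np.array(out[N:])

rng = np.random.default_rng(7)
t = np.arange(1500)
rain = np.clip(5 + 4*np.sin(2*np.pi*t/365) + 2*rng.standard_normal(1500), 0, None)
print(ssa_forecast(rain, L=365, r=3, steps=365)[:5].round(2))
```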
The Recalibrated Sunspot Number: Impact on Solar Cycle Predictions
NASA Astrophysics Data System (ADS)
Clette, F.; Lefevre, L.
2017-12-01
Recently, and for the first time since their creation, the sunspot number and group number series were entirely revisited, and a first fully recalibrated version was officially released in July 2015 by the World Data Center SILSO (Brussels). Those reference long-term series are widely used as input data or as a calibration reference by various solar cycle prediction methods. Therefore, past predictions may now need to be redone using the new sunspot series, and methods already used for predicting cycle 24 will require adaptations before attempting predictions of the next cycles. In order to clarify the nature of the applied changes, we describe the different corrections applied to the sunspot and group number series, which affect extended time periods and can reach up to 40%. While some changes simply involve constant scale factors, other corrections vary with time or follow the solar cycle modulation. Depending on the prediction method and on the selected time interval, this can lead to different responses and biases. Moreover, together with the new series, standard error estimates are also progressively being added to the new sunspot numbers, which may help derive more accurate uncertainties for predicted activity indices. We conclude with the new round of recalibration now being undertaken in the framework of a broad multi-team collaboration articulated around upcoming ISSI workshops. We outline the corrections that can still be expected as part of a permanent upgrading and quality-control process. From now on, sunspot-based predictive models should thus be made more adaptable, and regular updates of predictions should become common practice in order to track periodic upgrades of the sunspot number series, just as is done with other modern solar observational series.
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
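LoLinR itself is an R package; purely to illustrate the underlying idea, the sketch below performs a kernel-weighted local linear fit in Python and reads the local slope off as the rate. The tricube kernel, the window half-width h, and the synthetic O2 trace are assumptions of the example, not part of the package.

```python
import numpy as np

def local_linear_slope(t, y, t0, h):
    """Tricube-weighted linear fit around t0; returns the local rate dy/dt."""
    w = np.clip(1 - (np.abs(t - t0) / h) ** 3, 0, None) ** 3   # tricube weights
    W = np.diag(w)
    A = np.column_stack([np.ones_like(t), t])
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)           # weighted LS
    return beta[1]

# Hypothetical O2 trace from a respirometry trial: a nonlinear early phase,
# then an approximately linear decline whose slope is the metabolic rate.
rng = np.random.default_rng(8)
t = np.linspace(0, 60, 300)
y = 95 - 0.4 * t + 8 * np.exp(-t / 5) + 0.5 * rng.standard_normal(300)
print(round(local_linear_slope(t, y, t0=40, h=15), 3))  # ~ -0.4 units/min
```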
Taking a Concept to Commercialization: Designing Relevant Tests to Address Safety.
Ferrara, Lisa A
2016-04-01
Taking a product from concept to commercialization requires careful navigation of the regulatory pathway through a series of steps: (A) moving the idea through proof of concept and beyond; (B) evaluating new technologies that may provide added value to the idea; (C) designing appropriate test strategies and protocols; and (D) evaluating and mitigating risks. Moving an idea from the napkin stage of development to the final product requires a team effort. When finished, the product rarely resembles the original design, but careful steps throughout the product life cycle ensure that the product meets the vision.
Huang, Ling; Su, Tao; Shan, Wenjun; Luo, Zonghua; Sun, Yang; He, Feng; Li, Xingshu
2012-05-01
A series of berberine-phenyl-benzoheterocyclic (26-29) and tacrine-phenyl-benzoheterocyclic hybrids (44-46) were synthesised and evaluated as multifunctional anti-Alzheimer's disease agents. Compound 44b, tacrine linked with phenyl-benzothiazole by a 3-carbon spacer, was the most potent AChE inhibitor, with an IC50 value of 0.017 μM. This compound demonstrated Aβ aggregation inhibitory activity similar to that of curcumin (51.8% vs 52.1% at 20 μM, respectively), indicating that this hybrid is an excellent multifunctional drug candidate for AD. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the annual maximum and partial duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia over 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the maximum likelihood and L-moment methods. Two goodness-of-fit tests are then used to select the best-fitted distribution. The results showed that the partial duration series with the Generalized Pareto distribution and maximum likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
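A maximum-likelihood GEV fit to annual maxima, followed by return levels read off the fitted quantiles, can be sketched with SciPy (the L-moment alternative requires a separate package); the 31-year annual-maximum series below is synthetic:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual-maximum daily rainfall series (mm) for one station.
rng = np.random.default_rng(9)
am = genextreme.rvs(c=-0.1, loc=80, scale=25, size=31, random_state=rng)

# Maximum-likelihood GEV fit, then T-year return levels from the quantiles.
c, loc, scale = genextreme.fit(am)
for T in (10, 50, 100):
    print(T, "yr:", round(genextreme.ppf(1 - 1 / T, c, loc, scale), 1), "mm")
```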
A method to quantify the "cone of economy".
Haddas, Ram; Lieberman, Isador H
2018-05-01
A non-randomized, prospective, concurrent control cohort study. The purpose of this study is to develop and evaluate a method to quantify the dimensions of the cone of economy (COE), and the energy expenditure associated with maintaining a balanced posture within the COE, in a group of adult degenerative scoliosis patients compared with matched non-scoliotic controls. Balance is defined as the ability of the human body to maintain its center of mass (COM) within the base of support with minimal postural sway. The cone of economy refers to the stable region of upright standing posture. The underlying assumption is that deviating outside one's individual cone challenges the balance mechanisms. Adult degenerative scoliosis (ADS) patients exhibit a variety of postural changes within their COE, involving the spine, pelvis and lower extremities, in their effort to compensate for the altered posture. Ten ADS patients and ten non-scoliotic volunteers performed a series of functional balance tests. The dimensions of the COE and the energy expenditure related to maintaining balance within the COE were measured using a human motion video capture system and dynamic surface electromyography. ADS patients presented more COM sway in the sagittal (ADS: 1.59 cm vs. H: 0.61 cm; p = 0.049) and coronal (ADS: 2.84 cm vs. H: 1.72 cm; p = 0.046) directions in comparison with the non-scoliotic controls. ADS patients presented with more COM (ADS: 33.30 cm vs. H: 19.13 cm; p = 0.039) and head (ADS: 31.06 cm vs. H: 19.13 cm; p = 0.013) displacement in comparison with the non-scoliotic controls. Scoliosis patients expended more muscle activity to maintain static standing, as manifested by increased muscle activity in their erector spinae (ADS: 37.16 mV vs. H: 20.31 mV; p = 0.050) and gluteus maximus (ADS: 33.12 mV vs. H: 12.09 mV; p = 0.001) muscles. We were able to develop and evaluate a method that quantifies the COE boundaries, COM displacement, and amount of sway within the COE, along with the energy expenditure, for a specific patient. This method of COE measurement will enable spine care practitioners to objectively evaluate their patients in an effort to determine the most appropriate treatment options, and to objectively document the effectiveness of their intervention.
77 FR 10950 - Airworthiness Directives; General Electric Company (GE) Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-24
...) for all GE CF6-80C2B series turbofan engines. That AD currently requires installing software version 8.... This new AD requires the removal of the affected ECUs from service. This AD was prompted by two reports... ECUs from service. Comments We gave the public the opportunity to participate in developing this AD...
Sol-gel method for encapsulating molecules
Brinker, C. Jeffrey; Ashley, Carol S.; Bhatia, Rimple; Singh, Anup K.
2002-01-01
A method for encapsulating organic molecules, and in particular biomolecules, using sol-gel chemistry. A silica sol is prepared from an aqueous alkali metal silicate solution, such as a mixture of silicon dioxide and sodium or potassium oxide in water. The pH is adjusted to a suitably low value to stabilize the sol by minimizing the rate of siloxane condensation, thereby allowing storage stability of the sol prior to gelation. The organic molecules, generally in solution, are then added and become encapsulated in the sol matrix. After aging, either a thin film can be prepared or a gel can be formed with the encapsulated molecules. Depending upon the acid used, pH, and other processing conditions, the gelation time can be from one minute up to several days. In the method of the present invention, no alcohols are generated as by-products during the sol-gel and encapsulation steps. The organic molecules can be added at any desired pH value, where the pH value is generally chosen to achieve the desired reactivity of the organic molecules. The method of the present invention thereby presents a sufficiently mild encapsulation method to retain a significant portion of the activity of the biomolecules, compared with the activity of the biomolecules in free solution.
Nicoletti, Gabrieli; Cipolatti, Eliane P; Valério, Alexsandra; Carbonera, Natália Thaisa Gamba; Soares, Nicole Spillere; Theilacker, Eron; Ninow, Jorge L; de Oliveira, Débora
2015-09-01
With the aim of finding the best method for the interaction of polyurethane (PU) foam and Candida antarctica lipase B, different methods of CalB immobilization were studied: adsorption (PU-ADS), bonding with polyethyleneimine (PU-PEI), ionic adsorption by PEI with cross-linking with glutaraldehyde (PU-PEI-GA), and entrapment (PU). The characterization of the immobilized enzyme derivatives was performed by apparent density and Fourier transform infrared spectroscopy. The free enzyme and enzyme preparations were evaluated at different pH values and temperatures. The highest enzyme activity was obtained using the PU method (5.52 U/g). The methods that stood out in comparisons of stability and kinetic parameters were PU and PU-ADS. Conversions of 83.5 and 95.9 % were obtained for the PU and PU-ADS derivatives, respectively, in 24 h of reaction, using citronella oil and propionic acid as substrates.
METHOD OF SEPARATING TETRAVALENT PLUTONIUM VALUES FROM CERIUM SUB-GROUP RARE EARTH VALUES
Duffield, R.B.; Stoughton, R.W.
1959-02-01
A method is presented for separating plutonium from the cerium sub-group of rare earths when both are present in an aqueous solution. The method consists in adding an excess of alkali metal carbonate to the solution, which causes the formation of a soluble plutonium carbonate complex while at the same time forming an insoluble cerium-group rare earth carbonate. The pH value must be adjusted to between 5.5 and 7.5, and prior to the precipitation step the plutonium must be reduced to the tetravalent state, since only tetravalent plutonium will form the soluble carbonate complex.
Roskovensky, John K [Albuquerque, NM
2009-01-20
A method of detecting clouds in a digital image comprising, for an area of the digital image: determining a reflectance value in at least three discrete electromagnetic spectrum bands; computing a first ratio of the difference of two reflectance values to their sum; computing a second ratio of one reflectance value to another; choosing one of the reflectance values; and concluding that an opaque cloud exists in the area if the results of the two computing steps and the choosing step fall within three corresponding predetermined ranges.
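The claim's decision logic is simple per-area arithmetic; a hedged sketch follows, in which the band roles, thresholds and ranges are invented placeholders, not the patent's values:

```python
def opaque_cloud(r_a, r_b, r_c,
                 ratio1_range=(0.0, 0.2),     # placeholder ranges, not
                 ratio2_range=(0.9, 1.3),     # the patent's values
                 refl_range=(0.4, 1.0)):
    """Three-test cloud decision for one image area, following the claim's
    structure: normalized difference, simple ratio, single-band check."""
    ratio1 = (r_a - r_b) / (r_a + r_b)        # difference over sum
    ratio2 = r_a / r_c                        # one value over another
    return (ratio1_range[0] <= ratio1 <= ratio1_range[1]
            and ratio2_range[0] <= ratio2 <= ratio2_range[1]
            and refl_range[0] <= r_b <= refl_range[1])

print(opaque_cloud(0.62, 0.55, 0.58))  # hypothetical reflectances -> True
```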
Circuits and methods for impedance determination using active measurement cancelation
Jamison, David K.
2016-12-13
A delta signal and opposite delta signal are generated such that a sum of the two signals is substantially zero. The delta signal is applied across a first set of electrochemical cells. The opposite delta signal is applied across a second set of electrochemical cells series connected to the first set. A first held voltage is established as the voltage across the first set. A second held voltage is established as the voltage across the second set. A first delta signal is added to the first held voltage and applied to the first set. A second delta signal is added to the second held voltage and applied to the second set. The current responses due to the added delta voltages travel only into the set associated with its delta voltage. The delta voltages and the current responses are used to calculate the impedances of their associated cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kincs, J.; Cho, J.; Bloyer, D.
1994-09-01
The T{sub g}'s and heat capacity functions have been measured for a series of Na{sub 2}S + B{sub 2}S{sub 3} glasses for the first time. Unlike the alkali borates, T{sub g} decreases rapidly as Na{sub 2}S is added to B{sub 2}S{sub 3}. This effect, even in the presence of a rapidly increasing fraction of tetrahedrally coordinated borons, has been associated with the "over-crosslinking" effect of the sulfide ion. Unlike the borate glasses, where each added oxygen produces two tetrahedral borons, the conversion rate for the thioborates is between four and six. This behavior is suggested to result in the formation of local tightly-bonded molecular-like structures that exhibit less long-range network bonding than the alkali borate glasses. As a result, T{sub g} decreases with added alkali in alkali thioborates rather than increasing as in the alkali borate glasses. The change in heat capacity at T{sub g}, {Delta}C{sub p}(T{sub g}), has been carefully measured and is found to also decrease dramatically as alkali sulfide is added to the glass. Again this effect is opposite to the trends observed for the alkali borate glasses. The decreasing {Delta}C{sub p}(T{sub g}) occurs even in the presence of a decreasing T{sub g}. The authors have tentatively attributed the diminishing {Delta}C{sub p}(T{sub g}) values to the decreasing density of configurational states above T{sub g}. This is attributed to the high coordination number and site specificity caused by the added alkali sulfide. The glassy state heat capacities were analyzed and found to reach {approximately}90% of the classical limiting Dulong-Petit value just below T{sub g} for all glasses. This was used to suggest that the diminishing {Delta}C{sub p}(T{sub g}) values are associated with a unique tendency of the system to become a liquid with very little change in the density of configurational states.
NASA Astrophysics Data System (ADS)
Kalugin, I.; Darin, A.; Ovchinnikov, D.; Myglan, V.; Babich, V.
2010-09-01
Our knowledge of climate change and the associated forcing during the last thousand years remains limited because it cannot be studied thoroughly with instrumental data alone. It is therefore an important task to find high-resolution paleoclimate records and to compare them with recent patterns of short-period oscillations. The combination of lake sediments and tree rings appears to be effective for understanding regional climate forcings. There are several dendrochronologies (up to 1700 years long) and comparable annual reconstructions from Teletskoye Lake sediments (1500 years) in the Altai region. They are calibrated against data from 14 local weather stations (time series up to 80 years) and the Barnaul station (170 years) as well. We used tree-ring series together with element contents in sediments as an additional proxy for the calculation of the transfer function, considering that tree-ring series respond to summer temperature in this climatic zone. This combined version provides one more independent environmental indicator for objective reconstructions. Element content in sediments is measured by X-ray fluorescence on synchrotron radiation microanalysis with a scanning step down to 0.1 mm. The age model is based on three firm dates: AD 1963 by 137Cs, and AD 897 and 1540 by radiocarbon. Time series of both annual temperature and precipitation from AD 450 to 2000 are obtained from Teletskoye Lake sediments by multiple regression and artificial neural network methods, using a transfer function trained on meteorological data. The revealed climatic proxies (Br, U and Ti content, Sr/Rb ratio and X-ray density) appear to be fundamental for silt-clay organic-bearing sediments, because the same correlation is found in standard samples from European as well as Siberian, Chinese and Mongolian cores. The characteristic periods for the northern hemisphere, such as the medieval warming and the Little Ice Age known in Europe and in other Asian areas (China), are revealed in the Siberian region. The spectral analysis of the temperature and humidity time series revealed subdecadal to multidecadal periods of harmonious fluctuations over both the instrumental (170 years) and reconstructed (1500 years) time intervals. Some of the cycles coincide within both datasets as well as with the global cyclicity of atmospheric circulation.
2014-01-01
Background Qualitative research is undertaken with randomized controlled trials of health interventions. Our aim was to explore the perceptions of researchers with experience of this endeavour to understand the added value of qualitative research to the trial in practice. Methods A telephone semi-structured interview study with 18 researchers with experience of undertaking the trial and/or the qualitative research. Results Interviewees described the added value of qualitative research for the trial, explaining how it solved problems at the pretrial stage, explained findings, and helped to increase the utility of the evidence generated by the trial. From the interviews, we identified three models of relationship of the qualitative research to the trial. In ‘the peripheral’ model, the trial was an opportunity to undertake qualitative research, with no intention that it would add value to the trial. In ‘the add-on’ model, the qualitative researcher understood the potential value of the qualitative research but it was viewed as a separate and complementary endeavour by the trial lead investigator and wider team. Interviewees described how this could limit the value of the qualitative research to the trial. Finally ‘the integral’ model played out in two ways. In ‘integral-in-theory’ studies, the lead investigator viewed the qualitative research as essential to the trial. However, in practice the qualitative research was under-resourced relative to the trial, potentially limiting its ability to add value to the trial. In ‘integral-in-practice’ studies, interviewees described how the qualitative research was planned from the beginning of the study, senior qualitative expertise was on the team from beginning to end, and staff and time were dedicated to the qualitative research. In these studies interviewees described the qualitative research adding value to the trial although this value was not necessarily visible beyond the original research team due to the challenges of publishing this research. Conclusions Health researchers combining qualitative research and trials viewed this practice as strengthening evaluative research. Teams viewing the qualitative research as essential to the trial, and resourcing it in practice, may have a better chance of delivering its added value to the trial. PMID:24913438
Dynamic Restarting Schemes for Eigenvalue Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Simon, Horst D.
1999-03-10
In studies of the restarted Davidson method, a dynamic thick-restart scheme was found to be excellent in improving the overall effectiveness of the eigenvalue method. This paper extends the study of the dynamic thick-restart scheme to the Lanczos method for symmetric eigenvalue problems and systematically explores a range of heuristics and strategies. We conduct a series of numerical tests to determine their relative strengths and weaknesses on a class of electronic structure calculation problems.
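Thick-restart Lanczos itself is specialized, but the flavor of the trade-off can be reproduced with SciPy's implicitly restarted Lanczos solver, whose ncv argument caps the subspace kept between restarts. A small sketch under that substitution, with a toy matrix standing in for an electronic-structure problem:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Illustrative symmetric test matrix: 1-D Laplacian (a stand-in for an
# electronic-structure Hamiltonian, which we do not have here).
n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))

# Compare restart subspace sizes: larger ncv means fewer, costlier restarts.
for ncv in (12, 24, 48):
    vals = eigsh(A, k=5, which='SA', ncv=ncv, return_eigenvectors=False)
    print(ncv, np.sort(vals))
```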
Curve Number Application in Continuous Runoff Models: An Exercise in Futility?
NASA Astrophysics Data System (ADS)
Lamont, S. J.; Eli, R. N.
2006-12-01
The suitability of applying the NRCS (Natural Resources Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed, using a simple theoretical watershed, in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and the limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to the HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to the CN values determined using the synthetic storm event time series. The calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
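For context, back-calculating a CN from a rainfall-runoff pair follows directly from the NRCS runoff equation Q = (P - 0.2S)^2 / (P + 0.8S) with Ia = 0.2S. A minimal sketch in inches; the sample storm is illustrative, not from the study:

```python
import math

def curve_number(p_in, q_in):
    """Back-calculate the NRCS Curve Number from storm rainfall P and
    direct runoff Q (both in inches), assuming Ia = 0.2*S."""
    if q_in <= 0 or q_in >= p_in:
        raise ValueError("require 0 < Q < P")
    # Solve Q = (P - 0.2S)^2 / (P + 0.8S) for the potential retention S.
    s = 5.0 * (p_in + 2.0 * q_in - math.sqrt(4.0 * q_in**2 + 5.0 * p_in * q_in))
    return 1000.0 / (10.0 + s)

# Hypothetical 4-inch storm producing 1.2 inches of direct runoff:
print(curve_number(4.0, 1.2))   # ~68
```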
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments with different magnitudes of imposed noise and different observation densities indicates that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method proves the best for selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
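Generically, a smoothness-constrained inversion solves min ||Gm - d||^2 + lambda^2 ||Rm||^2, which is an augmented least-squares problem. The sketch below uses a 1-D second-difference Laplacian as R; all matrices are toys, not the authors' fault model or their adaptive R:

```python
import numpy as np

def laplacian(n):
    """1-D second-difference smoothing matrix (a stand-in for R)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def regularized_inverse(G, d, R, lam):
    """Solve min ||G m - d||^2 + lam^2 ||R m||^2 via an augmented system."""
    A = np.vstack([G, lam * R])
    b = np.concatenate([d, np.zeros(R.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

rng = np.random.default_rng(1)
n = 50
G = rng.normal(size=(80, n))                 # toy Green's function matrix
m_true = np.sin(np.linspace(0, np.pi, n))    # smooth "slip" profile
d = G @ m_true + rng.normal(scale=0.5, size=80)
m_hat = regularized_inverse(G, d, laplacian(n), lam=2.0)
```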
Petti, S; Renzini, G
1994-03-01
The percentage of anaerobic micro-organisms in the subgingival microflora is a simple microbiological index that reflects not only the state but also the risks of periodontal health. The present study aimed to compare two different methods of calculating this index. The study was performed in 45 subjects with moderate gingivitis provoked by the previous application of dental fixtures anchored to both arches. A sample of subgingival microflora was collected from each patient at the level of the vestibular gingival sulcus of the first upper right molar. This was then vortexed, diluted and inoculated in three series of plates, using Walker's culture medium. The total bacterial count was evaluated by incubating the first series of plates in anaerobiosis; the anaerobic count was calculated by subtracting from the total the count of facultative aerobic-anaerobic micro-organisms, which in turn was obtained using two methods: the first (method AE) consisted of incubating another series of plates in aerobiosis; the second (method M) involved incubating the last series of plates in anaerobiosis, with metronidazole added to the culture medium at 2.5 mg/l. The plates were then kept at 37 degrees C for seven days. The mean percentage of anaerobic micro-organisms, given by the percentage ratio between the anaerobic and total counts for the 45 cases studied, was 57.8 +/- 26.3% using method AE and 40.2 +/- 27.2% using method M. Both figures come close to that proposed and calculated using a much more sophisticated method by Slots, namely 41.5 +/- 19.2% in the event of gingivitis. (ABSTRACT TRUNCATED AT 250 WORDS)
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case for time series with unit roots in their autoregressive structure. Additionally, those methods, based on the Fourier transform, either need the first and last values of the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable, and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined in order to demonstrate the advantages of the method. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
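For reference, the classical iterative amplitude adjusted Fourier transform that the proposed technique plugs into can be written in a few lines: it alternates between imposing the original power spectrum and the original amplitude distribution. This baseline, sketched under the usual stationarity assumptions, is what the paper's trend-preserving and end-matching modifications improve upon:

```python
import numpy as np

def iaaft(x, n_iter=200, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate of x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    amplitudes = np.sort(x)                  # target amplitude distribution
    spectrum = np.abs(np.fft.rfft(x))        # target power spectrum
    s = rng.permutation(x)                   # random starting surrogate
    for _ in range(n_iter):
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(spectrum * np.exp(1j * phases), n=len(x))
        ranks = np.argsort(np.argsort(s))    # rank-order remapping
        s = amplitudes[ranks]
    return s

rng = np.random.default_rng(1)
x = np.convolve(rng.normal(size=512), np.ones(5) / 5, mode='same')  # stationary toy
surrogate = iaaft(x)
```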
Taniguchi, Hironori; Okano, Kenji; Honda, Kohsuke
2017-06-01
Bio-based chemical production has drawn attention in the context of realizing a sustainable society. In vitro metabolic engineering is one of the methods used for the bio-based production of value-added chemicals. This method involves the reconstitution of natural or artificial metabolic pathways by assembling purified or semi-purified enzymes in vitro. Enzymes from distinct sources can be combined to construct desired reaction cascades with fewer biological constraints in one vessel, enabling easier pathway design with high modularity. Multiple modules have been designed, built, tested, and improved by different groups for different purposes. In this review, we focus on these in vitro metabolic engineering modules, especially those for carbon metabolism, and present an overview of input modules, output modules, and other modules related to cofactor management.
NASA Astrophysics Data System (ADS)
Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker
2018-04-01
A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful, and we interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability, as it is distribution-free, robust to spatial dependence, and accounts for time series structure.
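Controlling the proportion of falsely rejected local tests is commonly done with a Benjamini-Hochberg false discovery rate step; a minimal sketch of that generic ingredient follows, with uniform placeholder p-values (the paper's block-resampling local tests are not reproduced here):

```python
import numpy as np

def fdr_reject(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of locally significant grid cells."""
    p = np.asarray(pvals).ravel()
    order = np.argsort(p)
    n = p.size
    thresh = q * np.arange(1, n + 1) / n         # BH step-up thresholds
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(n, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(np.shape(pvals))

pvals = np.random.default_rng(0).uniform(size=(40, 50))   # placeholder grid
print(fdr_reject(pvals).sum(), "locally significant cells")
```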
Luo, Zonghua; Liang, Liang; Sheng, Jianfei; Pang, Yanqing; Li, Jianheng; Huang, Ling; Li, Xingshu
2014-02-15
A series of ebselen derivatives were designed, synthesised and evaluated as inhibitors of cholinesterases (ChEs) and as glutathione peroxidase (GPx) mimics. Most of the compounds were found to be potent against AChE and BuChE; compounds 5e and 5i proved to be the most potent against AChE, with IC₅₀ values of 0.76 and 0.46 μM, respectively. Among these hybrids, most of the compounds were found to be good GPx mimics compared with ebselen. The selected compounds 5e and 5i were also used to determine the catalytic parameters and in vitro hydrogen peroxide scavenging activity. The results indicate that compounds 5e and 5i may be excellent multifunctional agents for the treatment of AD. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hori, Koji; Konishi, Kimiko; Hachisu, Mitsugu
2011-06-01
We reviewed the importance of measuring serum anticholinergic activity (SAA) in patients with Alzheimer's disease (AD). Since Tune and Coyle reported a simple method for assessing SAA using a radioreceptor-binding assay, SAA has been assumed to reflect the cumulative activity of parent medications and their metabolites, and its relationship with delirium and cognitive function has been debated. When we evaluated SAA in AD patients, it correlated with the prescription of antipsychotic medications, cognitive dysfunction, severity of AD, and psychotic symptoms, especially delusion and diurnal rhythm disturbance. From these results, we should not only pay attention to avoiding the prescription of medications with anticholinergic activity; we also speculate that anticholinergic activity arises endogenously in AD and accelerates AD pathology. Moreover, SAA may have predictive value for assessing the progression of AD and may serve as a biological marker for AD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaustad, KL; Turner, DD
2009-05-30
This report provides a short description of the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) microwave radiometer (MWR) RETrieval (MWRRET) value-added product (VAP) algorithm. This algorithm utilizes a complementary physical retrieval method and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaustad, KL; Turner, DD; McFarlane, SA
2011-07-25
This report provides a short description of the Atmospheric Radiation Measurement (ARM) Climate Research Facility microwave radiometer (MWR) Retrieval (MWRRET) value-added product (VAP) algorithm. This algorithm utilizes a complementary physical retrieval method and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
Age and neurodegeneration imaging biomarkers in persons with Alzheimer disease dementia
Jack, Clifford R.; Wiste, Heather J.; Weigand, Stephen D.; Vemuri, Prashanthi; Lowe, Val J.; Kantarci, Kejal; Gunter, Jeffrey L.; Senjem, Matthew L.; Mielke, Michelle M.; Machulda, Mary M.; Roberts, Rosebud O.; Boeve, Bradley F.; Jones, David T.; Petersen, Ronald C.
2016-01-01
Objective: To examine neurodegenerative imaging biomarkers in Alzheimer disease (AD) dementia from middle to old age. Methods: Persons with AD dementia and elevated brain β-amyloid with Pittsburgh compound B (PiB)-PET imaging underwent [18F]-fluorodeoxyglucose (FDG)-PET and structural MRI. We evaluated 3 AD-related neurodegeneration biomarkers: hippocampal volume adjusted for total intracranial volume (HVa), FDG standardized uptake value ratio (SUVR) in regions of interest linked to AD, and cortical thickness in AD-related regions of interest. We examined associations of each biomarker with age and evaluated age effects on cutpoints defined by the 90th percentile in AD dementia. We assembled an age-, sex-, and intracranial volume-matched group of 194 similarly imaged clinically normal (CN) persons. Results: The 97 participants with AD dementia (aged 49–93 years) had PiB SUVR ≥1.8. A nonlinear (inverted-U) relationship between FDG SUVR and age was seen in the AD group but an inverse linear relationship with age was seen in the CN group. Cortical thickness had an inverse linear relationship with age in AD but a nonlinear (flat, then inverse linear) relationship in the CN group. HVa showed an inverse linear relationship with age in both AD and CN groups. Age effects on 90th percentile cutpoints were small for FDG SUVR and cortical thickness, but larger for HVa. Conclusions: In persons with AD dementia with elevated PiB SUVR, values of each neurodegeneration biomarker were associated with age. Cortical thickness had the smallest differences in 90th percentile cutpoints from middle to old age, and HVa the largest differences. PMID:27421543
Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory.
Yang, Haimin; Pan, Zhisong; Tao, Qing
2017-01-01
Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, the data often contains many outliers with the increasing length of time series in real world. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient method algorithm for training deep neural networks. In our algorithm, the large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM.
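The core idea of RoAdam can be sketched generically: track a smoothed ratio of successive prediction losses and divide the Adam step by it, so a spike in relative error (a suspected outlier) shrinks the update. The following is a loose single-parameter illustration under our own simplifications, not the authors' exact update rule:

```python
import numpy as np

def roadam_like_step(grad, state, loss, base_lr=0.01,
                     betas=(0.9, 0.999, 0.999), eps=1e-8):
    """One Adam-style step with an outlier-aware learning rate.
    state carries (m, v, d, prev_loss, t)."""
    m, v, d, prev_loss, t = state
    t += 1
    m = betas[0] * m + (1 - betas[0]) * grad
    v = betas[1] * v + (1 - betas[1]) * grad**2
    # Smoothed relative prediction error; large jumps -> smaller steps.
    rel = loss / max(prev_loss, 1e-12)
    d = betas[2] * d + (1 - betas[2]) * float(np.clip(rel, 0.1, 10.0))
    m_hat = m / (1 - betas[0]**t)
    v_hat = v / (1 - betas[1]**t)
    update = (base_lr / d) * m_hat / (np.sqrt(v_hat) + eps)
    return update, (m, v, d, loss, t)

state = (0.0, 0.0, 1.0, 1.0, 0)            # m, v, d, prev_loss, t
step, state = roadam_like_step(grad=0.3, state=state, loss=0.9)
```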
Alteration of Box-Jenkins methodology by implementing genetic algorithm method
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad
2015-02-01
A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct order at the model identification stage and in finding the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic method for solving the identification and estimation problems in Box-Jenkins modelling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the single traditional Box-Jenkins model.
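The proposal can be illustrated as a small genetic search over ARIMA orders scored by AIC; the population size, mutation scheme and AIC fitness below are our assumptions for the sketch, not the paper's exact configuration:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fitness(y, order):
    """AIC of a fitted ARIMA(p, d, q); infinite if the fit fails."""
    try:
        return ARIMA(y, order=order).fit().aic
    except Exception:
        return np.inf

def ga_arima(y, pop=12, gens=8, seed=0):
    """Tiny GA over (p, d, q): keep the best half, mutate one gene each."""
    rng = np.random.default_rng(seed)
    genes = [tuple(int(v) for v in rng.integers([0, 0, 0], [4, 3, 4]))
             for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(genes, key=lambda g: fitness(y, g))[:pop // 2]
        children = []
        for g in parents:
            child = list(g)
            i = int(rng.integers(3))
            child[i] = int(np.clip(child[i] + rng.choice([-1, 1]), 0, 3))
            children.append(tuple(child))
        genes = parents + children
    return min(genes, key=lambda g: fitness(y, g))
```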
Harish, Kikkeri P; Mohana, Kikkeri N; Mallesha, Lingappa; Veeresh, Bantal
2014-04-01
A series of new 2-methyl-2-[3-(5-piperazin-1-yl-[1,3,4]oxadiazol-2-yl)-phenyl]-propionitrile derivatives 8a-o, 9a-c, 10a-d, and 11a-d were synthesized to meet the structural requirements essential for anticonvulsant activity. The structures of all the synthesized compounds were confirmed by (1)H NMR, (13)C NMR, and mass spectral studies, and the purity of the novel compounds was confirmed by elemental analyses. All the compounds were screened for anticonvulsant activity with the maximal electroshock (MES) seizure method, and their neurotoxic effects were determined by the rotorod test. Compounds 8d, 8e, and 8f were found to be the most potent of this series, and showed no neurotoxicity at the maximum dose administered (100 mg/kg). Efforts were also made to establish structure-activity relationships among the synthesized compounds. A pharmacophore model was used to validate the anticonvulsant activity of the synthesized molecules. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Measuring information interactions on the ordinal pattern of stock time series
NASA Astrophysics Data System (ADS)
Zhao, Xiaojun; Shang, Pengjian; Wang, Jing
2013-02-01
The interactions among time series as individual components of complex systems can be quantified by measuring to what extent they exchange information with each other. In many applications, one focuses not on the original series but on its ordinal pattern. In such cases, trivial noise is more likely to be filtered out and the abrupt influence of extreme values can be weakened. Cross-sample entropy and inner composition alignment have been introduced as prominent methods to estimate the information interactions of complex systems. In this paper, we modify both methods to detect the interactions among the ordinal patterns of stock return and volatility series, and we try to uncover the information exchanges across sectors in Chinese stock markets.
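As background for the ordinal-pattern step, each length-m window is replaced by the permutation that sorts it, which filters small noise and caps the influence of extreme values. A minimal encoding sketch, where m = 3 and the toy prices are illustrative:

```python
import numpy as np
from itertools import permutations

def ordinal_symbols(x, m=3):
    """Encode each length-m window of x by its sorting permutation."""
    lookup = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([lookup[tuple(np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])

returns = np.diff(np.log([10.0, 10.2, 10.1, 10.5, 10.4, 10.8]))  # toy prices
print(ordinal_symbols(returns))
```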
Code of Federal Regulations, 2014 CFR
2014-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2010 CFR
2010-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2013 CFR
2013-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2012 CFR
2012-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2011 CFR
2011-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
ERIC Educational Resources Information Center
Ferrão, Maria Eugénia; Couto, Alcino Pinto
2014-01-01
This article focuses on the use of a value-added approach for promoting school improvement. It presents yearly value-added estimates, analyses their stability over time, and discusses the contribution of this methodological approach for promoting school improvement programmes in the Portuguese system of evaluation. The value-added model is applied…
How One School Implements and Experiences Ohio's Value-Added Model: A Case Study
ERIC Educational Resources Information Center
Quattrochi, David
2009-01-01
Ohio made value-added law in 2003 and incorporated value-added assessment to its operating standards for teachers and administrators in 2006. Value-added data is used to determine if students are making a year's growth at the end of each school year. Schools and districts receive a rating of "Below Growth, Met Growth, or Above Growth" on…
Estimating trends in atmospheric water vapor and temperature time series over Germany
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Balidakis, Kyriakos; Dick, Galina; Heise, Stefan; Wickert, Jens
2017-08-01
Ground-based GNSS (Global Navigation Satellite System) has been used efficiently since the 1990s as a meteorological observing system. Recently, scientists have used GNSS time series of precipitable water vapor (PWV) for climate research. In this work, we compare the temporal trends estimated from GNSS time series with those estimated from European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) data and meteorological measurements. We aim to evaluate climate evolution in Germany by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: (1) estimated from ground-based GNSS observations using the method of precise point positioning, (2) inferred from ERA-Interim reanalysis data, and (3) determined based on daily in situ measurements of temperature and relative humidity. The other relevant atmospheric parameters are available from surface measurements of meteorological stations or derived from ERA-Interim. The trends are estimated using two methods: the first applies least squares to the deseasonalized time series and the second uses the Theil-Sen estimator. The trends estimated at 113 GNSS sites, with 10 to 19 years of temporal coverage, vary between -1.5 and 2.3 mm decade-1 with standard deviations below 0.25 mm decade-1. These results were validated by estimating the trends from ERA-Interim data over the same time windows, which show similar values. The values of the trend depend on the length and the variations of the time series. Therefore, to give a mean value of the PWV trend over Germany, we estimated the trends using ERA-Interim data spanning from 1991 to 2016 (26 years) at 227 synoptic stations over Germany. The ERA-Interim data show positive PWV trends of 0.33 ± 0.06 mm decade-1 with standard errors below 0.03 mm decade-1. The increase in PWV varies between 4.5 and 6.5 % per degree Celsius rise in temperature, which is comparable to the theoretical rate from the Clausius-Clapeyron equation.
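Both trend estimators mentioned are readily available; a brief sketch contrasting least squares on a deseasonalized series with the Theil-Sen estimator, using a synthetic record standing in for a PWV time series:

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(2)
t = np.arange(0, 15, 1 / 12.0)                       # 15 years, monthly
pwv = 15 + 0.05 * t + 3 * np.sin(2 * np.pi * t) + rng.normal(0, 1, t.size)

# Deseasonalize by removing the monthly climatology, then fit trends.
monthly = pwv.reshape(-1, 12).mean(axis=0)
anom = pwv - np.tile(monthly, 15)

ls_slope = np.polyfit(t, anom, 1)[0]                 # least squares
ts_slope, _, lo, hi = theilslopes(anom, t)           # Theil-Sen + 95% CI
print(f"LS {10 * ls_slope:.2f} vs Theil-Sen {10 * ts_slope:.2f} "
      f"(95% CI {10 * lo:.2f} to {10 * hi:.2f}) mm/decade")
```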
Quantification of Labile Soil Mercury by Stable Isotope Dilution Techniques
NASA Astrophysics Data System (ADS)
Shetaya, Waleed; Huang, Jen-How; Osterwalder, Stefan; Alewell, Christine
2016-04-01
Mercury (Hg) is a toxic element that can cause severe health problems in humans. Mercury is emitted to the atmosphere from both natural and anthropogenic sources and can be transported over long distances before it is deposited to aquatic and terrestrial environments. Aside from accumulation in soil solid phases, Hg deposited in soils may migrate to surface- and ground-water or enter the food chain, depending on its lability. Many operationally defined extraction methods have been proposed to quantify labile metals in soil. However, these methods are by definition prone to inaccuracies such as non-selectivity and underestimation or overestimation of the labile metal pool. The isotopic dilution technique (ID) is currently the most promising method for discriminating between labile and non-labile metal fractions in soil with minimum disturbance to the soil solid phases. ID assesses the reactive metal pool in soil by quantifying the fraction of metal, in both the solid and solution phases, that is isotopically exchangeable, known as the 'E-value'. The E-value represents the metal fraction in dynamic equilibrium with the solution phase that is potentially accessible to plants. It is determined by adding an enriched metal isotope to soil suspensions and quantifying the fraction of metal that is able to freely exchange with the added isotope, by measuring the equilibrium isotopic ratio by ICP-MS. The E-value (mg kg-1) is then calculated as follows: E = (M_soil / W) x (C_spike V_spike / M_spike) x (IA1_spike - IA2_spike R_ss) / (IA2_soil R_ss - IA1_soil), where M is the average atomic mass of the metal in the soil or the spike, W is the mass of soil (kg), C_spike is the concentration of the metal in the spike (mg L-1), V_spike is the volume of the spike (L), IA1 and IA2 are the isotopic abundances of the two measured isotopes, and R_ss is the equilibrium ratio of isotopic abundances (Iso1:Iso2). Isotopic dilution has been successfully applied to determine E-values for several elements. However, to our knowledge, this method has not yet been applied to estimate the labile pool of mercury in contaminated soils. We performed a series of soil incubations spiked with 196Hg2+, aiming at measuring and modelling the progressive assimilation of Hg ions into less labile forms. Soils with a wide range of characteristics were selected for this purpose, with Hg concentrations ranging from 0.1 to 390 mg kg-1, pH between 3.5 and 7.5, and total organic carbon (TOC) between 2.5 and 8 %. In parallel, the labile pool of Hg estimated using ID will be compared with that determined using conventional extraction methods, e.g. sequential extraction procedures. Altogether, this allows us to answer (1) how the E-value of Hg in soils compares to estimates based on selective extraction methods, (2) how the labile Hg correlates with the total soil Hg, soil pH and TOC, and (3) how the solubility of added Hg (e.g. via rainfall) decreases in soils of different properties during aging. The results obtained fill a knowledge gap concerning Hg biogeochemistry in the terrestrial environment and serve as a basis for estimating (and predicting) the risk of Hg diffusion from a point source to the adjacent environments.
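Once the equilibrium ratio R_ss has been measured, the E-value is plain arithmetic; a small sketch of the equation above, in which all numeric inputs are invented placeholders rather than data from this study:

```python
def e_value(m_soil, w_kg, c_spike, v_spike, m_spike,
            ia1_spike, ia2_spike, ia1_soil, ia2_soil, r_ss):
    """Isotopically exchangeable pool (mg/kg) from an ID experiment.
    ia* are isotopic abundances of isotopes 1 and 2 in spike and soil;
    r_ss is the measured equilibrium abundance ratio Iso1:Iso2."""
    return ((m_soil / w_kg)
            * (c_spike * v_spike / m_spike)
            * (ia1_spike - ia2_spike * r_ss)
            / (ia2_soil * r_ss - ia1_soil))

# Hypothetical numbers purely to show the call signature:
print(e_value(m_soil=200.59, w_kg=0.002, c_spike=0.5, v_spike=0.001,
              m_spike=196.0, ia1_spike=0.90, ia2_spike=0.01,
              ia1_soil=0.0015, ia2_soil=0.298, r_ss=0.05))
```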
Design and Development of Wireless Power Transmission for Unmanned Air Vehicles
2012-09-01
Master of Science in Electronic Warfare Systems Engineering and Master of Science in Electrical Engineering thesis, Naval Postgraduate School, September 2012. ... investigated by a series of simulations using Agilent Advanced Design System (ADS). Tuning elements were added and adjusted in order to optimize the efficiency. A maximum efficiency of 57% was ...
Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach
NASA Astrophysics Data System (ADS)
Thomas, C.; Lark, R. M.
2013-12-01
Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine the sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test the sensitivity of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, with the mean a function of a fixed effect, plus two random components: one independently and identically distributed (iid), the second temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, with the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect: in one, the correlation decays exponentially with time; in the second (spherical) model, it cuts off at a temporal range. Having fitted the model, multiple realisations were generated: the random effects were simulated by specifying a covariance matrix for the simulated values, with the estimated parameters. The Cholesky factorisation of the covariance matrix was computed, and realisations of the random component of the model were generated by pre-multiplying a vector of iid standard Gaussian variables by the lower triangular factor. The resulting random variate was added to the mean value computed from the fixed effects, and the result back-transformed to the original scale of the measurement. Realistic simulations result from the approach described above. Background exploratory data analysis was undertaken on 20-day sets of 30-minute buoy data, selected from days 5-24 of January, April, July and October 2011, to elucidate daily to weekly variations and to keep the numerical analysis computationally tractable. Work remains to be undertaken to develop suitable models for synthetic directional data. We suggest that the general principles of the method will have applications in other geomorphological modelling endeavours requiring time series of stochastically variable environmental parameters.
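The simulation recipe in the final sentences maps directly onto a few lines of linear algebra: form the covariance implied by the fitted parameters, take its Cholesky factor, pre-multiply iid standard Gaussians, and add the periodic mean. A schematic sketch with invented parameter values and a log transform standing in for the fitted Box-Cox step:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 365                                    # one year of daily values
t = np.arange(n, dtype=float)

# Fixed effect: periodic mean (period fixed, here one year of days).
mu = 0.2 + 0.15 * np.cos(2 * np.pi * t / 365.0)

# Random effects: exponentially correlated + iid ("nugget") components.
sig2_corr, sig2_iid, phi = 0.08, 0.02, 10.0
C = sig2_corr * np.exp(-np.abs(t[:, None] - t[None, :]) / phi)
C += sig2_iid * np.eye(n)

L = np.linalg.cholesky(C)                  # lower-triangular factor
z = mu + L @ rng.standard_normal(n)        # one realisation, transformed scale
hsig = np.exp(z)                           # back-transform (log stands in
                                           # for the fitted Box-Cox)
```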
Rapatsa, J; Terapongtanakorn, S
2010-03-15
The experiment was carried out at the Faculty of Agriculture, Ubon Ratchathani University, from November 2006 to July 2007. A Completely Randomized Design (CRD) with four replications was used. Six treatments were allocated to two experimental fields: field A, soil amended with animal manures, and field B, soil amended with chemical fertilizers. Both fields have been used for chili cultivation for more than 5 years and belong to the Warin soil series (Oxic Paleustults). The results showed that mean values of soil pH and organic matter % were much higher in field A than in field B, but mean values of nitrogen % and phosphorus were much higher in field B than in field A. Exchangeable potassium was inadequately available in all treatments. All treatments of field B gave excessive amounts of available phosphorus, at a toxic level. T3 of field A gave higher plant height, total dry weight per plant, and pod fresh and dry weights per plant than T5 of field B. Overall, in terms of growth and yields of chili plants, field A had clear advantages over field B. The CO2 uptake and CO2 in leaves were higher for field A than field B. The polyamines putrescine (Put), spermidine (Spd) and spermine (Spm) of T2 were affected by stress conditions due to previously applied chemical fertilisers. Available phosphorus mean values in most treatments were excessive. The effects of the added organic manure and chemical fertilizers (T3 up to T6) on the amounts of polyamines in chili leaves were not clear.
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence intervals (CI) for the time series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify whether the CI of two bivariate time series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard treadmill and a non-motorised curved treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence ellipses at each time point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals, at the same time or at a ±1-frame time lag calculated using cross-correlations, indicated where the two time series differed. CI2 showed no group differences between left and right legs on both treadmills, but comparing the same legs between treadmills showed less knee extension on the curved treadmill before heel-strike for all participants. To improve and standardise the use of CI2 it is recommended to remove outlier time series, use 95% confidence ellipses, and scale the ellipse by the fixed chi-square value as opposed to the sample-size-dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time series. Copyright © 2016 Elsevier B.V. All rights reserved.
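The first step of CI2, a 95% confidence ellipse at each time point scaled by the fixed chi-square value rather than the F-value, looks like this in outline (the paper provides Matlab code; this Python sketch with synthetic knee-ankle angles is only illustrative):

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(xy, p=0.95, n_pts=60):
    """95% confidence ellipse of an (n_trials, 2) sample at one time point,
    scaled by the fixed chi-square quantile rather than the F-value."""
    mean = xy.mean(axis=0)
    cov = np.cov(xy, rowvar=False)
    scale = chi2.ppf(p, df=2)                       # ~5.99 for p = 0.95
    L = np.linalg.cholesky(scale * cov)
    ang = np.linspace(0, 2 * np.pi, n_pts)
    circle = np.vstack([np.cos(ang), np.sin(ang)])
    return mean[:, None] + L @ circle               # (2, n_pts) boundary

rng = np.random.default_rng(3)
knee_ankle = rng.multivariate_normal([40.0, 10.0],          # toy angles (deg)
                                     [[4.0, 1.2], [1.2, 2.0]], size=12)
boundary = confidence_ellipse(knee_ankle)
```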
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, A
Purpose: Novel linac machines, the TrueBeam (TB) and Elekta Versa, have updated head designs and software control systems, and include flattening-filter-free (FFF) photon and electron beams. FFF beams were later also introduced on C-Series machines. In this work, FFF beams of the same energy (6 MV) but from different machine versions were studied with reference to beam data parameters. Methods: The 6MV-FFF percent depth doses, profile symmetry and flatness, dose rate tables, and multi-leaf collimator (MLC) transmission factors were measured during the commissioning process of both C-Series and TrueBeam machines. The scanning and dosimetric data for the 6MV-FFF beam from the TrueBeam and C-Series linacs were compared. A correlation of the 6MV-FFF beam from the Elekta Versa with that of the Varian linacs was also sought. Results: The scanning files were plotted for both qualitative and quantitative analysis. The dosimetric leaf gap (DLG) for the C-Series 6MV-FFF beam is 1.1 mm; the published value for the TrueBeam DLG is 1.16 mm. The 6MV MLC transmission factor varies between 1.3% and 1.4% in two separate measurements, and measured DLG values vary between 1.32 mm and 1.33 mm on the C-Series machine. The MLC transmission factor from the C-Series machine varies between 1.5% and 1.6%. Some of the measured data values from the C-Series FFF beam are compared with representative TrueBeam data. 6MV-FFF beam parameter values such as dmax, output factors, beam symmetry and flatness, and additional parameters for the C-Series and TrueBeam linacs will be presented and compared in graphical and tabular form if selected. Conclusion: The 6MV flattening-filter (FF) beam data from the C-Series and TrueBeam, and the 6MV-FFF beam data from the TrueBeam, have already been presented. This analysis comparing the 6MV-FFF beam from the C-Series and TrueBeam provides an opportunity to better characterize the FFF mode on the newer machines. The C-Series and TrueBeam 6MV-FFF dosimetric and beam data were found to be quite similar.
Homogenisation of minimum and maximum air temperature in northern Portugal
NASA Astrophysics Data System (ADS)
Freitas, L.; Pereira, M. G.; Caramelo, L.; Mendes, L.; Amorim, L.; Nunes, L.
2012-04-01
Homogenization of minimum and maximum air temperature has been carried out for northern Portugal for the period 1941-2010. The database consists of monthly arithmetic averages calculated from daily values observed at stations within the network managed by the national Institute of Meteorology (IM). Some of the weather stations of the IM network have been collecting data for more than a century; however, over the entire observing period several factors have affected the climate series and have to be considered, such as changes in the station surroundings and changes related to the replacement of manually operated instruments. Besides these typical changes, of particular interest are the relocation of stations to rural areas or to the urban-rural interface, and the installation of automatic weather stations in the vicinity of the principal or synoptic stations with the aim of replacing them. The information from these relocated and new stations was merged to produce a single representative time series for each site. This process started at the end of the 1990s, and the documentation of the time-series fusion process constitutes the set of metadata used. Two basic procedures were performed: (i) preliminary statistical and quality-control analysis; and (ii) detection and correction of homogeneity problems. In the first case, software for quality control was developed and used, specifically dedicated to the detection of outliers based on the quartile values of the time series itself. The analysis of homogeneity was performed using MASH (Multiple Analysis of Series for Homogenisation) and HOMER, a software application developed and recently made available within the COST Action ES0601 (COST-ES0601, 2012). Both methods provide fast quality control of the original data and were developed for automatic processing, analysis, homogeneity testing and adjustment of climatological data, but manual usage is also possible. Results obtained with both methods will be presented, compared and discussed, along with the results of the sensitivity tests performed with each method. COST-ES0601, 2012: "ACTION COST-ES0601 - Advances in homogenisation methods of climate series: an integrated approach HOME". Available at http://www.homogenisation.org/v_02_15/ [accessed 3 January 2012].
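The quartile-based outlier screen described for step (i) amounts to flagging values outside Tukey-type fences computed from the series itself; a minimal sketch, where the fence multiplier k = 3 is our assumption for monthly climate series:

```python
import numpy as np

def quartile_outliers(series, k=3.0):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] of the series itself."""
    q1, q3 = np.nanpercentile(series, [25, 75])
    iqr = q3 - q1
    return (series < q1 - k * iqr) | (series > q3 + k * iqr)

tmax = np.array([12.1, 13.4, 11.8, 29.9, 12.6, 13.0])  # toy monthly means
print(quartile_outliers(tmax))   # flags the implausible 29.9
```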
Improved method for extracting lanthanides and actinides from acid solutions
Horwitz, E.P.; Kalina, D.G.; Kaplan, L.; Mason, G.W.
1983-07-26
A process for the recovery of actinide and lanthanide values from aqueous acidic solutions uses a new series of neutral bi-functional extractants, the alkyl(phenyl)-N,N-dialkylcarbamoylmethylphosphine oxides. The process is suitable for the separation of actinide and lanthanide values from fission product values found together in high-level nuclear reprocessing waste solutions.
Sample entropy applied to the analysis of synthetic time series and tachograms
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.
2017-01-01
Entropy is a method of non-linear analysis that provides an estimate of the irregularity of a system. However, there are different types of computational entropy, which were considered and tested in order to obtain one that gives an index of signal complexity while taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given value of the spectral exponent β was used to characterize the different entropy algorithms. Most of the algorithms varied significantly with series size, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence of series size. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is asleep.
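For reference, sample entropy itself takes only a few lines: count template matches of length m and m + 1 within a Chebyshev tolerance r, and take the negative logarithm of their ratio. A minimal sketch with the conventional m = 2 and r = 0.2·SD (the toy RR intervals are synthetic):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) with Chebyshev distance; r is a fraction of the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rr = np.random.default_rng(4).normal(0.8, 0.05, 500)  # toy RR intervals (s)
print(sample_entropy(rr))
```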
Total recovery of the waste of two-phase olive oil processing: isolation of added-value compounds.
Fernández-Bolaños, Juan; Rodríguez, Guillermo; Gómez, Esther; Guillén, Rafael; Jiménez, Ana; Heredia, Antonia; Rodríguez, Rocío
2004-09-22
A process for adding value to the solid waste from two-phase olive oil extraction, or "alperujo", that includes a hydrothermal treatment has been suggested. In this treatment an autohydrolysis process occurs and the solid olive byproduct is partially solubilized. Besides the antioxidant hydroxytyrosol, several other compounds of high added value can be obtained from this water-soluble fraction. In this paper, three different samples of alperujo were characterized and subjected to a hydrothermal treatment with and without an acid catalyst. The main soluble compounds after the hydrolysis were the monosaccharides xylose, arabinose, and glucose; oligosaccharides; mannitol; and products of sugar degradation. Oligosaccharides were separated by size exclusion chromatography. It was possible to obtain highly purified mannitol by applying a simple purification method.
An improvement of the measurement of time series irreversibility with visibility graph approach
NASA Astrophysics Data System (ADS)
Wu, Zhenyu; Shang, Pengjian; Xiong, Hui
2018-07-01
We propose a method to improve the measurement of irreversibility in real-valued time series, combining two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with different embedding dimensions, as used in calculating permutation entropy (PE). With this improved method, we can not only estimate time-series irreversibility efficiently but also detect it across multiple dimensions. We verify the validity of the method and then estimate the amount of time irreversibility of series generated by chaotic maps, as well as of global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedding dimension d = 3, both in the experiments and in the financial-market data.
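A minimal Python sketch of the base measure (without the paper's embedding-dimension refinement): build the directed horizontal visibility graph and take the Kullback-Leibler divergence between the out- and in-degree distributions. Restricting the divergence to the common degree support is a simplification for illustration:

import numpy as np
from collections import Counter

def dhvg_degrees(x):
    """In/out degrees of the directed horizontal visibility graph.

    Node i sees a later node j if every value strictly between them is
    lower than min(x[i], x[j]); links are directed forward in time.
    """
    n = len(x)
    k_out, k_in = np.zeros(n, dtype=int), np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf
        for j in range(i + 1, n):
            if x[j] > top:            # nothing in between blocks the view
                k_out[i] += 1
                k_in[j] += 1
            top = max(top, x[j])
            if top >= x[i]:           # x[i] cannot see past this point
                break
    return k_in, k_out

def kld_irreversibility(x):
    """KL divergence D(P_out || P_in), restricted to the common support."""
    k_in, k_out = dhvg_degrees(x)
    c_in, c_out = Counter(k_in), Counter(k_out)
    ks = sorted(set(c_in) & set(c_out))
    p = np.array([c_out[k] for k in ks], float); p /= p.sum()
    q = np.array([c_in[k] for k in ks], float); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

x = np.random.default_rng(1).random(2000)   # reversible noise -> approx. 0
print(kld_irreversibility(x))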
ERIC Educational Resources Information Center
Koedel, Cory; Betts, Julian
2009-01-01
Value-added measures of teacher quality may be sensitive to the quantitative properties of the student tests upon which they are based. This paper focuses on the sensitivity of value-added to test-score-ceiling effects. Test-score ceilings are increasingly common in testing instruments across the country as education policy continues to emphasize…
Does preprocessing change nonlinear measures of heart rate variability?
Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A
2002-11-01
This work investigated whether the methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals, or some nonlinear measures of HRV. Two preprocessing methods were used: convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and by nonlinear autoregressive moving average modelling and prediction. The scaling exponents alpha, alpha(1) and alpha(2) derived from detrended fluctuation analysis were calculated from the raw HRV time series and from the corresponding preprocessed signals. Cubic interpolation of the HRV time series did not significantly change any nonlinear characteristic studied in this work, while the convolution method affected only the alpha(1) index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
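A sketch of the cubic-interpolation preprocessing in Python, using SciPy; the 4 Hz resampling rate is a conventional HRV choice, not a value taken from the paper:

import numpy as np
from scipy.interpolate import CubicSpline

def resample_tachogram(rr, fs=4.0):
    """Uniformly resample an RR-interval series by cubic interpolation.

    rr: consecutive interbeat intervals in seconds. The heart period is
    placed at the cumulative beat times and evaluated on a regular grid
    with sampling frequency fs (Hz).
    """
    rr = np.asarray(rr, dtype=float)
    t_beats = np.cumsum(rr)
    spline = CubicSpline(t_beats, rr)
    t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
    return t_uniform, spline(t_uniform)

rr = np.random.default_rng(2).normal(0.8, 0.05, 500)   # synthetic RR series (s)
t, hrv = resample_tachogram(rr)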
76 FR 42610 - Airworthiness Directives; Turbomeca Arriel 1 Series Turboshaft Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... products listed above. The existing AD currently requires removing from service certain gas generator..., whichever occurs later. That AD also requires removing from service certain gas generator second stage... affected discs. We are proposing this AD to prevent failure of the gas generator second stage turbine disc...
78 FR 60798 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
...-0363; Directorate Identifier 2013-NM-031-AD] RIN 2120-AA64 Airworthiness Directives; Airbus Airplanes... directive (AD) for all Airbus Model A330-200, -300 and -200 Freighter series airplanes, and Model A340-200... information identified in this proposed AD, contact Airbus SAS--Airworthiness Office--EAL, 1 Rond Point...
77 FR 75825 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
... Airworthiness Directives; Airbus Airplanes AGENCY: Federal Aviation Administration (FAA), Department of... Airbus Model A330-200 and -300 series airplanes. This AD was prompted by a report of a prematurely... [Amended] 0 2. The FAA amends Sec. 39.13 by adding the following new AD: 2012-25-12 Airbus: Amendment 39...
77 FR 57003 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-17
... Airworthiness Directives; Airbus Airplanes AGENCY: Federal Aviation Administration (FAA), Department of... Airbus Model A318, A319, and A320 series airplanes. This AD was prompted by a report of a torn out.... The FAA amends Sec. 39.13 by adding the following new AD: 2012-18-12 Airbus: Amendment 39-17189...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-08
... reports of fuel leakage from the center tank. We are proposing this AD to detect and correct improperly... Condition (e) This AD results from reports of fuel leakage from the center tank. We are issuing this AD to...
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; di Rocco, Stefania; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; Peter, Thomas; Davison, Anthony C.
2010-05-01
Tools from geostatistics and extreme value theory are applied to analyze spatial correlations in total ozone for the southern mid-latitudes. The dataset used in this study is the NIWA assimilated total ozone dataset (Bodeker et al., 2001; Müller et al., 2008). Recently, new tools from extreme value theory (Coles, 2001; Ribatet, 2007) have been applied to the world's longest total ozone record, from Arosa, Switzerland (e.g. Staehelin et al., 1998a,b), and to 5 other long-term ground-based stations to describe extreme events in low and high total ozone (Rieder et al., 2010a,b,c). Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors, such as ENSO or NAO, and of chemical factors, such as cold Arctic vortex ozone losses, as well as of the major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, the atmospheric loading of ozone-depleting substances leads to a continuous modification of column ozone in the northern hemisphere, also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that extreme value theory allows the identification of many more such fingerprints than conventional time series analysis based on annual and seasonal mean values. In particular, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b,c). In the current study, patterns in the spatial correlation and frequency distributions of extreme events (e.g. ELOs and EHOs) are studied for the southern mid-latitudes, and it is analyzed whether the "fingerprints" found in the northern hemisphere also occur there. New insights into spatial patterns of total ozone for the southern mid-latitudes are presented. The influence of changes in atmospheric dynamics (e.g. tropospheric and lower-stratospheric pressure systems, ENSO), of major volcanic eruptions (e.g. Mt. Pinatubo) and of ozone-depleting substances (ODS) on column ozone over the southern mid-latitudes is analyzed for the period 1979-2007. References: Bodeker, G.E., Scott, J.C., Kreher, K., and McKenzie, R.L.: Global ozone trends in potential vorticity coordinates using TOMS and GOME intercompared against the Dobson network: 1978-1998, J. Geophys. Res., 106(D19), 23029-23042, 2001. Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN 1852334592, Springer, Berlin, 2001. Müller, R., Grooß, J.-U., Lemmen, C., Heinze, D., Dameris, M., and Bodeker, G.: Simple measures of ozone depletion in the polar stratosphere, Atmos. Chem. Phys., 8, 251-264, 2008. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010a): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD.
Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010b): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Rieder, H.E., Jancso, L.M., Staehelin, J., Maeder, J.A., Ribatet, M., Peter, T., and Davison, A.D. (2010c): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation. Staehelin, J., Kegel, R., and Harris, N.R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.
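For orientation, the peaks-over-threshold idea behind the cited POT package can be sketched in a few lines of Python with SciPy; the synthetic data and the 95th-percentile threshold below are illustrative assumptions, not values from the study:

import numpy as np
from scipy.stats import genpareto

def fit_pot(series, threshold):
    """Peaks-over-threshold fit: model exceedances with a generalized Pareto."""
    exceed = series[series > threshold] - threshold
    shape, loc, scale = genpareto.fit(exceed, floc=0.0)
    return shape, scale, len(exceed)

ozone = np.random.default_rng(3).gumbel(300.0, 25.0, 10000)   # synthetic DU series
shape, scale, n_exc = fit_pot(ozone, threshold=np.percentile(ozone, 95))
print(f"GPD shape={shape:.3f}, scale={scale:.1f}, exceedances={n_exc}")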
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series, or between equivalent structures such as ordered data sets. The proposed method is based on the sliding-window technique, defines a new type of correlation measure, and can be applied to time series from all domains of science and technology, whether experimental or simulated. A parameter characterizing the time series is computed for each window, and a cross-correlation analysis is carried out on the sets of values obtained for the series under investigation. We apply this method to the study of daily exchange rates of several currencies from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed, and a tentative crisis prediction is presented.
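A schematic Python rendering of the sliding-window procedure; the windowed statistic here is the standard deviation as a stand-in (the paper uses the Hurst exponent and an intermittency parameter), and all window settings are assumptions:

import numpy as np

def windowed_parameter(x, win, step, param):
    """Evaluate a characteristic parameter on sliding windows."""
    return np.array([param(x[i:i + win])
                     for i in range(0, len(x) - win + 1, step)])

def parameter_correlation(x, y, win=250, step=25, param=np.std):
    """Cross-correlate the window-wise parameter values of two series."""
    px = windowed_parameter(x, win, step, param)
    py = windowed_parameter(y, win, step, param)
    return np.corrcoef(px, py)[0, 1]

rng = np.random.default_rng(4)
eur = rng.normal(size=5000).cumsum()   # synthetic exchange-rate-like walks
gbp = rng.normal(size=5000).cumsum()
print(parameter_correlation(np.diff(eur), np.diff(gbp)))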
Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.
2014-01-01
We propose a new method for the joint segmentation of monotonically growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem on the spatio-temporal graph of pixels, in which we impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
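The growth constraint can be illustrated on a toy 1-D image sequence with an s-t minimum cut (networkx is used for brevity; the paper's own implementation and energy terms are not reproduced, and the data terms below are simple illustrative penalties):

import numpy as np
import networkx as nx

def segment_growing(frames, smooth=0.15):
    """Min-cut segmentation of a 1-D image sequence under shape growth.

    Source side = foreground. Cutting s->v costs the penalty for labelling
    v background, v->t the penalty for foreground; the huge directed arc
    (t, p) -> (t+1, p) makes 'foreground then background' cuts prohibitively
    expensive, i.e. the segmented shape can only grow over time.
    """
    T, N = frames.shape
    big = float(T * N)                     # effectively infinite capacity
    g = nx.DiGraph()
    for t in range(T):
        for p in range(N):
            v = (t, p)
            g.add_edge('s', v, capacity=float(frames[t, p]))        # bg penalty
            g.add_edge(v, 't', capacity=float(1.0 - frames[t, p]))  # fg penalty
            if p + 1 < N:                  # spatial smoothness, both directions
                g.add_edge(v, (t, p + 1), capacity=smooth)
                g.add_edge((t, p + 1), v, capacity=smooth)
            if t + 1 < T:                  # monotonic growth constraint
                g.add_edge(v, (t + 1, p), capacity=big)
    _, (fg, _) = nx.minimum_cut(g, 's', 't')
    labels = np.zeros((T, N), dtype=bool)
    for v in fg - {'s'}:
        labels[v] = True
    return labels

rng = np.random.default_rng(5)
truth = np.array([[abs(p - 10) <= t for p in range(21)] for t in range(1, 6)])
frames = np.clip(truth + rng.normal(0.0, 0.3, truth.shape), 0.0, 1.0)
print(segment_growing(frames).astype(int))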
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method for this purpose: it repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance, assigning values for a given iteration based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities used for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for projecting future parameter performance from previous data. Every prediction method considered makes assumptions that the time series data must satisfy for the projections to be accurate. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, largely conform to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results, by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adhere to the assumptions made by the prediction method. When a parameter's performance data do not adhere to those assumptions, the use of prediction has no notable adverse impact on the algorithm's performance.
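As an illustration of the forecasting step, a minimal autoregressive linear-regression predictor of the next performance value; the lag count and the use of plain OLS are assumptions for the sketch, not details from the study:

import numpy as np

def predict_next_performance(history, lags=5):
    """One-step-ahead OLS forecast from the last `lags` performance values."""
    h = np.asarray(history, dtype=float)
    X = np.array([h[i:i + lags] for i in range(len(h) - lags)])
    y = h[lags:]
    X1 = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.append(h[-lags:], 1.0) @ coef)

# Selection probabilities for candidate parameter values could then be
# set proportional to each candidate's predicted performance.
perf = (0.5 + 0.3 * np.sin(np.arange(40) / 5)
        + np.random.default_rng(6).normal(0, 0.02, 40))
print(predict_next_performance(perf))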
78 FR 53638 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-30
.... Model CL-600-2C10 (Regional Jet Series 700, 701, & 702) airplanes, Model CL-600-2D15 (Regional Jet Series 705) airplanes, Model CL-600-2D24 (Regional Jet Series 900) airplanes, and Model CL- 600-2E25 (Regional Jet Series 1000) airplanes. This AD was prompted by reports of erratic pitch movement and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-28
.... TPE331-10 and TPE331-11 Series Turboprop Engines AGENCY: Federal Aviation Administration, DOT. ACTION... and TPE331-11 series turboprop engines. That action would have required adding 360 first stage turbine... series turboprop engines, was published in the Federal Register on June 22, 2010 (75 FR 35354). The...
Code of Federal Regulations, 2014 CFR
2014-04-01
... numbering system of any series reaches “1,000,000” the proprietor may begin the series again by adding an alphabetical prefix or suffix to the series; and (iii) When there is a change in proprietorship or a change in the individual, firm, corporate name, or trade name, the series in use at the time of the change may...
Research on signal processing method for total organic carbon of water quality online monitor
NASA Astrophysics Data System (ADS)
Ma, R.; Xie, Z. X.; Chu, D. Z.; Zhang, S. W.; Cao, X.; Wu, N.
2017-08-01
At present there is no rapid, stable and effective approach to measuring total organic carbon (TOC) in marine environmental online monitoring. This paper therefore proposes a chemiluminescence signal processing method for an online TOC monitor. The weak optical signal detected by the photomultiplier tube is enhanced and converted by a series of signal processing modules: a phase-locked (lock-in) amplifier module, a fourth-order band-pass filter module and an AD conversion module. Extensive comparative testing showed that, relative to the traditional method and with sufficient accuracy maintained, this chemiluminescence signal processing method offers greatly improved measuring speed and high practicability for online monitoring.
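The phase-sensitive detection stage can be sketched in Python as a software lock-in; the modulation frequency, filter cutoff and noise level below are illustrative assumptions, and a low-pass Butterworth stands in for the analogue filter chain described:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def lockin_amplitude(sig, f_ref, fs, cutoff=2.0):
    """Software phase-sensitive (lock-in) detection.

    Multiplies the input by quadrature references at the modulation
    frequency and low-pass filters both channels (4th-order Butterworth);
    the channel magnitude recovers the amplitude of the weak component
    buried in broadband noise.
    """
    t = np.arange(len(sig)) / fs
    i_ch = sig * np.cos(2 * np.pi * f_ref * t)
    q_ch = sig * np.sin(2 * np.pi * f_ref * t)
    sos = butter(4, cutoff, fs=fs, output='sos')
    return 2.0 * np.hypot(sosfiltfilt(sos, i_ch), sosfiltfilt(sos, q_ch))

fs, f_mod = 1000.0, 37.0
t = np.arange(0, 10, 1 / fs)
weak = 0.05 * np.sin(2 * np.pi * f_mod * t)               # weak PMT signal
noisy = weak + np.random.default_rng(7).normal(0, 0.5, t.size)
amp = lockin_amplitude(noisy, f_mod, fs)
print(amp[t.size // 2])    # roughly recovers the 0.05 amplitude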
McDermott, Imelda; Checkland, Kath; Coleman, Anna; Osipovič, Dorota; Petsoulas, Christina; Perkins, Neil
2017-01-01
Objectives To explore the 'added value' that general practitioners (GPs) bring to commissioning in the English NHS. We describe the experience of Clinical Commissioning Groups (CCGs) in the context of previous clinically led commissioning policy initiatives. Methods Realist evaluation. We identified the programme theories underlying the claims made about GP 'added value' in commissioning from interviews with key informants. We tested these theories against observational data from four case study sites to explore whether and how these claims were borne out in practice. Results The complexity of CCG structures means CCGs are quite different from one another with different distributions of responsibilities between the various committees. This makes it difficult to compare CCGs with one another. Greater GP involvement was important but it was not clear where and how GPs could add most value. We identified some of the mechanisms and conditions which enable CCGs to maximize the 'added value' that GPs bring to commissioning. Conclusion To maximize the value of clinical input, CCGs need to invest time and effort in preparing those involved, ensuring that they systematically gather evidence about service gaps and problems from their members, and engaging members in debate about the future shape of services.
Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin
2018-06-15
In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added to chilli samples. The proposed method not only has the advantage of high sensitivity over the traditional fluorescence method but also fully exploits the "second-order advantage". Pure signals of the analytes were successfully extracted from severely interfered EEM profiles using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlap and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method avoided loss of information about the analytes of interest. Replacing the complicated "chemical or physical separation" strategy with "mathematical separation" is both more effective and more environmentally friendly. A series of statistical parameters, including figures of merit and intra-day (≤1.9%) and inter-day (≤6.6%) RSDs, were calculated to validate the accuracy of the proposed method, and the authoritative HPLC-FLD method was adopted to verify its qualitative and quantitative results. The comparison of the two methods also showed that the ATLD-EEM method is accurate, rapid, simple and green, and can be expected to develop into an attractive alternative for the simultaneous and interference-free determination of rhodamine dyes illegally added to complex matrices.
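ATLD's specific update rules are not reproduced here; the sketch below fits the same trilinear (PARAFAC) model by plain alternating least squares on a synthetic EEM stack, which conveys how the "mathematical separation" works. All dimensions and data are hypothetical:

import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product; rows ordered with a's index major."""
    return (a[:, None, :] * b[None, :, :]).reshape(-1, a.shape[1])

def trilinear_als(X, r, n_iter=200):
    """Alternating least-squares trilinear decomposition of an I x J x K stack.

    A, B, C are the sample, emission and excitation factor matrices of the
    trilinear model X[i,j,k] = sum_r A[i,r] B[j,r] C[k,r] (plus residuals).
    """
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    B, C = rng.random((J, r)), rng.random((K, r))
    for _ in range(n_iter):
        A = np.transpose(X, (0, 2, 1)).reshape(I, -1) @ khatri_rao(C, B) \
            @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = np.transpose(X, (1, 2, 0)).reshape(J, -1) @ khatri_rao(C, A) \
            @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = np.transpose(X, (2, 1, 0)).reshape(K, -1) @ khatri_rao(B, A) \
            @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Synthetic three-component EEM stack: the recovered sample scores in A
# are proportional to concentration even with interfering components.
rng = np.random.default_rng(9)
A0, B0, C0 = rng.random((6, 3)), rng.random((30, 3)), rng.random((40, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + rng.normal(0, 0.01, (6, 30, 40))
A, B, C = trilinear_als(X, r=3)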
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if they are non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP), i.e. adding or subtracting percentile values, despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed, and compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations against CP, to determine whether MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric elements drawn from seven countries worldwide were expressed as standard scores, and these were tested for consistency with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models, and percentile values were estimated for each model by CP, by MCM with Gaussian-distributed data, or by MCM with empirically distributed data. The 5th and 95th percentile values of the global dataset are shown to be asymmetrically distributed; anthropometric data are not Gaussian distributed. MCM with empirical multipliers gave more accurate estimates of the 5th and 95th percentile values, and the MCM method is more accurate than adding or subtracting percentiles.
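The MCM formulas themselves are not reproduced here; the sketch below only illustrates why adding percentiles (CP) misestimates a combined dimension, using the textbook variance of a sum of correlated Gaussian elements (all numbers and the correlation value are hypothetical):

import numpy as np
from scipy.stats import norm

def combined_p95(means, sds, corr):
    """95th percentile of the sum of two correlated Gaussian elements.

    Var(X+Y) = sd_x^2 + sd_y^2 + 2*corr*sd_x*sd_y -- the cross term that
    simple percentile addition (CP) implicitly ignores.
    """
    mu = sum(means)
    sd = np.sqrt(sds[0] ** 2 + sds[1] ** 2 + 2 * corr * sds[0] * sds[1])
    return mu + norm.ppf(0.95) * sd

# Two hypothetical seated-posture elements (cm)
means, sds = (60.0, 10.0), (3.0, 1.5)
cp = sum(m + norm.ppf(0.95) * s for m, s in zip(means, sds))  # CP: add P95s
print(cp, combined_p95(means, sds, corr=0.4))  # CP overestimates the true P95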
ERIC Educational Resources Information Center
Rodgers, Timothy
2007-01-01
The 2003 UK higher education White Paper suggested that the sector needed to re-examine the potential of the value added concept. This paper describes a possible methodology for developing a performance indicator based on the economic value added to graduates. The paper examines how an entry-quality-adjusted measure of a graduate's…
Clinical utility of FDG-PET for the differential diagnosis among the main forms of dementia.
Nestor, Peter J; Altomare, Daniele; Festari, Cristina; Drzezga, Alexander; Rivolta, Jasmine; Walker, Zuzana; Bouwman, Femke; Orini, Stefania; Law, Ian; Agosta, Federica; Arbizu, Javier; Boccardi, Marina; Nobili, Flavio; Frisoni, Giovanni Battista
2018-05-07
To assess the clinical utility of FDG-PET as a diagnostic aid for differentiating Alzheimer's disease (AD; both typical and atypical forms), dementia with Lewy bodies (DLB), frontotemporal lobar degeneration (FTLD), vascular dementia (VaD) and non-degenerative pseudodementia. A comprehensive literature search was conducted using the PICO model to extract evidence from relevant studies, and an expert panel then voted on six diagnostic scenarios using the Delphi method. The level of empirical study evidence for the use of FDG-PET was considered good for discriminating DLB from AD; fair for discriminating FTLD from AD; poor for atypical AD; and lacking for discriminating DLB from FTLD, AD from VaD, and for pseudodementia. Delphi voting led to consensus in all scenarios within two iterations. Panellists supported the use of FDG-PET for all PICOs, including those where study evidence was poor or lacking, based on its negative predictive value and on the assistance it provides when typical patterns of hypometabolism for a given diagnosis are observed. Although there is an overall lack of evidence on which to base strong recommendations, it was generally concluded that FDG-PET has a diagnostic role in all scenarios. Prospective studies targeting diagnostically uncertain patients to assess the added value of FDG-PET would be highly desirable.
Acute Diarrheal Syndromic Surveillance
Kam, H.J.; Choi, S.; Cho, J.P.; Min, Y.G.; Park, R.W.
2010-01-01
Objective In an effort to identify and characterize the environmental factors that affect the number of patients with acute diarrheal (AD) syndrome, we developed and tested two regional surveillance models that add holiday and weather information to visitor records from emergency medical facilities in the Seoul metropolitan area of Korea. Methods Using 1,328,686 emergency department visitor records from the National Emergency Department Information System (NEDIS) together with holiday and weather information, two seasonal ARIMA models were constructed: (1) a simple model (total patient numbers only) and (2) an environmental factor-added model. The stationary R-squared was used as an in-sample goodness-of-fit statistic for the constructed models, and the cumulative mean of the Mean Absolute Percentage Error (MAPE) was used to measure post-sample forecast accuracy over the following month. Results The (1,0,1)(0,1,1)7 ARIMA model gave an adequate fit to the daily number of AD patient visits over 12 months in both cases. Among the candidate features, the total number of patient visits was selected as a commonly influential independent variable; for the environmental factor-added model, holidays and daily precipitation were additional statistically significant predictors. Stationary R-squared values ranged over 0.651-0.828 (simple) and 0.805-0.844 (environmental factor-added), with p<0.05; the corresponding MAPE values ranged over 0.090-0.120 and 0.089-0.114. Conclusion The environmental factor-added model yielded better MAPE values. Holiday and weather information appear to be crucial, in addition to visitor and assessment records, for constructing an accurate syndromic surveillance model for AD. PMID:23616829
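A sketch of the environmental factor-added model with statsmodels, using the seasonal orders reported in the abstract; the synthetic data, the weekend-as-holiday flag and the 30-day holdout scheme are illustrative assumptions:

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
n = 365
idx = pd.date_range('2009-01-01', periods=n, freq='D')
# Exogenous regressors mirroring the abstract: total ED visits,
# a holiday flag (approximated here by weekends) and precipitation.
exog = pd.DataFrame({'total_visits': rng.poisson(3600, n),
                     'holiday': (idx.dayofweek >= 5).astype(int),
                     'precip_mm': rng.gamma(0.5, 8.0, n)}, index=idx)
ad = pd.Series(40 + 0.005 * exog['total_visits'] + 5 * exog['holiday']
               + rng.normal(0, 4, n), index=idx, name='ad_visits')

# Seasonal ARIMA (1,0,1)(0,1,1)7 with a weekly seasonal difference,
# evaluated by MAPE on a one-month holdout.
train, test = ad[:-30], ad[-30:]
fit = SARIMAX(train, exog=exog[:-30], order=(1, 0, 1),
              seasonal_order=(0, 1, 1, 7)).fit(disp=False)
pred = fit.forecast(steps=30, exog=exog[-30:])
mape = float(np.mean(np.abs((test - pred) / test)))
print(f"hold-out MAPE: {mape:.3f}")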