Dynamic control of remelting processes
Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.
2000-01-01
An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.
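A minimal sketch of the control idea described above, not the patented controller: a process model estimates an unmeasured variable (here the electrode gap) from measured current and voltage, and a feedback law adjusts an input (the electrode drive velocity) toward a set point. The gain values and the linear gap/voltage relation are illustrative assumptions.

```python
# Hedged sketch: model-based estimation of an unmeasured process variable plus
# PI feedback on that estimate. All constants and relations are hypothetical.

def estimate_gap(voltage, current, k_arc=0.05, v_offset=20.0):
    """Hypothetical process model: arc voltage grows linearly with electrode gap."""
    return max((voltage - v_offset) / k_arc, 0.0) if current > 0 else 0.0

def pi_velocity(gap_est, gap_setpoint, integral, dt, kp=0.8, ki=0.1):
    """PI feedback on the estimated gap; returns drive velocity and updated integral."""
    error = gap_setpoint - gap_est
    integral += error * dt
    return kp * error + ki * integral, integral

# one control step with made-up measurements
integral = 0.0
gap = estimate_gap(voltage=20.6, current=6000.0)
velocity, integral = pi_velocity(gap, gap_setpoint=10.0, integral=integral, dt=1.0)
print(f"estimated gap = {gap:.1f} mm, commanded electrode velocity = {velocity:.2f} mm/s")
```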
The HPT Value Proposition in the Larger Improvement Arena.
ERIC Educational Resources Information Center
Wallace, Guy W.
2003-01-01
Discussion of human performance technology (HPT) emphasizes the key variable, which is the human variable. Highlights include the Ishikawa Diagram; human performance as one variable of process performance; collaborating with other improvement approaches; value propositions; and benefits to stakeholders, including real return on investments. (LRW)
Selecting the process variables for filament winding
NASA Technical Reports Server (NTRS)
Calius, E.; Springer, G. S.
1986-01-01
A model is described which can be used to determine the appropriate values of the process variables for filament winding cylinders. The process variables which can be selected by the model include the winding speed, fiber tension, initial resin degree of cure, and the temperatures applied during winding, curing, and post-curing. The effects of these process variables on the properties of the cylinder during and after manufacture are illustrated by a numerical example.
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Siopsis, George; Weedbrook, Christian
2018-02-01
With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
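For orientation, the expensive step the quantum subroutine targets is the linear-algebra core of classical Gaussian process regression. A minimal classical sketch of that step (RBF kernel, one test point, synthetic data; not the quantum algorithm itself):

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 30)
y = np.sin(x) + 0.1 * rng.standard_normal(30)

K = rbf(x, x) + 0.1**2 * np.eye(30)          # kernel matrix with noise term
x_star = np.array([2.5])
k_star = rbf(x, x_star)                      # covariances with the test input

alpha = np.linalg.solve(K, y)                # the O(n^3) step a quantum routine would accelerate
mean = k_star.T @ alpha
var = rbf(x_star, x_star) - k_star.T @ np.linalg.solve(K, k_star)
print(mean.item(), var.item())
```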
Jiménez, L; Pérez, I; López, F; Ariza, J; Rodríguez, A
2002-06-01
The influence of independent variables in the pulping of wheat straw by use of an ethanol-acetone-water mixture [processing temperature and time, ethanol/(ethanol + acetone) value and (ethanol + acetone)/(ethanol + acetone + water) value] and of the number of PFI beating revolutions to which the pulp was subjected, on the properties of the resulting pulp (yield and Schopper-Riegler index) and of the paper sheets obtained from it (breaking length, stretch, burst index and tear index) was examined. By using a central composite factor design and the BMDP software suite, equations that relate each dependent variable to the different independent variables were obtained that reproduced the experimental results for the dependent variables with errors less than 30% at temperatures, times, ethanol/(ethanol + acetone) value, (ethanol + acetone)/(ethanol + acetone + water) value and numbers of PFI beating revolutions in the ranges 140-180 degrees C, 60-120 min, 25-75%, 35-75% and 0-1750, respectively. Using values of the independent variables over the variation ranges considered provided the following optimum values of the dependent variables: 78.17% (yield), 15.21 degrees SR (Schopper-Riegler index), 5265 m (breaking length), 1.94% (stretch), 2.53 kN/g (burst index) and 4.26 mN m2/g (tear index). Obtaining reasonably good paper sheets (with properties that differed by less than 15% from their optimum values except for the burst index, which was 28% lower) entailed using a temperature of 180 degrees C, an ethanol/(ethanol + acetone) value of 50%, an (ethanol + acetone)/(ethanol + acetone + water) value of 75%, a processing time of 60 min and a number of PFI beating revolutions of 1750. The yield was 32% lower under these conditions, however. A comparison of the results provided by ethanol, acetone and ethanol-acetone pulping revealed that the second and third processes, which provided an increased yield, were the best choices. On the other hand, if the pulp is to be refined, ethanol pulping is the process of choice.
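A minimal sketch of the modelling step used in studies of this kind: fit a second-order (response-surface) polynomial relating a pulp property to coded independent variables by least squares. The data below are synthetic, not the wheat-straw measurements, and only two of the paper's variables are shown.

```python
import numpy as np

rng = np.random.default_rng(1)
temp = rng.uniform(-1, 1, 25)      # coded temperature (140-180 degC mapped to [-1, 1])
time = rng.uniform(-1, 1, 25)      # coded processing time (60-120 min)
yield_pct = 75 + 3*temp - 2*time - 1.5*temp*time - 2*temp**2 + rng.normal(0, 0.5, 25)

# design matrix: constant, linear, interaction and quadratic terms
X = np.column_stack([np.ones_like(temp), temp, time, temp*time, temp**2, time**2])
coef, *_ = np.linalg.lstsq(X, yield_pct, rcond=None)
pred = X @ coef
rel_err = np.abs(pred - yield_pct) / yield_pct
print("coefficients:", np.round(coef, 2), "max relative error:", round(rel_err.max(), 4))
```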
Haenggi, Matthias; Ypparila-Wolters, Heidi; Hauser, Kathrin; Caviezel, Claudio; Takala, Jukka; Korhonen, Ilkka; Jakob, Stephan M
2009-01-01
We studied intra-individual and inter-individual variability of two online sedation monitors, BIS and Entropy, in volunteers under sedation. Ten healthy volunteers were sedated in a stepwise manner with doses of either midazolam and remifentanil or dexmedetomidine and remifentanil. One week later the procedure was repeated with the remaining drug combination. The doses were adjusted to achieve three different sedation levels (Ramsay Scores 2, 3 and 4) and controlled by a computer-driven drug-delivery system to maintain stable plasma concentrations of the drugs. At each level of sedation, BIS and Entropy (response entropy and state entropy) values were recorded for 20 minutes. Baseline recordings were obtained before the sedative medications were administered. Both inter-individual and intra-individual variability increased as the sedation level deepened. Entropy values showed greater variability than BIS values, and the variability was greater during dexmedetomidine/remifentanil sedation than during midazolam/remifentanil sedation. The large intra-individual and inter-individual variability of BIS and Entropy values in sedated volunteers makes the determination of sedation levels by processed electroencephalogram (EEG) variables impossible. Reports in the literature which draw conclusions based on processed EEG variables obtained from sedated intensive care unit (ICU) patients may be inaccurate due to this variability. ClinicalTrials.gov No. NCT00641563.
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2015-01-01
Because humans and animals encounter various situations, the ability to adaptively decide upon responses to any situation is essential. To date, however, decision processes and the underlying neural substrates have been investigated under specific conditions; thus, little is known about how various conditions influence one another in these processes. In this study, we designed a binary choice task with variable- and fixed-reward conditions and investigated neural activities of the prelimbic cortex and dorsomedial striatum in rats. Variable- and fixed-reward conditions induced flexible and inflexible behaviors, respectively; one of the two conditions was randomly assigned in each trial for testing the possibility of condition interference. Rats were successfully conditioned such that they could find the better reward holes of variable-reward-condition and fixed-reward-condition trials. A learning interference model, which updated expected rewards (i.e., values) used in variable-reward-condition trials on the basis of combined experiences of both conditions, better fit choice behaviors than conventional models which updated values in each condition independently. Thus, although rats distinguished the trial condition, they updated values in a condition-interference manner. Our electrophysiological study suggests that this interfering value-updating is mediated by the prelimbic cortex and dorsomedial striatum. First, some prelimbic cortical and striatal neurons represented the action-reward associations irrespective of trial conditions. Second, the striatal neurons kept tracking the values of variable-reward condition even in fixed-reward-condition trials, such that values were possibly interferingly updated even in the fixed-reward condition.
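A toy sketch of the two model classes compared above: value updating done independently per condition versus "interfering" updating in which experience from both conditions modifies a shared value. Learning rate, task structure and reward sequence are illustrative, not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(11)
alpha = 0.2
trials = [("variable", 1), ("fixed", 0), ("variable", 1), ("fixed", 1)] * 50

q_independent = {"variable": 0.0, "fixed": 0.0}   # one value per condition
q_shared = 0.0                                    # single value updated by both conditions

for condition, reward in trials:
    q_independent[condition] += alpha * (reward - q_independent[condition])
    q_shared += alpha * (reward - q_shared)       # interference: both conditions update it

print("independent values:", {k: round(v, 2) for k, v in q_independent.items()})
print("shared (interfering) value:", round(q_shared, 2))
```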
Hierarchical Synthesis of Coastal Ecosystem Health Indicators at Karimunjawa National Marine Park
NASA Astrophysics Data System (ADS)
Danu Prasetya, Johan; Ambariyanto; Supriharyono; Purwanti, Frida
2018-02-01
The coastal ecosystem of Karimunjawa National Marine Park (KNMP) is facing various pressures, including from human activity. Periodic monitoring of coastal ecosystem health is needed to evaluate ecosystem condition, and systematic and consistent indicators are needed for such monitoring. This paper presents a hierarchical synthesis of coastal ecosystem health indicators using the Analytic Hierarchy Process (AHP) method. The hierarchical synthesis is obtained from a weighting process based on pairwise comparisons derived from expert judgments. The indicators in this synthesis consist of three levels of variables, i.e. main variables, sub-variables and operational variables. As a result of the assessment, the coastal ecosystem health indicators consist of three main variables, i.e. State of Ecosystem, Pressure and Management. The main variables State of Ecosystem and Management obtained the same weight, i.e. 0.400, while Pressure was weighted 0.200. Each main variable consists of several sub-variables: coral reef, reef fish, mangrove and seagrass for State of Ecosystem; fisheries and marine tourism activity for Pressure; and planning and regulation, institutional, and infrastructure and financing for Management. The highest-valued sub-variables of State of Ecosystem, Pressure and Management were coral reef (0.186), marine tourism pressure (0.133) and institutional (0.171), respectively. The highest-valued operational variables of State of Ecosystem, Pressure and Management were percent coral cover (0.058), marine tourism pressure (0.133) and presence of a zonation plan, regulation and socialization of the monitoring program (0.53), respectively. Potential pressure from marine tourism activity is the variable that most affects the health of the ecosystem. The results of this research suggest that stronger conservation strategies are needed to face the pressures from marine tourism activities.
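The AHP weighting step can be illustrated with a few lines of code: weights are the normalised principal eigenvector of a pairwise comparison matrix. The 3x3 matrix below is an illustrative set of judgments for State of Ecosystem, Pressure and Management (chosen so the weights come out near the reported 0.4 / 0.2 / 0.4), not the experts' actual judgments.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],     # State of Ecosystem vs (Ecosystem, Pressure, Management)
              [0.5, 1.0, 0.5],     # Pressure
              [1.0, 2.0, 1.0]])    # Management

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# consistency index / ratio check, with the standard random index for n = 3
lam_max = np.real(eigvals).max()
ci = (lam_max - 3) / (3 - 1)
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / 0.58, 3))
```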
Derivation of sequential, real-time, process-control programs
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Schneider, Fred B.; Budhiraja, Navin
1991-01-01
The use of weakest-precondition predicate transformers in the derivation of sequential, process-control software is discussed. Only one extension to Dijkstra's calculus for deriving ordinary sequential programs was found to be necessary: function-valued auxiliary variables. These auxiliary variables are needed for reasoning about states of a physical process that exists during program transitions.
How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?
Hagos, Samson; Ruby Leung, L.; Zhao, Chun; ...
2018-02-10
Convection permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus, observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.
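A rough sketch of the kind of diagnostic described: estimate how disproportionately columns whose PW exceeds a critical value contribute to precipitation variability. The PW samples and the nonlinear precipitation-PW relation below are synthetic stand-ins, not MPAS-A or DYNAMO values.

```python
import numpy as np

rng = np.random.default_rng(2)
pw = rng.gamma(shape=30, scale=1.8, size=100_000)   # mm, synthetic column precipitable water
pw_cr = 60.0                                        # assumed critical PW (mm)
# precipitation picks up sharply once PW exceeds PWcr (illustrative functional form)
precip = np.where(pw > pw_cr, np.exp(0.15 * (pw - pw_cr)) - 1, 0.0)

exceed = pw > pw_cr
anom2 = (precip - precip.mean()) ** 2               # squared anomalies about the mean
print(f"{exceed.mean():.1%} of columns exceed PWcr but contribute "
      f"{anom2[exceed].sum() / anom2.sum():.1%} of the precipitation variance")
```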
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
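A toy numerical illustration of how a decision threshold emerges from optimisation rather than being specified directly (not the authors' patch-occupancy example): tabulate the optimal action over a grid of state values, with a utility threshold built into the objective, and read off the state value at which the optimal action switches. All dynamics, costs and thresholds are made up.

```python
import numpy as np

def utility(occupancy):                        # utility threshold: full value once occupancy >= 0.6
    return 1.0 if occupancy >= 0.6 else occupancy / 0.6

def expected_value(state, action):
    gain, cost = (0.2, 0.15) if action == "manage" else (0.0, 0.0)
    next_state = min(state * 0.9 + gain, 1.0)  # simple occupancy dynamics
    return utility(next_state) - cost

states = np.linspace(0, 1, 101)                # proportion of occupied patches
best = [max(("manage", "do nothing"), key=lambda a: expected_value(s, a)) for s in states]
switch = next(s for s, a in zip(states, best) if a == "do nothing")
print(f"decision threshold: manage whenever occupancy is below about {switch:.2f}")
```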
Lv, Shao-Wa; Liu, Dong; Hu, Pan-Pan; Ye, Xu-Yan; Xiao, Hong-Bin; Kuang, Hai-Xue
2010-03-01
To optimize the process of extracting effective constituents from Aralia elata by response surface methodology. The independent variables were ethanol concentration, reflux time and solvent fold; the dependent variable was the extraction rate of total saponins from Aralia elata. Linear or non-linear mathematical models were used to estimate the relationship between independent and dependent variables, and response surface methodology was used to optimize the extraction process. Predictive ability was assessed by comparing the observed and predicted values. The regression coefficient of the fitted binomial (second-order polynomial) model was as high as 0.9617, and the optimum extraction conditions were 70% ethanol, 2.5 hours of reflux, 20-fold solvent and three extraction cycles. The bias between observed and predicted values was -2.41%. These results show that the optimized model is highly predictive.
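A small sketch of the optimisation step that follows the fit: once a quadratic response-surface model has been estimated, search it for the best operating point within the coded variable ranges. The coefficients below are illustrative, not the fitted Aralia elata model.

```python
import numpy as np
from scipy.optimize import minimize

coef = {"b0": 60.0, "b1": 8.0, "b2": 5.0, "b3": 3.0,
        "b11": -6.0, "b22": -4.0, "b33": -2.5}          # illustrative quadratic model

def predicted_yield(x):
    e, t, s = x                                   # coded ethanol %, reflux time, solvent fold
    return (coef["b0"] + coef["b1"]*e + coef["b2"]*t + coef["b3"]*s
            + coef["b11"]*e**2 + coef["b22"]*t**2 + coef["b33"]*s**2)

res = minimize(lambda x: -predicted_yield(x), x0=[0.0, 0.0, 0.0],
               bounds=[(-1, 1)] * 3)              # coded variable range
print("optimum (coded):", np.round(res.x, 2), "predicted yield:", round(-res.fun, 2))
```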
First-Pass Processing of Value Cues in the Ventral Visual Pathway.
Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E
2018-02-19
Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in monkey ventral visual pathway) beginning 81 ms after stimulus onset-essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Hiromori, Tomohito
2009-01-01
The purpose of this study is to examine a process model of L2 learners' motivation. To investigate the overall process of motivation, the motivation of 148 university students was analyzed. Data were collected on three variables from the pre-decisional phase of motivation (i.e., value, expectancy, and intention) and four variables from the…
Analysis of the semi-permanent house in Merauke city in terms of aesthetic value in architecture
NASA Astrophysics Data System (ADS)
Topan, Anton; Octavia, Sari; Soleman, Henry
2018-05-01
Semi-permanent houses, locally called “Rumah Kancingan”, are the house type commonly found in Merauke city. They are called semi-permanent because the main structure is wood even though the walls are brick. This research analyzes the semi-permanent house in terms of aesthetic value. It is a qualitative study with data collected through a questionnaire, direct field observation and a literature review. The questionnaire data were then processed using SPSS to obtain the influence of the independent variables on the dependent variable. Color, ornament, shape of the door and window, and shape of the roof (independent variables) together explain 97.1% of the aesthetics of the semi-permanent house, and the SPSS coefficient output shows p-values < 0.05, meaning these independent variables have a significant effect on the aesthetic variable. The semi-permanent and wooden-structure variables explain 98.6% of aesthetics, and the SPSS coefficient results likewise show p-values < 0.05, meaning these independent variables also have a significant effect on the aesthetic variable.
Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo
2011-08-01
Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
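A minimal Monte Carlo sketch in the spirit of the canned-mushroom calculation: sample the initial spore load and the decimal reduction time (D-value) at the lethal temperature, compute the constant-temperature process time needed to reach a target survivor level, and take a high percentile for the required confidence. All distribution parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000
log_n0 = rng.normal(loc=2.0, scale=0.5, size=n_sim)                 # log10 initial spores/container
d_value = rng.lognormal(mean=np.log(0.8), sigma=0.2, size=n_sim)    # D-value at 110 degC, minutes
target_log = -9.0                                                   # log10 spores/container target

process_time = d_value * (log_n0 - target_log)                      # t = D * (log N0 - log Nf)
print("mean time:", round(process_time.mean(), 1), "min;",
      "95th percentile:", round(np.percentile(process_time, 95), 1), "min")
```

The gap between the mean and the 95th percentile illustrates the paper's point that designing to average values can leave a substantial probability of missing the processing objective.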
Process-based modelling of the nutritive value of forages: a review
USDA-ARS?s Scientific Manuscript database
Modelling sward nutritional value (NV) is of particular importance to understand the interactions between grasslands, livestock production, environment and climate-related impacts. Variables describing nutritive value vary significantly between ruminant production systems, but two types are commonly...
Martínez-Martínez, Víctor; Baladrón, Carlos; Gomez-Gil, Jaime; Ruiz-Ruiz, Gonzalo; Navas-Gracia, Luis M; Aguiar, Javier M; Carro, Belén
2012-10-17
This paper presents a system based on an Artificial Neural Network (ANN) for estimating and predicting environmental variables related to tobacco drying processes. This system has been validated with temperature and relative humidity data obtained from a real tobacco dryer with a Wireless Sensor Network (WSN). A fitting ANN was used to estimate temperature and relative humidity in different locations inside the tobacco dryer and to predict them with different time horizons. An error under 2% can be achieved when estimating temperature as a function of temperature and relative humidity in other locations. Moreover, an error around 1.5 times lower than that obtained with an interpolation method can be achieved when predicting the temperature inside the tobacco mass as a function of its present and past values with time horizons over 150 minutes. These results show that the tobacco drying process can be improved taking into account the predicted future value of the monitored variables and the estimated actual value of other variables using a fitting ANN as proposed.
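A minimal sketch of the prediction task described: train a small feed-forward network to predict a temperature reading some steps ahead from its present and past values. The time series below is synthetic; the paper used a fitting ANN on WSN data from a real dryer, and the network size here is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(2000)
temp = 55 + 8 * np.sin(2 * np.pi * t / 480) + rng.normal(0, 0.3, t.size)  # synthetic dryer temperature

lags, horizon = 6, 15                       # use 6 past samples to predict 15 steps ahead
n = temp.size
X = np.array([temp[i:i + lags] for i in range(n - lags - horizon)])
y = np.array([temp[i + lags - 1 + horizon] for i in range(n - lags - horizon)])
Xs = (X - temp.mean()) / temp.std()         # centre and scale inputs to help the optimiser

split = 1500
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(Xs[:split], y[:split])
err = np.abs(model.predict(Xs[split:]) - y[split:])
print("mean absolute prediction error:", round(err.mean(), 2), "degC")
```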
ERIC Educational Resources Information Center
Jung, Jae Yup
2013-01-01
This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
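A sketch of the two treatments of a wide-ranging dependent variable on synthetic data: weighted least squares with weights implied by a constant coefficient of variation, versus regression of the log-transformed variable. The one-predictor model, noise level and data are illustrative only, and the abstract's MATLAB experiments are simply re-expressed here in Python.

```python
import numpy as np

rng = np.random.default_rng(5)
x = 10 ** rng.uniform(-1, 2, 300)                 # predictor spanning three decades
y = (3.0 * x + 1.0) * rng.lognormal(0, 0.2, 300)  # dependent variable with constant-CV noise

A = np.column_stack([np.ones_like(x), x])

# error-based weighting: constant CV means sd_i proportional to y_i, so weight_i = 1 / y_i^2
w = 1.0 / y**2
beta_wls, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)

# unweighted fit for contrast (dominated by the largest values)
beta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# log-transform route: regress log10(y) on log10(x); note this changes the model form
beta_log, *_ = np.linalg.lstsq(np.column_stack([np.ones_like(x), np.log10(x)]),
                               np.log10(y), rcond=None)

print("constant-CV WLS:", np.round(beta_wls, 3))
print("unweighted OLS :", np.round(beta_ols, 3))
print("log-log fit    :", np.round(beta_log, 3))
```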
NASA Astrophysics Data System (ADS)
Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire
2015-09-01
We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. Firstly, a tool for process control is presented. It shows the overall assay variability but allows also the following of changes due to assay adjustments and can additionally highlight other, potentially unexpected variations. Secondly, we define the minimum discriminatory difference/ratio to support projects to understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3 month periods and followed over time for each assay. Again assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.
Liang, Shih-Hsiung; Walther, Bruno Andreas; Shieh, Bao-Sen
2017-01-01
Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies.
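A compact sketch in the spirit of the analysis: fit a gradient boosting classifier to predict establishment success from a few species/human-selectivity variables and score it with AUROC. The data are synthetic and the variable names are illustrative only, not the 22 variables compiled in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 300
invaded_countries = rng.poisson(3, n)              # previous invasion success elsewhere
clutch_size_max = rng.uniform(1, 12, n)            # reproduction-related trait (maximum value)
body_mass = rng.lognormal(4, 1, n)
p = 1 / (1 + np.exp(-(0.5 * invaded_countries + 0.2 * clutch_size_max - 4)))
established = rng.random(n) < p                    # synthetic establishment outcome

X = np.column_stack([invaded_countries, clutch_size_max, body_mass])
X_tr, X_te, y_tr, y_te = train_test_split(X, established, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("AUROC:", round(auc, 3))
print("feature importances:", np.round(model.feature_importances_, 3))
```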
Cavanagh, Sean E; Wallis, Joni D; Kennerley, Steven W; Hunt, Laurence T
2016-01-01
Correlates of value are routinely observed in the prefrontal cortex (PFC) during reward-guided decision making. In previous work (Hunt et al., 2015), we argued that PFC correlates of chosen value are a consequence of varying rates of a dynamical evidence accumulation process. Yet within PFC, there is substantial variability in chosen value correlates across individual neurons. Here we show that this variability is explained by neurons having different temporal receptive fields of integration, indexed by examining neuronal spike rate autocorrelation structure whilst at rest. We find that neurons with protracted resting temporal receptive fields exhibit stronger chosen value correlates during choice. Within orbitofrontal cortex, these neurons also sustain coding of chosen value from choice through the delivery of reward, providing a potential neural mechanism for maintaining predictions and updating stored values during learning. These findings reveal that within PFC, variability in temporal specialisation across neurons predicts involvement in specific decision-making computations. DOI: http://dx.doi.org/10.7554/eLife.18937.001 PMID:27705742
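A sketch of how a resting "temporal receptive field" of this kind is typically indexed: bin spike counts, compute their autocorrelation across lags, and fit an exponential decay to estimate an intrinsic timescale. The spike counts below are synthetic, and the fitting recipe is a generic illustration rather than the paper's exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
n_trials, n_bins, dt = 200, 20, 50           # 50 ms bins
tau_true = 150.0                             # ms, intrinsic timescale of the latent rate

# latent rate with exponential autocorrelation, then Poisson spike counts
lag_matrix = np.abs(np.subtract.outer(np.arange(n_bins), np.arange(n_bins))) * dt
cov = np.exp(-lag_matrix / tau_true)
latent = rng.multivariate_normal(np.zeros(n_bins), cov, size=n_trials)
counts = rng.poisson(np.exp(0.5 + 0.5 * latent))

# autocorrelation of counts across trials for each lag
lags = np.arange(1, n_bins)
def autocorr(k):
    a, b = counts[:, :-k].ravel(), counts[:, k:].ravel()
    return np.corrcoef(a, b)[0, 1]
ac = np.array([autocorr(k) for k in lags])

popt, _ = curve_fit(lambda t, a, tau: a * np.exp(-t / tau), lags * dt, ac, p0=[0.5, 100])
print("estimated intrinsic timescale:", round(popt[1], 1), "ms")
```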
A framework for monitoring social process and outcomes in environmental programs.
Chapman, Sarah
2014-12-01
When environmental programs frame their activities as being in the service of human wellbeing, social variables need to be integrated into monitoring and evaluation (M&E) frameworks. This article draws upon ecosystem services theory to develop a framework to guide the M&E of collaborative environmental programs with anticipated social benefits. The framework has six components: program need, program activities, pathway process variables, moderating process variables, outcomes, and program value. Needs are defined in terms of ecosystem services, as well as other human needs that must be addressed to achieve outcomes. The pathway variable relates to the development of natural resource governance capacity in the target community. Moderating processes can be externalities such as the inherent capacity of the natural system to service ecosystem needs, local demand for natural resources, policy or socio-economic drivers. Internal program-specific processes relate to program service delivery, targeting and participant responsiveness. Ecological outcomes are expressed in terms of changes in landscape structure and function, which in turn influence ecosystem service provision. Social benefits derived from the program are expressed in terms of the value of the eco-social service to user-specified goals. The article provides suggestions from the literature for identifying indicators and measures for components and component variables, and concludes with an example of how the framework was used to inform the M&E of an adaptive co-management program in western Kenya. Copyright © 2014 Elsevier Ltd. All rights reserved.
Variables affecting the quantitation of CD22 in neoplastic B cells.
Jasper, Gregory A; Arun, Indu; Venzon, David; Kreitman, Robert J; Wayne, Alan S; Yuan, Constance M; Marti, Gerald E; Stetler-Stevenson, Maryalice
2011-03-01
Quantitative flow cytometry (QFCM) is being applied in the clinical flow cytometry laboratory for diagnosis, prognosis, and assessment of patients receiving antibody-based therapy. ABC values and the effect of technical variables on CD22 quantitation in acute lymphoblastic leukemia (ALL), chronic lymphocytic leukemia (CLL), mantle cell lymphoma (MCL), follicular lymphoma (FCL), hairy cell leukemia (HCL) and normal B cells were studied. The QuantiBrite System® was used to determine the level of CD22 expression (mean antibody bound per cell, ABC) by malignant and normal B cells. The intra-assay variability, number of cells required for precision, effect of delayed processing as well as shipment of peripheral blood specimens (delayed processing and exposure to noncontrolled environments), and the effect of paraformaldehyde fixation on assay results were studied. The QuantiBRITE method of measuring CD22 ABC is precise (median CV 1.6%, 95% confidence interval, 1.2-2.3%) but a threshold of 250 malignant cells is required for reliable CD22 ABC values. Delayed processing and overnight shipment of specimens resulted in significantly different ABC values whereas fixation for up to 12 h had no significant effect. ABC measurements determined that CD22 expression is lower than normal in ALL, CLL, FCL, and MCL but higher than normal in HCL. CD22 expression was atypical in the hematolymphoid malignancies studied and may have diagnostic utility. Technical variables such as cell number analyzed and delayed processing or overnight shipment of specimens impact significantly on the measurement of antigen expression by QFCM in the clinical laboratory. Published 2010 Wiley-Liss, Inc.
A heuristic constraint programmed planner for deep space exploration problems
NASA Astrophysics Data System (ADS)
Jiang, Xiao; Xu, Rui; Cui, Pingyuan
2017-10-01
In recent years, the increasing numbers of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In the planning procedure, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods, one for variables and one for values, and on this basis propose a graphplan-like constraint-programmed planner. In the planner we convert the traditional constraint satisfaction problem to a time-tagged form with different levels. Inspired by the most-constrained-first principle in constraint satisfaction problems (CSPs), the variable heuristic is based on the number of unassigned variables in the constraint and the value heuristic is based on the completion degree of the support set. The simulation experiments show that the proposed planner is effective and that its performance is competitive with other kinds of planners.
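A minimal backtracking CSP solver with a "most constrained variable first" ordering (fewest currently consistent values), in the spirit of the heuristics discussed. This is a generic illustration on a toy scheduling problem, not the authors' time-tagged planner, and the value-ordering heuristic based on support sets is omitted.

```python
def consistent(var, val, assignment, constraints):
    """val for var violates no binary constraint with already-assigned variables."""
    for x, y, ok in constraints:
        if x == var and y in assignment and not ok(val, assignment[y]):
            return False
        if y == var and x in assignment and not ok(assignment[x], val):
            return False
    return True

def solve(domains, constraints, assignment=None):
    assignment = dict(assignment or {})
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    # variable heuristic: pick the variable with the fewest consistent values left
    var = min(unassigned,
              key=lambda v: sum(consistent(v, d, assignment, constraints) for d in domains[v]))
    for val in domains[var]:
        if consistent(var, val, assignment, constraints):
            result = solve(domains, constraints, {**assignment, var: val})
            if result is not None:
                return result
    return None

# toy problem: three observations, no two in the same time slot
doms = {"obs1": [1, 2], "obs2": [1, 2, 3], "obs3": [2, 3]}
cons = [(a, b, lambda x, y: x != y) for a in doms for b in doms if a < b]
print(solve(doms, cons))
```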
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.
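A small sketch of the weighted Tchebycheff scalarisation idea for a two-objective trade-off (processing time vs surface roughness) over a grid of candidate settings. The objective functions, ranges and weights are illustrative, not the paper's grinding model, and the lexicographic refinement used to guarantee Pareto optimality is omitted.

```python
import numpy as np

feed = np.linspace(0.5, 5.0, 200)                  # candidate feed rates (mm/s), assumed range
time_obj = 100.0 / feed                            # faster feed -> shorter processing time
rough_obj = 0.2 + 0.15 * feed                      # faster feed -> worse surface finish

# normalise each objective to [0, 1] and take the weighted Tchebycheff distance to the ideal point
objs = np.column_stack([time_obj, rough_obj])
norm = (objs - objs.min(axis=0)) / (objs.max(axis=0) - objs.min(axis=0))
weights = np.array([0.5, 0.5])
tcheby = np.max(weights * norm, axis=1)

best = np.argmin(tcheby)
print(f"chosen feed rate: {feed[best]:.2f} mm/s, "
      f"time objective: {time_obj[best]:.1f}, roughness: {rough_obj[best]:.2f}")
```

Varying the weight vector and repeating the minimisation traces out different Pareto-optimal trade-offs, which is how a decision maker can explore the conflicting objectives.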
On the Design of a Fuzzy Logic-Based Control System for Freeze-Drying Processes.
Fissore, Davide
2016-12-01
This article is focused on the design of a fuzzy logic-based control system to optimize a drug freeze-drying process. The goal of the system is to keep product temperature as close as possible to the threshold value of the formulation being processed, without trespassing it, in such a way that product quality is not jeopardized and the sublimation flux is maximized. The method involves the measurement of product temperature and a set of rules that have been obtained through process simulation with the goal to obtain a unique set of rules for products with very different characteristics. Input variables are the difference between the temperature of the product and the threshold value, the difference between the temperature of the heating fluid and that of the product, and the rate of change of product temperature. The output variables are the variation of the temperature of the heating fluid and the pressure in the drying chamber. The effect of the starting value of the input variables and of the control interval has been investigated, thus resulting in the optimal configuration of the control system. Experimental investigation carried out in a pilot-scale freeze-dryer has been carried out to validate the proposed system. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
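A compact sketch of the rule style described: fuzzify the margin between product temperature and its threshold with triangular membership functions, apply simple rules, and defuzzify into a change of heating-fluid temperature. The membership functions and rules below are illustrative assumptions, not the article's tuned rule base, and the chamber-pressure output is omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fluid_temp_change(margin_degC):
    """margin = threshold temperature minus product temperature (positive = safe)."""
    small = tri(margin_degC, -1.0, 0.5, 2.0)       # product close to the limit
    medium = tri(margin_degC, 1.0, 3.0, 5.0)
    large = tri(margin_degC, 4.0, 8.0, 12.0)       # plenty of margin, can heat more
    # rules: small margin -> cool a little; medium -> hold; large -> heat
    actions = {-2.0: small, 0.0: medium, +3.0: large}
    total = sum(actions.values())
    return sum(a * w for a, w in actions.items()) / total if total else 0.0

for margin in (0.5, 3.0, 8.0):
    print(f"margin {margin:4.1f} degC -> change fluid temperature by "
          f"{fluid_temp_change(margin):+.1f} degC")
```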
Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo
2014-05-19
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
NASA Astrophysics Data System (ADS)
Chung, T. W.; Chen, C. K.; Hsu, S. H.
2017-11-01
Protein concentration using filter membranes has a significant energy-saving advantage over traditional drying processes. However, fouling over a large membrane area and frequent membrane cleaning increase the energy consumption and operating cost of membrane-based protein concentration. In this study, membrane filtration for protein concentration was conducted and compared with recent protein concentration technology, and the operating factors of the membrane-based process were analysed. The separation mechanism of membrane filtration follows from the size difference between the membrane pores and the particles of the filtered material. Darcy's law was applied to discuss the interaction among flux, transmembrane pressure (TMP) and resistance. The effects of membrane pore size, pH value and TMP on the steady-state flux (Jst) and protein rejection (R) were studied. It is observed that Jst increases with decreasing membrane pore size, Jst increases with increasing TMP, and R increases with decreasing solution pH. Compared to the other variables, the pH value is the most significant variable for separation between protein and water.
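A worked example of the resistance-in-series form of Darcy's law commonly used to relate flux, TMP and resistance, J = TMP / (mu * (Rm + Rc)). All numbers below are illustrative assumptions, not measurements from the study.

```python
mu = 1.0e-3          # Pa.s, viscosity of the protein solution (assumed water-like)
R_membrane = 2.0e12  # 1/m, clean-membrane resistance (assumed)
R_cake = 3.0e12      # 1/m, fouling/cake resistance (assumed)
tmp = 1.0e5          # Pa, about 1 bar transmembrane pressure

flux = tmp / (mu * (R_membrane + R_cake))        # m^3 per m^2 per s
print(f"permeate flux: {flux * 3.6e6:.1f} L m^-2 h^-1")
```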
2013-01-01
Background Vaccine protection investigation includes three processes: vaccination, pathogen challenge, and vaccine protection efficacy assessment. Many variables can affect the results of vaccine protection. Brucella, a genus of facultative intracellular bacteria, is the etiologic agent of brucellosis in humans and multiple animal species. Extensive research has been conducted in developing effective live attenuated Brucella vaccines. We hypothesized that some variables play a more important role than others in determining vaccine protective efficacy. Using Brucella vaccines and vaccine candidates as study models, this hypothesis was tested by meta-analysis of Brucella vaccine studies reported in the literature. Results Nineteen variables related to vaccine-induced protection of mice against infection with virulent brucellae were selected based on modeling investigation of the vaccine protection processes. The variable "vaccine protection efficacy" was set as a dependent variable while the other eighteen were set as independent variables. Discrete or continuous values were collected from papers for each variable of each data set. In total, 401 experimental groups were manually annotated from 74 peer-reviewed publications containing mouse protection data for live attenuated Brucella vaccines or vaccine candidates. Our ANOVA analysis indicated that nine variables contributed significantly (P-value < 0.05) to Brucella vaccine protection efficacy: vaccine strain, vaccination host (mouse) strain, vaccination dose, vaccination route, challenge pathogen strain, challenge route, challenge-killing interval, colony forming units (CFUs) in mouse spleen, and CFU reduction compared to control group. The other 10 variables (e.g., mouse age, vaccination-challenge interval, and challenge dose) were not found to be statistically significant (P-value > 0.05). The protection level of RB51 was sacrificed when the values of several variables (e.g., vaccination route, vaccine viability, and challenge pathogen strain) changed. The results suggest that it is difficult to protect against aerosol challenge. Somewhat counter-intuitively, our results indicate that intraperitoneal and subcutaneous vaccinations are much more effective than intranasal vaccination at protecting against aerosol Brucella challenge. Conclusions Literature meta-analysis identified variables that significantly contribute to Brucella vaccine protection efficacy. The results obtained provide critical information for rational vaccine study design. Literature meta-analysis is generic and can be applied to analyze variables critical for vaccine protection against other infectious diseases. PMID:23735014
Fisgativa, Henry; Tremier, Anne; Dabert, Patrick
2016-04-01
In order to determine the variability of food waste (FW) characteristics and the influence of these variable values on the anaerobic digestion (AD) process, FW characteristics from 70 papers were compiled and analysed statistically. Results indicated that FW characteristic values are indeed highly variable and that 24% of these variations may be explained by the geographical origin, the type of collection source and the season of the collection. Considering the whole range of values for physicochemical characteristics (especially volatile solids (VS), chemical oxygen demand (COD) and biomethane potential (BMP)), FW show good potential for AD treatment. However, the high carbohydrate content (36.4%VS) and the low pH (5.1) might cause inhibition through rapid acidification of the digesters. As regards the variation of FW characteristics, FW categories were proposed, and the suitability of FW characteristics for AD treatment was discussed. Four FW categories were identified with critical characteristic values for AD performance: (1) the high dry matter (DM) and total ammonia nitrogen (TAN) content of FW collected with green waste, (2) the high cellulose (CEL) content of FW from the organic fraction of municipal solid waste, (3) the low carbon-to-nitrogen (C/N) ratio of FW collected during summer, (4) the high value of TAN and Na of FW from Asia. For these cases, an aerobic pre-treatment or a corrective treatment is advisable to avoid instabilities during digestion. Finally, the results of this review provide a database of values for FW characteristics that could be used for AD process design and environmental assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Viscoplasticity: A thermodynamic formulation
NASA Technical Reports Server (NTRS)
Freed, A. D.; Chaboche, J. L.
1989-01-01
A thermodynamic foundation using the concept of internal state variables is given for a general theory of viscoplasticity, as it applies to initially isotropic materials. Three fundamental internal state variables are admitted. They are: a tensor valued back stress for kinematic effects, and the scalar valued drag and yield strengths for isotropic effects. All three are considered to phenomenologically evolve according to competitive processes between strain hardening, strain induced dynamic recovery, and time induced static recovery. Within this phenomenological framework, a thermodynamically admissible set of evolution equations is put forth. This theory allows each of the three fundamental internal variables to be composed as a sum of independently evolving constituents.
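To make the "competitive processes" concrete, a generic evolution form for a tensor-valued back stress X is shown below, illustrating the competition between strain hardening, strain-induced dynamic recovery and time-induced static recovery. The coefficients h, r_d, r_s and exponent m are schematic; the paper's own evolution equations may differ in detail.

```latex
% hardening           dynamic recovery                        static recovery
\dot{X} \;=\; h\,\dot{\varepsilon}^{p}
        \;-\; r_d\,\lVert \dot{\varepsilon}^{p} \rVert\, X
        \;-\; r_s\,\lVert X \rVert^{\,m-1} X
```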
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One well-known and widely applied clustering method is K-Means clustering. In its application, the choice of the initial cluster centers greatly affects the results of the K-Means algorithm. This research compares the results of K-Means clustering with starting centroids determined randomly and with a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroids give an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroids determined by the KD-Tree method give an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with KD-Tree initial centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
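A sketch of the comparison made in the abstract: K-Means SSE (inertia) with purely random initial centroids versus a smarter deterministic seeding. The paper seeds centroids with a KD-Tree method; k-means++ is used here only as a stand-in for "non-random seeding", and the two-feature student-like data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
gpa = np.concatenate([rng.normal(3.4, 0.2, 500), rng.normal(2.1, 0.3, 500)])
quality = np.concatenate([rng.normal(80, 5, 500), rng.normal(55, 8, 500)])
X = np.column_stack([gpa, quality])

random_init = KMeans(n_clusters=2, init="random", n_init=1, random_state=0).fit(X)
seeded_init = KMeans(n_clusters=2, init="k-means++", n_init=1, random_state=0).fit(X)
print("SSE, random init:", round(random_init.inertia_, 1))
print("SSE, seeded init:", round(seeded_init.inertia_, 1))
```

On well-separated synthetic data both initialisations often converge to the same optimum; the difference the paper reports shows up on harder, higher-dimensional data such as the academic records used there.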
Landau, Sabine; Emsley, Richard; Dunn, Graham
2018-06-01
Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: The first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables. The third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. Estimates of causal mediation effects derived by approach (A) will be biased if one of the three processes involving baseline measures of intermediate or clinical outcomes is operating. Necessary assumptions for the change score approach (B) to provide unbiased estimates under either process include the independence of baseline measures and change scores of the intermediate variable. Finally, estimates provided by the analysis of covariance approach (C) were found to be unbiased under all the three processes considered here. When applied to the example, there was evidence of mediation under all methods but the estimate of the indirect effect depended on the approach used with the proportion mediated varying from 57% to 86%. Trialists planning mediation analyses should measure baseline values of putative mediators as well as of continuous clinical outcomes. An analysis of covariance approach is recommended to avoid potential biases due to confounding processes involving baseline measures of intermediate or clinical outcomes, and not simply for increased precision.
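A minimal sketch of the ANCOVA-style mediation analysis recommended above: regress the post-treatment mediator and outcome on treatment while conditioning on baseline values of both variables, then combine the a and b paths into an indirect effect. The trial data below are synthetic and the effect sizes arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500
treat = rng.integers(0, 2, n)                            # randomised allocation
m0 = rng.normal(0, 1, n)                                 # baseline mediator
y0 = 0.5 * m0 + rng.normal(0, 1, n)                      # baseline outcome, correlated with m0
m1 = 0.6 * m0 + 0.8 * treat + rng.normal(0, 1, n)        # post-treatment mediator
y1 = 0.5 * y0 + 0.7 * m1 + 0.3 * treat + rng.normal(0, 1, n)  # post-treatment outcome

def ols(y, *cols):
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(m1, treat, m0, y0)[1]                 # treatment -> mediator path, baseline-adjusted
b = ols(y1, m1, treat, m0, y0)[1]             # mediator -> outcome path, baseline-adjusted
direct = ols(y1, m1, treat, m0, y0)[2]        # non-mediated (direct) treatment effect
print("indirect effect a*b:", round(a * b, 3), " direct effect:", round(direct, 3))
```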
Gabriel, Florence C.; Szücs, Dénes
2014-01-01
Recent studies have indicated that people have a strong tendency to compare fractions based on constituent numerators or denominators. This is called componential processing. This study explored whether componential processing was preferred in tasks involving high stimuli variability and high contextual interference, when fractions could be compared based either on the holistic values of fractions or on their denominators. Here, stimuli variability referred to the fact that fractions were not monotonous but diversiform. Contextual interference referred to the fact that the processing of fractions was subject to interference from other stimuli. To this end, three tasks were used. In Task 1, participants compared a standard fraction 1/5 to unit fractions. This task was used as a low stimuli variability and low contextual interference task. In Task 2, stimuli variability was increased by mixing unit and non-unit fractions. In Task 3, high contextual interference was created by incorporating decimals into fractions. The RT results showed that the processing patterns of fractions were very similar for adults and children. In Task 1 and Task 3, only componential processing was utilized. In contrast, both holistic processing and componential processing were utilized in Task 2. These results suggest that, if individuals are presented with the opportunity to perform componential processing, both adults and children will tend to do so, even if they are faced with high variability of fractions or high contextual interference. PMID:25249995
Zhang, Li; Fang, Qiaochu; Gabriel, Florence C; Szücs, Dénes
2014-01-01
Recent studies have indicated that people have a strong tendency to compare fractions based on constituent numerators or denominators. This is called componential processing. This study explored whether componential processing was preferred in tasks involving high stimuli variability and high contextual interference, when fractions could be compared based either on the holistic values of fractions or on their denominators. Here, stimuli variability referred to the fact that fractions were not monotonous but diversiform. Contextual interference referred to the fact that the processing of fractions was subject to interference from other stimuli. To this end, three tasks were used. In Task 1, participants compared a standard fraction 1/5 to unit fractions. This task was used as a low stimuli variability and low contextual interference task. In Task 2, stimuli variability was increased by mixing unit and non-unit fractions. In Task 3, high contextual interference was created by incorporating decimals into fractions. The RT results showed that the processing patterns of fractions were very similar for adults and children. In Task 1 and Task 3, only componential processing was utilized. In contrast, both holistic processing and componential processing were utilized in Task 2. These results suggest that, if individuals are presented with the opportunity to perform componential processing, both adults and children will tend to do so, even if they are faced with high variability of fractions or high contextual interference.
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining an ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it simultaneously estimates the model states and parameters by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance that results in filter divergence by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it recursively assimilates data into the model and thus detects possible time variation of parameters; and (4) it properly addresses various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
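A minimal sketch of joint state-parameter estimation with an ensemble Kalman filter and kernel shrinkage of the parameter ensemble, using a toy AR(1) model in place of the partition eddy flux model; all noise levels and the smoothing factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
true_theta, q, r = 0.8, 0.05, 0.1          # true parameter, process and obs noise std
T, N = 200, 100                            # time steps, ensemble size

# synthetic truth and observations
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = true_theta * x_true[t - 1] + rng.normal(0, q)
y_obs = x_true + rng.normal(0, r, T)

# augmented ensemble: state x and parameter theta
x_ens = rng.normal(0, 1, N)
th_ens = rng.uniform(0.2, 1.2, N)
a = 0.98                                   # kernel smoothing (shrinkage) factor

for t in range(1, T):
    # kernel smoothing of parameters: shrink towards the mean, add small jitter
    th_mean, th_var = th_ens.mean(), th_ens.var()
    th_ens = a * th_ens + (1 - a) * th_mean + rng.normal(0, np.sqrt((1 - a**2) * th_var), N)
    # forecast step
    x_ens = th_ens * x_ens + rng.normal(0, q, N)
    # analysis step (scalar observation y = x + noise)
    hx = x_ens
    var_h = hx.var(ddof=1) + r**2
    k_x = np.cov(x_ens, hx)[0, 1] / var_h
    k_th = np.cov(th_ens, hx)[0, 1] / var_h
    innov = y_obs[t] + rng.normal(0, r, N) - hx   # perturbed observations
    x_ens += k_x * innov
    th_ens += k_th * innov

print("estimated theta:", th_ens.mean(), "(true:", true_theta, ")")
```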
The added value of time-variable microgravimetry to the understanding of how volcanoes work
Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel
2017-01-01
During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castelluccio, Gustavo M.; McDowell, David L.
The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary to both achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Furthermore, volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.
Castelluccio, Gustavo M.; McDowell, David L.
2015-05-22
The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary to both achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Furthermore, volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
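A minimal Monte Carlo sketch of propagating uncertainty through a value-transfer chain; the two-step transfer, replicate counts and per-step CVs are hypothetical, with only the 3.7% reference-material uncertainty taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000
ref_value, ref_cv = 100.0, 0.037            # reference material value and relative uncertainty
cv_master, cv_product = 0.010, 0.008        # per-replicate CVs of the two transfer steps
n_rep_master, n_rep_product = 6, 10         # replicate measurements at each step

ref = rng.normal(ref_value, ref_cv * ref_value, n_sim)
# step 1: assign the master calibrator from replicate measurements against the reference
master = ref * rng.normal(1.0, cv_master / np.sqrt(n_rep_master), n_sim)
# step 2: assign the product calibrator from replicate measurements against the master
product = master * rng.normal(1.0, cv_product / np.sqrt(n_rep_product), n_sim)

total_cv = product.std() / product.mean()
added_cv = np.sqrt(max(total_cv**2 - ref_cv**2, 0.0))
print(f"total CV {100*total_cv:.2f}%, added by value transfer {100*added_cv:.2f}%")
```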
ERIC Educational Resources Information Center
Sanchez, Erin N.; Aujla, Imogen J.; Nordin-Bates, Sanna
2013-01-01
This study is a qualitative enquiry into cultural background variables--social support, values, race/ethnicity and economic means--in the process of dance talent development. Seven urban dance students in pre-vocational training, aged 15-19, participated in semi-structured interviews. Interviews were inductively analysed using QSR International…
Lucarini, Valerio; Fraedrich, Klaus
2009-08-01
Starting from the classical Saltzman two-dimensional convection equations, we derive via a severe spectral truncation a minimal 10 ODE system which includes the thermal effect of viscous dissipation. Neglecting this process leads to a dynamical system which includes a decoupled generalized Lorenz system. The consideration of this process breaks an important symmetry and couples the dynamics of fast and slow variables, with the ensuing modifications to the structural properties of the attractor and of the spectral features. When the relevant nondimensional number (Eckert number Ec) is different from zero, an additional time scale of O(Ec^-1) is introduced in the system, as shown with standard multiscale analysis and made clear by extensive numerical evidence. Moreover, the system is ergodic and hyperbolic, the slow variables feature long-term memory with 1/f^(3/2) power spectra, and the fast variables feature amplitude modulation. Increasing the strength of the thermal-viscous feedback has a stabilizing effect, as both the metric entropy and the Kaplan-Yorke attractor dimension decrease monotonically with Ec. The analyzed system features very rich dynamics: it overcomes some of the limitations of the Lorenz system and might have prototypical value in relevant processes in complex systems dynamics, such as the interaction between slow and fast variables, the presence of long-term memory, and the associated extreme value statistics. This analysis shows how neglecting the coupling of slow and fast variables only on the basis of scale analysis can be catastrophic. In fact, this leads to spurious invariances that affect essential dynamical properties (ergodicity, hyperbolicity) and that cause the model to lose its ability to describe intrinsically multiscale processes.
Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo
2014-01-01
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055
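A rough sketch of the virtual-sensor idea, assuming synthetic data and a feed-forward network on lagged spindle-power samples as a simple stand-in for the Layer-Recurrent architecture used in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, lags = 2000, 5
power = np.cumsum(rng.normal(0, 0.1, n)) + 10             # synthetic spindle power signal
roughness = 0.2 + 0.05 * power + rng.normal(0, 0.05, n)   # hypothetical Ra relation

# build a lagged-feature matrix: each sample uses the last `lags` power readings,
# the target is the roughness following the window
X = np.column_stack([power[i:n - lags + i] for i in range(lags)])
y = roughness[lags:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
abs_err = np.abs(model.predict(X_te) - y_te)
print("mean absolute Ra error:", abs_err.mean())
```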
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier España. All rights reserved.
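A minimal sketch of the X-bar control-chart monitoring described above, assuming synthetic set-up errors and subgroups of three patients; the constant A2 = 1.023 is the standard tabulated value for subgroup size 3:

```python
import numpy as np

rng = np.random.default_rng(5)
baseline = rng.normal(0.0, 2.0, (30, 3))        # 30 subgroups of 3 set-up errors (mm)
xbar = baseline.mean(axis=1).mean()             # grand mean
rbar = np.ptp(baseline, axis=1).mean()          # mean subgroup range
A2 = 1.023                                      # tabulated constant for subgroup size 3
ucl, lcl = xbar + A2 * rbar, xbar - A2 * rbar
print(f"centre {xbar:.2f} mm, control limits [{lcl:.2f}, {ucl:.2f}] mm")

new_subgroup = np.array([1.5, 2.8, 3.9])        # three patients from one shift
mean_new = new_subgroup.mean()
if not lcl <= mean_new <= ucl:
    print("out of control: stop, find the assignable cause before continuing")
else:
    print("subgroup mean within control limits:", round(mean_new, 2))
```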
NASA Technical Reports Server (NTRS)
Hough, R. L.; Richmond, R. D.
1971-01-01
Research was conducted to develop large diameter carbon monofilament, containing 25 to 35 mole % elemental boron, in the 2.0 to 10.0 mil diameter range using the chemical vapor deposition process. The objective of the program was to gain an understanding of the critical process variables and their effect on fiber properties. Synthesis equipment was modified to allow these variables to be studied. Improved control of synthesis variables permitted a reduction in the scatter of the properties of the monofilaments. Monofilaments have been synthesized in the 3.0 to nearly 6.0 mil diameter range having measured values up to 552,000 psi for ultimate tensile strength and up to 30 million psi for elastic modulus.
Rebuilding DEMATEL threshold value: an example of a food and beverage information system.
Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin
2016-01-01
This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. Models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among the variables, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships of variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
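A minimal sketch of the DEMATEL computation with a hypothetical 4-variable direct-relation matrix, taking the threshold as the mean of the total-relation matrix (one common choice; the paper's own threshold rule may differ):

```python
import numpy as np

# direct-relation matrix D (expert ratings 0-4 of how much row i influences column j)
D = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# normalise by the largest row sum (a common normalisation),
# then total relation T = N (I - N)^-1
N = D / D.sum(axis=1).max()
T = N @ np.linalg.inv(np.eye(len(D)) - N)

threshold = T.mean()                       # relationships above this are kept
keep = T > threshold
print("total-relation matrix:\n", np.round(T, 2))
print("threshold:", round(threshold, 2))
print("retained influence links (row -> column):\n", keep.astype(int))
```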
NASA Astrophysics Data System (ADS)
Duarte Queirós, Sílvio M.
2012-07-01
We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is obtained. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
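A minimal sketch of the q-deformed Kapteyn multiplicative process under the stated assumptions (Borges' q-product x ⊗_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) for positive arguments, with a cutoff at zero, and i.i.d. positive random factors close to one):

```python
import numpy as np

def q_product(x, y, q):
    if np.isclose(q, 1.0):
        return x * y                        # ordinary product recovered at q = 1
    base = x**(1 - q) + y**(1 - q) - 1.0
    base = np.maximum(base, 1e-12)          # cutoff (the formal definition clips at zero)
    return base**(1.0 / (1 - q))

def kapteyn_q(n_steps, n_samples, q, rng):
    """Iterate x <- x (q-product) f with positive random factors f; return final values."""
    x = np.ones(n_samples)
    for _ in range(n_steps):
        f = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)
        x = q_product(x, f, q)
    return x

rng = np.random.default_rng(11)
for q in (0.8, 1.0, 1.2):
    x = kapteyn_q(100, 50_000, q, rng)
    print(f"q={q}: median {np.median(x):.3f}, 99th percentile {np.percentile(x, 99):.3f}")
```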
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
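A minimal sketch of the estimation-inspection idea, assuming synthetic traffic and a multinomial logistic regression that learns the probability of a written value given the variable's current value; the threshold and data model are illustrative, not the paper's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(13)

# estimation: learn write probabilities from normal traffic for one memory variable
current = rng.integers(0, 5, 5000)                               # setpoint levels 0..4
written = np.clip(current + rng.choice([-1, 0, 1], 5000, p=[0.2, 0.6, 0.2]), 0, 4)
model = LogisticRegression(max_iter=1000).fit(current.reshape(-1, 1), written)

# inspection: score the value a new packet is about to write
def normalcy(current_value, new_value, threshold=0.05):
    probs = model.predict_proba([[current_value]])[0]
    p = probs[list(model.classes_).index(new_value)]
    return p, ("normal" if p >= threshold else "ANOMALY")

print(normalcy(2, 3))   # small step: expected to be normal
print(normalcy(0, 4))   # large jump: expected to be flagged
```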
Results from the VALUE perfect predictor experiment: process-based evaluation
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit
2016-04-01
Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias corrected regional climate models. In general, a good performance of a model for these aspects in present climate does therefore not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reason. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.
NASA Astrophysics Data System (ADS)
Sutton, A.; Sabine, C. L.; Feely, R. A.
2016-02-01
One of the major challenges to assessing the impact of ocean acidification on marine life is the need to better understand the magnitude of long-term change in the context of natural variability. High-frequency moored observations can be highly effective in defining interannual, seasonal, and subseasonal variability at key locations. Here we present monthly aragonite saturation state (Ωaragonite) climatology for 15 open ocean, coastal, and coral reef locations using 3-hourly moored observations of surface seawater pCO2 and pH collected together since as early as 2009. We then use these present day surface mooring observations to estimate pre-industrial variability at each location and compare these results to previous modeling studies addressing global-scale variability and change. Our observations suggest that open ocean sites, especially in the subtropics, are experiencing Ωaragonite values that are outside the range of pre-industrial values throughout much of the year. In coastal and coral reef ecosystems, which have higher natural variability, seasonal patterns in which present day Ωaragonite values exceed pre-industrial bounds are emerging, with some sites exhibiting subseasonal conditions approaching Ωaragonite = 1. Linking these seasonal patterns in carbonate chemistry to biological processes in these regions is critical to identify when and where marine life may encounter Ωaragonite values outside the conditions to which they have adapted.
Chameleon Effect, the Range of Values Hypothesis and Reproducing the EPR-Bohm Correlations
NASA Astrophysics Data System (ADS)
Accardi, Luigi; Khrennikov, Andrei
2007-02-01
We present a detailed analysis of assumptions that J. Bell used to show that local realism contradicts QM. We find that Bell's viewpoint on realism is nonphysical, because it implicitly assumes that observed physical variables coincide with ontic variables (i.e., these variables before measurement). The real physical process of measurement is a process of dynamical interaction between a system and a measurement device. Therefore, one should check the adequacy of QM not to "Bell's realism," but to adaptive realism (chameleon realism). Dropping Bell's assumption, we are able to construct a natural representation of the EPR-Bohm correlations in the local (adaptive) realistic approach.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Sentence comprehension in agrammatic aphasia: history and variability to clinical implications.
Johnson, Danielle; Cannizzaro, Michael S
2009-01-01
Individuals with Broca's aphasia often present with deficits in their ability to comprehend non-canonical sentences. This has been contrastingly characterized as a systematic loss of specific grammatical abilities or as individual variability in the dynamics between processing load and resource availability. The present study investigated sentence level comprehension in participants with Broca's aphasia in an attempt to integrate these contrasting views into a clinically useful process. Two participants diagnosed with Broca's aphasia were assessed using a sentence-to-picture matching paradigm and a truth-value judgement task, across sentence constructions thought to be problematic for this population. The data demonstrate markedly different patterns of performance between participants, as well as variability within participants (e.g. by sentence type). These findings support the notion of individual performance variability in persons with aphasia. Syntactic theory was instructive for assessing sentence level comprehension, leading to a clinically relevant process of identifying treatment targets considering both performance variability and syntactic complexity for this population.
Modeling of laser transmission contour welding process using FEA and DoE
NASA Astrophysics Data System (ADS)
Acherjee, Bappa; Kuar, Arunanshu S.; Mitra, Souren; Misra, Dipten
2012-07-01
In this research, a systematic investigation on laser transmission contour welding process is carried out using finite element analysis (FEA) and design of experiments (DoE) techniques. First of all, a three-dimensional thermal model is developed to simulate the laser transmission contour welding process with a moving heat source. The commercial finite element code ANSYS® multi-physics is used to obtain the numerical results by implementing a volumetric Gaussian heat source, and combined convection-radiation boundary conditions. Design of experiments together with regression analysis is then employed to plan the experiments and to develop mathematical models based on simulation results. Four key process parameters, namely power, welding speed, beam diameter, and carbon black content in absorbing polymer, are considered as independent variables, while maximum temperature at weld interface, weld width, and weld depths in transparent and absorbing polymers are considered as dependent variables. Sensitivity analysis is performed to determine how different values of an independent variable affect a particular dependent variable.
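A minimal sketch of fitting a second-order regression model to DoE results, assuming synthetic data for two of the four factors (power and welding speed) with weld width as the response; the real study fits all four factors:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(17)
P = rng.uniform(10, 30, 40)                    # laser power (W), hypothetical range
v = rng.uniform(5, 20, 40)                     # welding speed (mm/s), hypothetical range
width = 0.4 + 0.06 * P - 0.03 * v + 0.001 * P * v - 0.0008 * P**2 + rng.normal(0, 0.02, 40)

X = np.column_stack([P, v])
X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
model = LinearRegression().fit(X_quad, width)
print("R^2:", round(model.score(X_quad, width), 3))
print("coefficients (P, v, P^2, P*v, v^2):", np.round(model.coef_, 4))
```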
Fuzzy Rule Suram for Wood Drying
NASA Astrophysics Data System (ADS)
Situmorang, Zakarias
2017-12-01
Implementation of a fuzzy rule base requires a look-up table for the defuzzification step; the look-up table drives the actuator plant with the defuzzified values. The suram rule, based on fuzzy logic with the weather variables ambient temperature and ambient humidity, is implemented for the wood drying process. The membership functions of the state variables are represented by the error value and the change of error, using typical triangular and trapezoidal maps. The analysis yields four fuzzy rules covering 81 conditions to control the output system, which can be constructed for a range of weather and air conditions. The rules are used to minimize the electric energy consumed by the heater. One drying schedule cycle is a series of chamber conditions appropriate to the wood species being processed.
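A minimal sketch of the triangular and trapezoidal membership functions mentioned above, with illustrative break points (the paper's actual membership parameters are not given in the abstract):

```python
import numpy as np

def triangle(x, a, b, c):
    """Triangular membership: rises from a to the peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

error = np.linspace(-10, 10, 5)                  # temperature error (deg C)
print("error 'zero' (triangle):      ", np.round(triangle(error, -5, 0, 5), 2))
print("error 'large positive' (trap):", np.round(trapezoid(error, 2, 5, 10, 12), 2))
```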
Hot mill process parameters impacting on hot mill tertiary scale formation
NASA Astrophysics Data System (ADS)
Kennedy, Jonathan Ian
For high-end steel applications, surface quality is paramount to deliver a suitable product. A major cause of surface quality issues is the formation of tertiary scale. The scale formation depends on numerous factors such as thermo-mechanical processing routes, chemical composition, thickness and rolls used. This thesis utilises a collection of data mining techniques to better understand the influence of Hot Mill process parameters on scale formation at Port Talbot Hot Strip Mill in South Wales. The dataset to which these data mining techniques were applied was carefully chosen to reduce process variation. There are several main factors that were considered to minimise this variability, including the time period, grade and gauge investigated. The following data mining techniques were chosen to investigate this dataset: Partial Least Squares (PLS); Logit Analysis; Principal Component Analysis (PCA); Multinomial Logistic Regression (MLR); Adaptive Neuro-Fuzzy Inference Systems (ANFIS). The analysis indicated that the most significant variable for scale formation is the temperature entering the finishing mill. If the temperature is controlled on entering the finishing mill, scale will not be formed. Values greater than 1070 °C for the average Roughing Mill and above 1050 °C for the average Crop Shear temperature are considered high, with values greater than this increasing the chance of scale formation. As the temperature increases, more scale suppression measures are required to limit scale formation, with high temperatures more likely to generate a greater amount of scale even with fully functional scale suppression systems in place. Chemistry is also a significant factor in scale formation, with Phosphorus being the most significant of the chemistry variables. It is recommended that the chemistry specification for Phosphorus be limited to a maximum value of 0.015 % rather than 0.020 % to limit scale formation. Slabs with higher values should be treated with particular care when being processed through the Hot Mill to limit scale formation.
Kim, Matthew H.; Marulis, Loren M.; Grammer, Jennie K.; Morrison, Frederick J.; Gehring, William J.
2016-01-01
Motivational beliefs and values influence how children approach challenging activities. The present study explores motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two ERP components, the error-related negativity (ERN) and error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 four- to six-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, while stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. PMID:27898304
Aromatic hydrocarbons from the Middle Jurassic fossil wood of the Polish Jura
NASA Astrophysics Data System (ADS)
Smolarek, Justyna; Marynowski, Leszek
2013-09-01
Aromatic hydrocarbons are present in the fossil wood samples in relatively small amounts. In almost all of the tested samples, the dominant aromatic hydrocarbons are perylene and its methyl and dimethyl derivatives. The most important biomarkers present in the aromatic fraction are dehydroabietane, simonellite and retene, compounds characteristic of conifers. The distribution of the discussed compounds is highly variable due to early diagenetic processes affecting the wood, such as oxidation and the activity of microorganisms. MPI1 (methylphenanthrene index) parameter values for the majority of the samples are in the range of 0.1 to 0.5, which results in the highly variable values of Rc (converted value of vitrinite reflectance) ranging from 0.45 to 0.70%. Such values suggest that the MPI1 parameter is not useful as a maturity parameter in the case of Middle Jurassic ore-bearing clays, even if measured strictly on terrestrial organic matter (OM). As a result of weathering processes (oxidation), the distribution of aromatic hydrocarbons changes. In the oxidized samples, the amounts of both polycyclic aromatic hydrocarbons and aromatic biomarkers decrease.
1,500 Year Periodicity in Central Texas Moisture Source Variability Reconstructed from Speleothems
NASA Astrophysics Data System (ADS)
Wong, C. I.; James, E. W.; Silver, M. M.; Banner, J. L.; Musgrove, M.
2014-12-01
Delineating the climate processes governing precipitation variability in drought-prone Texas is critical for predicting and mitigating climate change effects, and requires the reconstruction of past climate beyond the instrumental record. Presently, there are few high-resolution Holocene climate records for this region, which limits the assessment of precipitation variability during a relatively stable climatic interval that comprises the closest analogue to the modern climate state. To address this, we present speleothem growth rate and δ18O records from two central Texas caves that span the mid to late Holocene, and assess hypotheses about the climate processes that can account for similarity in the timing and periodicity of variability with other regional and global records. A key finding is the independent variation of speleothem growth rate and δ18O values, suggesting the decoupling of moisture amount and source. This decoupling likely occurs because i) the often direct relation between speleothem growth rate and moisture availability is complicated by changes in the overlying ecosystem that affect subsurface CO2 production, and ii) speleothem δ18O variations reflect changes in moisture source (i.e., proportion of Pacific- vs. Gulf of Mexico-derived moisture) that appear not to be linked to moisture amount. Furthermore, we document a 1,500-year periodicity in δ18O values that is consistent with variability in the percent of hematite-stained grains in North Atlantic sediments, North Pacific SSTs, and El Niño events preserved in an Ecuadorian lake. Previous modeling experiments and analysis of observational data delineate the coupled atmosphere-ocean processes that can account for the coincidence of such variability in climate archives across the northern hemisphere. Reduction of the thermohaline circulation results in North Atlantic cooling, which translates to cooler North Pacific SSTs. The resulting reduction of the meridional SST gradient in the Pacific weakens the air-sea coupling that modulates ENSO activity, resulting in faster growth of interannual anomalies and larger mature El Niño relative to La Niña events. The asymmetrically enhanced ENSO variability can account for a greater portion of Pacific-derived moisture reflected by speleothem δ18O values.
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1990-01-01
How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspect of a graph are important in the cognitive representation of a graph. However, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. In order to answer simple questions (e.g., determining the value of a variable, comparing several variables, and determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy in which they use the spatial characteristics of the graph to make comparisons and estimations. The user's choice of strategies depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.
NASA Astrophysics Data System (ADS)
Chouaib, Wafa; Caldwell, Peter V.; Alila, Younes
2018-04-01
This paper advances the physical understanding of the regional variation of the flow duration curve (FDC). It provides a process-based analysis of the interaction between climate and landscape properties to explain disparities in FDC shapes. We used (i) long-term measured flow and precipitation data from 73 catchments in the eastern US and (ii) the Sacramento model (SAC-SMA), calibrated to simulate soil moisture and flow component FDCs. The catchment classification based on storm characteristics pointed to the effect of catchment landscape properties on precipitation variability and consequently on FDC shapes. The effect of landscape properties was pronounced: low values of the slope of the FDC (SFDC), hinting at limited flow variability, were present in regions of high precipitation variability, whereas in regions with low precipitation variability the SFDC values were larger. The topographic index distribution at the catchment scale indicated that saturation excess overland flow mitigated the flow variability under conditions of low elevations with large soil moisture storage capacity and high infiltration rates. The SFDC values increased due to the predominant subsurface stormflow in catchments at high elevations with limited soil moisture storage capacity and low infiltration rates. Our analyses also highlighted the major role of soil infiltration rates on the FDC despite the impact of the predominant runoff generation mechanism and catchment elevation. In conditions of slow infiltration rates in soils of large moisture storage capacity (at low elevations) and predominant saturation excess, the SFDC values were larger. On the other hand, the SFDC values decreased in catchments with prevalent subsurface stormflow and poorly drained soils of small moisture storage capacity. The analysis of the flow component FDCs demonstrated that the interflow contribution to the response was highest in catchments with large values of the slope of the FDC. The surface flow FDC was the most affected by precipitation, as it tracked the precipitation duration curve (PDC). In catchments with low SFDC values this became less applicable, as the surface flow FDC diverged from the PDC at the upper tail (> 40% flow percentile). The interflow and baseflow FDCs best illustrated the filtering effect on precipitation. The process understanding achieved in this study is key for flow simulation and assessment, as well as for future work focusing on process-based FDC prediction.
Buratti, C; Barbanera, M; Lascaro, E; Cotana, F
2018-03-01
The aim of the present study is to analyze the influence of independent process variables such as temperature, residence time, and heating rate on the torrefaction process of coffee chaff (CC) and spent coffee grounds (SCGs). Response surface methodology and a three-factor and three-level Box-Behnken design were used in order to evaluate the effects of the process variables on the weight loss (WL) and the Higher Heating Value (HHV) of the torrefied materials. Results showed that the effects of the three factors on both responses were sequenced as follows: temperature > residence time > heating rate. Data obtained from the experiments were analyzed by analysis of variance (ANOVA) and fitted to second-order polynomial models by using multiple regression analysis. Predictive models were determined that provide satisfactory fits to the experimental data, with coefficient of determination (R²) values higher than 0.95. An optimization study using Derringer's desirability function methodology was also carried out and the optimal torrefaction conditions were found: temperature 271.7°C, residence time 20 min, heating rate 5°C/min for CC and 256.0°C, 20 min, 25°C/min for SCGs. The experimental values closely agree with the corresponding predicted values. Copyright © 2017 Elsevier Ltd. All rights reserved.
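A minimal sketch of Derringer-style desirability optimization over fitted response surfaces, assuming invented quadratic models for weight loss and HHV in two coded variables only:

```python
import numpy as np
from scipy.optimize import minimize

def weight_loss(x):          # hypothetical fitted model, coded variables in [-1, 1]
    T, t = x
    return 25 + 10 * T + 4 * t + 3 * T * t - 2 * T**2

def hhv(x):                  # hypothetical fitted model for higher heating value (MJ/kg)
    T, t = x
    return 21 + 2.5 * T + 0.8 * t - 0.5 * T**2

def desirability(value, low, high, maximize=True):
    d = (value - low) / (high - low) if maximize else (high - value) / (high - low)
    return float(np.clip(d, 0.0, 1.0))

def overall(x):
    d1 = desirability(weight_loss(x), 15, 45, maximize=False)   # keep mass yield high
    d2 = desirability(hhv(x), 20, 26, maximize=True)            # maximise energy density
    return -np.sqrt(d1 * d2)                                    # geometric mean, negated

res = minimize(overall, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("optimal coded settings (T, t):", np.round(res.x, 2), "D =", round(-res.fun, 3))
```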
Le, Lena; Bagstad, Kenneth J.; Cook, Philip S.; Leong, Kirsten M.; DiDonato, Eva
2015-01-01
Gaining public support for management actions is important to the success of public land management agencies’ efforts to protect threatened and endangered species. This is especially relevant at national parks, where managers balance two aspects of their conservation mission: to protect resources and to provide for public enjoyment. This study examined variables potentially associated with support for management actions at Cape Lookout National Seashore, a unit of the National Park Service. Two visitor surveys were conducted at the park at different seasons, and a resident survey was conducted for households in Carteret County, North Carolina, where the park is located. The goal of the project was to provide park managers with information that may help with the development of communication strategies concerning the park’s conservation mission. These communication strategies may help to facilitate mutual understanding and garner public support for management actions. Several variables were examined as potential determinants that park managers ought to consider when developing communication strategies. Multinomial logistic regression was applied to examine the relationships between these variables and the likelihood of support for or opposition to management actions. The variables examined included perceived shared values of park resources, general environmental attitudes, level of familiarity with park resources and regulations, knowledge about threatened and endangered species, level of trust in the decision-making process, and perceived shared values with park management. In addition, demographic variables such as income level, respondent age, residency status, and visitor type were also used. The results show that perceived values of threatened and endangered species, trust in park managers and the decision-making process, and perceived shared values with park managers were among the strongest indicators of support for management actions. Different user groups also exhibited different levels of support, with groups engaged in specialized recreation activities (fishers) being the most likely to oppose management actions. While our findings are not surprising, they corroborate past research that has shown an effective communications strategy should be customized to target different audiences. In addition, management should focus on developing long-term relationships that build trust in and foster credibility of decision-making processes.
Value: A Framework for Radiation Oncology
Teckie, Sewit; McCloskey, Susan A.; Steinberg, Michael L.
2014-01-01
In the current health care system, high costs without proportional improvements in quality or outcome have prompted widespread calls for change in how we deliver and pay for care. Value-based health care delivery models have been proposed. Multiple impediments exist to achieving value, including misaligned patient and provider incentives, information asymmetries, convoluted and opaque cost structures, and cultural attitudes toward cancer treatment. Radiation oncology as a specialty has recently become a focus of the value discussion. Escalating costs secondary to rapidly evolving technologies, safety breaches, and variable, nonstandardized structures and processes of delivering care have garnered attention. In response, we present a framework for the value discussion in radiation oncology and identify approaches for attaining value, including economic and structural models, process improvements, outcome measurement, and cost assessment. PMID:25113759
Removal of iron ore slimes from a highly turbid water by DAF.
Faustino, L M; Braga, A S; Sacchi, G D; Whitaker, W; Reali, M A P; Leal Filho, L S; Daniel, L A
2018-05-30
This paper addresses Dissolved Air Flotation (DAF) process variables, such as the flocculation parameters and the recycle water addition, as well as the pretreatment chemical variables (coagulation conditions), to determine the optimal values for the flotation of iron ore slimes found in a highly turbid water sample from the Gualaxo do Norte River, a tributary of the Doce River Basin in Minas Gerais, Brazil. This work was conducted using a flotatest batch laboratory-scale device to evaluate the effectiveness of DAF for cleaning the water polluted by the Samarco tailings dam leakage and determine the ability of DAF to reduce the water turbidity from 358 NTU to values below 100 NTU, aiming to comply with current legislation. The results showed that the four types of tested coagulants (PAC, ferric chloride, Tanfloc SG and Tanfloc SL) provided adequate conditions for coagulation, flocculation and flotation (in the range of 90-99.6% turbidity reduction). Although the process variables were optimized and low residual turbidity values were achieved, the results revealed that a portion of the flocs settled at the bottom of the flotatest columns, which indicated that the turbidity results represented removal by flotation and sedimentation processes acting simultaneously.
Ulaczyk, Jan; Morawiec, Krzysztof; Zabierowski, Paweł; Drobiazg, Tomasz; Barreau, Nicolas
2017-09-01
A data mining approach is proposed as a useful tool for the control parameters analysis of the 3-stage CIGSe photovoltaic cell production process, in order to find the variables that are most relevant to cell electric parameters and efficiency. The analysed data set consists of stage duration times, heater power values as well as temperatures for the element sources and the substrate - there are 14 variables per sample in total. The most relevant variables of the process have been found based on random forest analysis with the application of the Boruta algorithm. 118 CIGSe samples, prepared at Institut des Matériaux Jean Rouxel, were analysed. The results are close to experimental knowledge on the CIGSe cell production process. They provide new evidence for setting the production parameters of new cells and for further research. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
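A minimal sketch of Boruta-style relevance screening with a random forest, assuming synthetic data with 14 process variables of which only a few drive the response; the real Boruta algorithm iterates this shadow-feature comparison many times:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(21)
n, p = 118, 14                                     # samples and process variables
X = rng.normal(size=(n, p))
efficiency = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 0.5, n)

# append shuffled "shadow" copies of every variable as an importance baseline
shadows = rng.permuted(X, axis=0)
X_aug = np.hstack([X, shadows])
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_aug, efficiency)

imp = rf.feature_importances_
threshold = imp[p:].max()                          # best shadow importance
relevant = np.where(imp[:p] > threshold)[0]
print("variables deemed relevant:", relevant)
```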
Students' daily emotions in the classroom: intra-individual variability and appraisal correlates.
Ahmed, Wondimu; van der Werf, Greetje; Minnaert, Alexander; Kuyper, Hans
2010-12-01
Recent literature on emotions in education has shown that competence- and value-related beliefs are important sources of students' emotions; nevertheless, the role of these antecedents in students' daily functioning in the classroom is not yet well-known. More importantly, to date we know little about intra-individual variability in students' daily emotions. The objectives of the study were (1) to examine within-student variability in emotional experiences and (2) to investigate how competence and value appraisals are associated with emotions. It was hypothesized that emotions would show substantial within-student variability and that there would be within-person associations between competence and value appraisals and the emotions. The sample consisted of 120 grade 7 students (52% girls) in 5 randomly selected classrooms in a secondary school. A diary method was used to acquire daily process variables of emotions and appraisals. Daily emotions and daily appraisals were assessed using items adapted from existing measures. Multi-level modelling was used to test the hypotheses. As predicted, the within-person variability in emotional states accounted for between 41% (for pride) and 70% (for anxiety) of total variability in the emotional states. Also as hypothesized, the appraisals were generally associated with the emotions. The within-student variability in emotions and appraisals clearly demonstrates the adaptability of students with respect to situational affordances and constraints in their everyday classroom experiences. The significant covariations between the appraisals and emotions suggest that within-student variability in emotions is systematic.
Observations and Models of Highly Intermittent Phytoplankton Distributions
Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu
2014-01-01
The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow for measurement of phytoplankton data at greater resolution, displaying high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misinterpret the reality of planktonic ecosystems, especially at the microscale level. To consider intermittency of variables, in this work, a new modelling approach to the planktonic ecosystem is applied, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. Also, we have found a critical value of variance of overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea about the importance of the fluctuating parts of model variables and about when to use the closure approach. Comparisons of plots of mean versus standard deviation of phytoplankton at different depths obtained using this new approach with real observations show good conformity. PMID:24787740
Hicks, C; Schinckel, A P; Forrest, J C; Akridge, J T; Wagner, J R; Chen, W
1998-09-01
Carcass and live measurements of 165 market hogs that represented seven genotypes were used to investigate genotype and sex biases associated with the prediction of fat-free lean mass (FFLM) and carcass value. Carcass value was determined as the sum of the product of weight of individual cuts and their average unit prices adjusted for slaughter and processing costs. Independent variables used in the prediction equations included carcass measurements, such as optical probe, midline ruler, ribbed carcass measurements, and electromagnetic scanning (EMSCAN), and live animal ultrasonic scanning. The effect of including subpopulation mean values of independent variables in the prediction equations for FFLM and carcass value was also investigated. Genotype and sex biases were found in equations in which midline backfat, ribbed carcass, EMSCAN, and live ultrasonic scanning were used as single technology sets of measurements. The prediction equations generally undervalued genotypes with above-average carcass value. Biases were reduced when measurements of combined technologies and mean-adjusted variables were used. The FFLM and carcass value were underestimated for gilts and overestimated for barrows. Equations that combined OP and EMSCAN technologies were the most accurate and least biased for both FFLM and carcass value. Equations that included carcass weight and midline last-rib backfat thickness measurements were the least accurate and most biased. Genotype and sex biases must be considered when predicting FFLM and carcass value.
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at the process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
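A minimal sketch of the backward Dynamic Programming step for a two-process chain, assuming a scalar product state on a small grid and invented transformation and cost functions standing in for the real ones:

```python
import numpy as np

states = np.linspace(0.0, 1.0, 21)          # discretised product property
controls = np.linspace(-0.5, 0.5, 11)       # discretised control variable per process
target = 0.8                                # quality requirement on the final product

def transform(x, u, stage):
    """Hypothetical process transformation with a stage-dependent gain."""
    gain = [0.6, 0.9][stage]
    return np.clip(x + gain * u, 0.0, 1.0)

def terminal_cost(x):
    """Deviation of the final product property from the quality requirement."""
    return (x - target) ** 2

policy = {}
cost_to_go = terminal_cost(states)          # value function after the last process
for stage in (1, 0):                        # backward over the chain
    new_cost = np.empty_like(states)
    for i, x in enumerate(states):
        # candidate next states for every control; interpolate the stored value function
        nxt = transform(x, controls, stage)
        total = 0.01 * controls**2 + np.interp(nxt, states, cost_to_go)
        best = np.argmin(total)
        policy[(stage, i)] = controls[best]
        new_cost[i] = total[best]
    cost_to_go = new_cost

# runtime use: given the measured state entering process 0, look up the control
i0 = np.abs(states - 0.35).argmin()
print("optimal control for process 0 at state 0.35:", policy[(0, i0)])
```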
Kim, Matthew H; Marulis, Loren M; Grammer, Jennie K; Morrison, Frederick J; Gehring, William J
2017-03-01
Motivational beliefs and values influence how children approach challenging activities. The current study explored motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them, focusing on two event-related potential (ERP) components: the error-related negativity (ERN) and the error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 4- to 6-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, whereas stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness.
Six Sigma methods applied to cryogenic coolers assembly line
NASA Astrophysics Data System (ADS)
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler: the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective has been set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team has been gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) has been applied to the test bench, and results after the R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes have been identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability have been identified and eradicated, as shown by new results from the R&R gage study. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been established after a new calibration process for the test bench, a new filling procedure for the regenerator and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and kept for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvements in process capability have enabled the introduction of a sample testing procedure before delivery.
DOT National Transportation Integrated Search
2012-01-01
Purpose: : To determine ranking of important parameters and the overall sensitivity to values of variables in MOVES : To allow a greater understanding of the MOVES modeling process for users : Continued support by FHWA to transportation modeling comm...
Habib, Basant A; AbouGhaly, Mohamed H H
2016-06-01
This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for the optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R² values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn to represent the responses. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had a desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values, with a maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for the optimization of transfersomal formulations as an example of nanovesicular systems.
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
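A minimal sketch of the residual-testing step described above, on synthetic data standing in for the three-tank simulation: residual vectors (measurement minus model prediction) are screened per variable with univariate z-scores and jointly with the Mahalanobis distance, and the first alarm time under each test is reported. The thresholds and the injected "leak" are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p = 500, 3
model_pred = np.cumsum(rng.normal(0, 0.05, size=(n, p)), axis=0)   # stand-in model output
meas = model_pred + rng.normal(0, 0.2, size=(n, p))                # simulated measurements
meas[300:, 0] -= np.linspace(0, 1.5, n - 300)                      # slow leak in tank 1

residuals = meas - model_pred
ref = residuals[:200]                                  # fault-free reference window
mu, sd = ref.mean(axis=0), ref.std(axis=0, ddof=1)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

z = (residuals - mu) / sd                              # univariate z-scores
d2 = np.einsum("ij,jk,ik->i", residuals - mu, cov_inv, residuals - mu)  # Mahalanobis^2

print("first alarm (|z| > 4):           t =", int(np.argmax(np.any(np.abs(z) > 4, axis=1))))
print("first alarm (Mahalanobis^2 > 25): t =", int(np.argmax(d2 > 25)))
```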
Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Shi, Xinyuan; Qiao, Yanjiang
2017-01-01
Dissolution is one of the critical quality attributes (CQAs) of oral solid dosage forms because it relates to the absorption of the drug. In this paper, the influence of raw materials, granules and process parameters on the dissolution of paracetamol tablets was analyzed using latent variable modeling methods. The variability in raw materials and in granules was characterized using principal component analysis (PCA). A multi-block partial least squares (MBPLS) model was used to determine the critical factors affecting the dissolution. The results showed that the binder amount, the post-granulation time, the API content in the granules, the fill depth and the punch tip separation distance were the critical factors, with variable importance in the projection (VIP) values larger than 1. The importance of each unit of the whole process was also ranked using the block importance in the projection (BIP) index. It was concluded that latent variable models (LVMs) are very useful tools for extracting information from the available data and improving the understanding of the dissolution behavior of paracetamol tablets. The obtained LVMs are also helpful for proposing the process design space and designing control strategies in further research. PMID:27689242
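The VIP index mentioned above can be computed for an ordinary (single-block) PLS model as sketched below; this is the standard VIP formula, not the paper's multi-block MBPLS/BIP implementation, and the data and factor layout are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in the Projection (VIP) for a fitted PLS model."""
    W = pls.x_weights_            # (n_features, n_components)
    T = pls.x_scores_             # (n_samples, n_components)
    Q = pls.y_loadings_           # (n_targets, n_components)
    p, _ = W.shape
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)   # y-variance per component
    w_norm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# Hypothetical data: rows are batches, columns are material/process factors
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=30)   # e.g. dissolution
pls = PLSRegression(n_components=2).fit(X, y)
print(vip_scores(pls))            # factors with VIP > 1 would be flagged as critical
```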
Coupling of snow and permafrost processes using the Basic Modeling Interface (BMI)
NASA Astrophysics Data System (ADS)
Wang, K.; Overeem, I.; Jafarov, E. E.; Piper, M.; Stewart, S.; Clow, G. D.; Schaefer, K. M.
2017-12-01
We developed a permafrost modeling tool by implementing the Kudryavtsev empirical permafrost active layer depth model (the so-called "Ku" component). The model is specifically set up with a basic model interface (BMI), which enhances its potential for coupling to other earth surface process model components. The model is accessible through the Web Modeling Tool of the Community Surface Dynamics Modeling System (CSDMS). The Kudryavtsev model has been applied to the whole of Alaska to model permafrost distribution at high spatial resolution, and model predictions have been verified against Circumpolar Active Layer Monitoring (CALM) in-situ observations. The Ku component uses monthly meteorological forcing, including air temperature, snow depth, and snow density, and predicts the active layer thickness (ALT) and the temperature on the top of permafrost (TTOP), which are important factors in snow-hydrological processes. BMI provides a straightforward way to couple the models with each other. Here, we present a case of coupling the Ku component to snow process components, namely the Snow-Degree-Day (SDD) and Snow-Energy-Balance (SEB) methods, which are existing components of the hydrological model TOPOFLOW. The workflow is: (1) get variables from the meteorology component, set the values in the snow process component, and advance the snow process component; (2) get variables from the meteorology and snow components, provide these to the Ku component and advance it; (3) get variables from the snow process component, set the values in the meteorology component, and advance the meteorology component. The next phase is to couple the permafrost component with the fully BMI-compliant TOPOFLOW hydrological model, which could provide a useful tool to investigate the permafrost hydrological effect.
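A minimal sketch of the three-step BMI coupling loop described above, using the standard BMI calls (initialize, get_value, set_value, update, finalize). The StubBMI class and the CSDMS-style variable names are placeholders for illustration only, not the actual Ku, SDD/SEB, or TOPOFLOW components.

```python
import numpy as np

class StubBMI:
    """Placeholder standing in for a BMI-compliant component (meteo, snow, Ku)."""
    def __init__(self): self._v = {}
    def initialize(self, cfg=None): pass
    def get_value(self, name): return self._v.get(name, np.zeros(1))
    def set_value(self, name, value): self._v[name] = np.asarray(value, dtype=float)
    def update(self): pass
    def finalize(self): pass

meteo, snow, ku = StubBMI(), StubBMI(), StubBMI()
meteo.set_value("atmosphere_bottom_air__temperature", [-12.0])   # deg C, hypothetical
for month in range(12):
    # (1) meteorology -> snow component, then advance the snow component
    t_air = meteo.get_value("atmosphere_bottom_air__temperature")
    snow.set_value("atmosphere_bottom_air__temperature", t_air)
    snow.update()
    # (2) meteorology + snow -> Ku component, then advance the Ku component
    ku.set_value("atmosphere_bottom_air__temperature", t_air)
    ku.set_value("snowpack__depth", snow.get_value("snowpack__depth"))
    ku.set_value("snowpack__mass-per-volume_density",
                 snow.get_value("snowpack__mass-per-volume_density"))
    ku.update()
    # (3) snow -> meteorology component, then advance the meteorology component
    meteo.set_value("snowpack__depth", snow.get_value("snowpack__depth"))
    meteo.update()
for c in (meteo, snow, ku):
    c.finalize()
```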
Mathematical values in the processing of Chinese numeral classifiers and measure words.
Her, One-Soon; Chen, Ying-Chun; Yen, Nai-Shing
2017-01-01
A numeral classifier is required between a numeral and a noun in Chinese; it comes in two varieties, the sortal classifier (C) and the mensural classifier (M), also known as 'classifier' and 'measure word', respectively. Cs categorize objects based on semantic attributes, while both Cs and Ms denote quantity in terms of mathematical values. The aim of this study was to conduct a psycholinguistic experiment examining whether participants process C/Ms based on their mathematical values, using a semantic distance comparison task in which participants judged which of two C/M phrases was semantically closer to a target C/M. Results showed that participants performed more accurately and faster for C/Ms with fixed values than for those with variable values. These results demonstrate that mathematical values do play an important role in the processing of C/Ms. This study may thus shed light on the influence of the linguistic system of C/Ms on magnitude cognition.
The importance of normalisation in the construction of deprivation indices.
Gilthorpe, M S
1995-12-01
Measuring socio-economic deprivation is a major challenge usually addressed through the use of composite indices. This paper aims to clarify the technical details regarding composite index construction. The distribution of some variables, for example unemployment, varies over time, and these variations must be considered when composite indices are periodically re-evaluated. The process of normalisation is examined in detail and particular attention is paid to the importance of symmetry and skewness of the composite variable distributions. Four different solutions of the Townsend index of socioeconomic deprivation are compared to reveal the effects that differing transformation processes have on the meaning or interpretation of the final index values. Differences in the rank order and the relative separation between values are investigated. Constituent variables which have been transformed to yield a more symmetric distribution provide indices that behave similarly, irrespective of the actual transformation methods adopted. Normalisation is seen to be of less importance than the removal of variable skewness. Furthermore, the degree of success of the transformation in removing skewness has a major effect in determining the variation between the individual electoral ward scores. Constituent variables undergoing no transformation produce an index that is distorted by the inherent variable skewness, and this index is not consistent between re-evaluations, either temporally or spatially. Effective transformation of constituent variables should always be undertaken when generating a composite index. The most important aspect is the removal of variable skewness. There is no need for the transformed variables to be normally distributed, only symmetrically distributed, before standardisation. Even where additional parameter weights are to be applied, which significantly alter the final index, appropriate transformation procedures should be adopted for the purpose of consistency over time and between different geographical areas.
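A minimal sketch of the construction pattern discussed above: skewed constituent variables are transformed toward a more symmetric distribution, standardised, and summed into a composite score. The ward values, the choice of log transformation, and which constituents are treated as skewed are illustrative assumptions, not the paper's exact Townsend specification.

```python
import numpy as np
import pandas as pd

# Hypothetical ward-level percentages (Townsend-style constituents)
wards = pd.DataFrame({
    "unemployment":  [3.1, 5.4, 12.8, 2.2, 7.9],
    "overcrowding":  [1.2, 2.5,  6.3, 0.8, 3.4],
    "no_car":        [18.0, 27.0, 55.0, 12.0, 33.0],
    "not_owner_occ": [22.0, 30.0, 61.0, 15.0, 41.0],
})

def composite_index(df, skewed=("unemployment", "overcrowding")):
    t = df.copy()
    for col in skewed:                     # reduce skewness before standardisation
        t[col] = np.log(t[col] + 1.0)
    z = (t - t.mean()) / t.std(ddof=0)     # standardise each constituent
    return z.sum(axis=1)                   # higher score = more deprived

wards["index"] = composite_index(wards)
print(wards.sort_values("index", ascending=False))
```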
Amotivation and Indecision in the Decision-Making Processes Associated with University Entry
ERIC Educational Resources Information Center
Jung, Jae Yup
2013-01-01
This study developed and tested two models that examined the decision-making processes of adolescents relating to entry into university, in terms of the extent to which they may be amotivated and undecided. The models incorporated variables derived from self-determination theory, expectancy-value theory, and research on occupational indecision. A…
ERIC Educational Resources Information Center
Bergman, Lars R.; Nurmi, Jari-Erik; von Eye, Alexander A.
2012-01-01
I-states-as-objects-analysis (ISOA) is a person-oriented methodology for studying short-term developmental stability and change in patterns of variable values. ISOA is based on longitudinal data with the same set of variables measured at all measurement occasions. A key concept is the "i-state," defined as a person's pattern of variable…
Micro-Macro Duality and Space-Time Emergence
NASA Astrophysics Data System (ADS)
Ojima, Izumi
2011-03-01
The microscopic origin of space-time geometry is explained on the basis of an emergence process associated with the condensation of an infinite number of microscopic quanta responsible for symmetry breakdown, which implements the basic essence of "Quantum-Classical Correspondence" and of the forcing method in physical and mathematical contexts, respectively. From this viewpoint, the space-time dependence of physical quantities arises from the "logical extension" [8] to change "constant objects" into "variable objects" by tagging the order parameters associated with the condensation onto "constant objects"; the logical direction here from a value y to a domain variable x (to materialize the basic mechanism behind the Gel'fand isomorphism) is just opposite to that common in the usual definition of a function ƒ : x⟼ƒ(x) from its domain variable x to a value y = ƒ(x).
Stochastic investigation of wind process for climatic variability identification
NASA Astrophysics Data System (ADS)
Deligiannis, Ilias; Tyrogiannis, Vassilis; Daskalou, Olympia; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris
2016-04-01
The wind process is considered one of the hydrometeorological processes that generate and drive climate dynamics. We use a dataset comprising hourly wind records to identify statistical variability, with emphasis on the most recent period. Specifically, we investigate the occurrence of mean, maximum and minimum values, and we estimate statistical properties such as the marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
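A minimal sketch of a climacogram estimate as used above (variance of the scale-averaged process versus averaging scale), computed from an hourly series; the synthetic gamma-distributed data are only a stand-in for the actual wind records.

```python
import numpy as np

def climacogram(x, scales):
    """Variance of the scale-averaged process for each averaging scale k."""
    x = np.asarray(x, dtype=float)
    out = []
    for k in scales:
        n = (len(x) // k) * k
        means = x[:n].reshape(-1, k).mean(axis=1)   # non-overlapping k-averages
        out.append(means.var(ddof=1))
    return np.array(out)

rng = np.random.default_rng(0)
hourly_wind = rng.gamma(shape=2.0, scale=3.0, size=20_000)   # synthetic stand-in
scales = [1, 2, 4, 8, 16, 32, 64, 128]
print(dict(zip(scales, np.round(climacogram(hourly_wind, scales), 3))))
# A slow (power-law) decay of variance with scale would indicate long-term persistence.
```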
ERIC Educational Resources Information Center
Turgut, Sedat; Temur, Özlem Dogan
2017-01-01
In this research, the effects of using game in mathematics teaching process on academic achievement in Turkey were examined by metaanalysis method. For this purpose, the average effect size value and the average effect size values of the moderator variables (education level, the field of education, game type, implementation period and sample size)…
Clustering and variable selection in the presence of mixed variable types and missing data.
Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D
2018-05-17
We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines.
Borella, Erika; de Ribaupierre, Anik; Cornoldi, Cesare; Chicherio, Christian
2013-09-01
The present study investigates intraindividual variability (IIV) in the Color-Stroop test and in a simple reaction time (SRT) task. Performance level and variability in reaction times (RTs), quantified with different measures such as the individual standard deviation (ISD) and coefficient of variation (ICV) as well as the ex-Gaussian parameters (mu, sigma, tau), were analyzed in 24 children with attention deficit/hyperactivity disorder (ADHD) and 24 typically developing children (TDC). Children with ADHD and TDC presented equivalent Color-Stroop interference effects when mean RTs were considered, and the two groups did not differ in the SRT task. Interestingly, compared to TDC, children with ADHD were more variable in their responses, showing increased ISD and ICV in the Color-Stroop interference condition and in the SRT task. Moreover, children with ADHD exhibited higher tau values, that is, more frequent abnormally long RTs, in the Color-Stroop interference condition than did the TDC, but comparable tau values in the SRT, suggesting more variable responses. These results speak in favor of a general deficit in more basic and central processes that only secondarily may affect the efficiency of inhibitory processes in children with ADHD. Overall the present findings confirm the role of IIV as a cornerstone in the ADHD cognitive profile and support the search for fine-grained analysis of performance fluctuations.
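A brief sketch of the variability measures named above (ISD, ICV, and the ex-Gaussian parameters), computed on synthetic reaction times; the ex-Gaussian fit uses SciPy's exponentially modified normal distribution and is not the study's analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic reaction times (ms): Gaussian component plus an exponential tail
rt = rng.normal(450, 60, 300) + rng.exponential(120, 300)

isd = rt.std(ddof=1)                  # individual standard deviation (ISD)
icv = isd / rt.mean()                 # individual coefficient of variation (ICV)

# Ex-Gaussian fit via the exponentially modified normal distribution
K, mu, sigma = stats.exponnorm.fit(rt)
tau = K * sigma                       # mean of the exponential (slow-tail) component
print(f"ISD={isd:.1f} ms  ICV={icv:.3f}  mu={mu:.0f}  sigma={sigma:.0f}  tau={tau:.0f}")
```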
Harsh, B N; Arkfeld, E K; Mohrhauser, D A; King, D A; Wheeler, T L; Dilger, A C; Shackelford, S D; Boler, D D
2017-11-01
The objective was to determine the predictive abilities of HCW for loin, ham, and belly quality of 7,684 pigs with carcass weights ranging from 53.2 to 129.6 kg. Carcass composition, subjective loin quality, and ham face color were targeted on all carcasses, whereas in-plant instrumental loin color and belly quality were assessed on 52.0 and 47.5% of carcasses, respectively. Loin chop slice shear force (SSF), cured ham quality, and adipose iodine value (IV) were evaluated on at least 10% of the population. The slopes of regression lines and coefficients of determination between HCW and quality traits were computed using PROC REG of SAS and considered significant at P ≤ 0.05. As HCW increased, boneless loins became darker and redder, evidenced by lower L* (β = -0.0243, P < 0.001) and greater a* values (β = 0.0106, P < 0.001); however, HCW accounted for only ≤0.80% of the variability in loin L* and a* values. Similarly, subjective loin color score (β = 0.0024, P < 0.001) increased with increasing carcass weight, but subjective marbling score was not affected by HCW (β = -0.0022, P = 0.06). After 20 d of aging, HCW explained only 0.98% of the variability in loin L* values (β = -0.0287, P < 0.01). Heavier carcasses had lower SSF values (β = -0.1269, P < 0.001) of LM chops, although HCW explained only 4.46% of the variability in SSF. Although heavier carcasses produced loins that exhibited lower ultimate pH values (β = -0.0018, P < 0.001), HCW explained only 1.23% of the variability in ultimate loin pH. Interestingly, cook loss decreased (β = -0.0521, P < 0.001) as HCW increased, with HCW accounting for 5.60% of the variability in cook loss. Heavier carcasses resulted in darker, redder ham face color (P < 0.001), but HCW accounted for only ≤2.87% of the variability in ham face L* values and 0.47% of the variability in a* values. Heavier carcasses produced thicker and firmer bellies, with HCW accounting for 37.81% of the variability in belly thickness (β = 0.0272, P < 0.001), 20.35% of the variability in subjective flop score (β = 0.0406, P < 0.001), and 10.35% of the variability in IV (β = -0.1263, P < 0.001). Overall, the proportion of variability in loin and ham quality explained by HCW was poor (≤5.60%), suggesting that HCW is a poor predictor of the primal quality of pigs within this weight range. Nonetheless, HCW was a moderate predictor of belly quality traits. The findings of this study suggest that increasing HCW did not compromise loin, ham, or belly quality attributes.
Clinical governance in the management of induction of labour.
Zuberi, Nadeem Faiyaz; Siddiqui, Salva; Qureshi, Rahat Najam
2003-02-01
To determine whether dissemination of explicit guidelines, developed in consensus with stakeholders, for the processes of induction of labour (IOL), results in reduction of variability in clinical practice. A prospective behaviour modification interventional study. The study was conducted in the department of Obstetrics and Gynaecology at the Aga Khan University, Karachi, between January 1 and August 31, 2002. In a total of 142 conveniently sampled women, undergoing IOL, pre-identified quality assessment indicators were measured. After collection of data from initial 71-women (pre-intervention group) mutually agreed guidelines for clinical practice were disseminated, over a period of time, among consultants, residents and nurses. These indicators were again measured in subsequent 71 women (post-intervention group) to evaluate magnitude of residual non-conformities in these processes. Following behaviour modification interventions, nonconformities in consultants and residents-dependent processes like timely review of patients by consultants (72 vs 1.4%, p value <0.0001), documentation of indication for IOL (66.2 vs 16.9%, p value <0.0001), method of induction for IOL (56.3 vs 28.2%, p value 0.0001), and calculation of Bishop score before IOL (38.0 vs 4.2 %, p value <0.0001) were significantly reduced. Dissemination of explicit guidelines developed in consensus with stakeholders significantly reduces variability in clinical practice. Our model can be used for improving quality of care in other areas of obstetric health care.
Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj
2015-01-01
Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. The responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the Design-Expert® software to evaluate both the repeatability and the reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). A desirability function was used to optimize the response variables, and the observed responses were in agreement with the predicted values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
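A minimal sketch of the design and modeling step described above: a face-centred central composite design for three coded factors is generated and a full quadratic response-surface model is fit by least squares. The factor names follow the study, but the levels, the simulated response, and the fitted coefficients are purely illustrative.

```python
import numpy as np
from itertools import product

# Face-centred CCD (alpha = 1) for three coded factors:
# motor speed (x1), pump speed (x2), bead volume (x3)
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # 8 corner runs
axial = np.array([[s if i == j else 0 for i in range(3)]
                  for j in range(3) for s in (-1, 1)], dtype=float)   # 6 face points
center = np.zeros((3, 3))                                             # 3 center runs
X = np.vstack([factorial, axial, center])                             # 17 runs total

def quad_terms(x):
    x1, x2, x3 = x
    return [1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2]

D = np.array([quad_terms(x) for x in X])
rng = np.random.default_rng(3)
y = 400 - 60*X[:, 0] - 35*X[:, 2] + 20*X[:, 0]**2 + rng.normal(0, 10, len(X))  # e.g. d90 (nm)

beta, *_ = np.linalg.lstsq(D, y, rcond=None)     # quadratic response-surface fit
print(np.round(beta, 1))                         # coefficients fed to the desirability step
```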
Molloy Elreda, Lauren; Coatsworth, J Douglas; Gest, Scott D; Ram, Nilam; Bamberger, Katharine
2016-11-01
Although the majority of evidence-based programs are designed for group delivery, group process and its role in participant outcomes have received little empirical attention. Data were collected from 20 groups of participants (94 early adolescents, 120 parents) enrolled in an efficacy trial of a mindfulness-based adaptation of the Strengthening Families Program (MSFP). Following each weekly session, participants reported on their relations to group members. Social network analysis and methods sensitive to intraindividual variability were integrated to examine weekly covariation between group process and participant progress, and to predict post-intervention outcomes from levels and changes in group process. Results demonstrate hypothesized links between network indices of group process and intervention outcomes and highlight the value of this unique analytic approach to studying intervention group process.
Biala, T A; Jator, S N
2015-01-01
In this article, the boundary value method is applied to solve three-dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods, which are applied via the block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.
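A small sketch of the semi-discretisation step described above, illustrated for the Laplace operator on a unit cube with homogeneous Dirichlet data in y and z: central differences in y and z reduce the PDE to a linear system of second-order ODEs in x. The grid size is arbitrary, and the block boundary value solver itself is not shown.

```python
import numpy as np

# For u_xx + u_yy + u_zz = 0, replace the y and z second derivatives with central
# differences so that the unknown becomes a vector U(x) satisfying U''(x) = A U(x).
ny = nz = 8
hy, hz = 1.0 / (ny + 1), 1.0 / (nz + 1)

def second_diff(n, h):
    """Standard three-point second-difference matrix with Dirichlet boundaries."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

Dyy, Dzz = second_diff(ny, hy), second_diff(nz, hz)
Iy, Iz = np.eye(ny), np.eye(nz)
A = -(np.kron(Dyy, Iz) + np.kron(Iy, Dzz))   # so that U''(x) = A U(x)
print(A.shape)                               # (ny*nz, ny*nz) coupled ODE system in x
```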
Wang, Xiaoxue; Li, Xuyong
2017-01-01
Particle grain size is an important indicator of the variability in the physical characteristics and pollutant composition of road-deposited sediments (RDS). Quantitative assessment of the grain-size variability in RDS amount, metal concentration, metal load and GSFLoad is essential for eliminating the uncertainty it causes in the estimation of RDS emission loads and the formulation of control strategies. In this study, grain-size variability was explored and quantified using the coefficient of variation (Cv) of the particle size compositions, metal concentrations, metal loads, and GSFLoad values in RDS. Several trends in the grain-size variability of RDS were identified: (i) the variability of the medium class (105–450 µm) in terms of particle size composition, metal loads and GSFLoad values was smaller than that of the fine (<105 µm) and coarse (450–2000 µm) classes; (ii) the grain-size variability in terms of metal concentrations increased as the particle size increased, while the metal concentrations decreased; (iii) compared to the Lorenz coefficient (Lc), the Cv was similarly effective at describing the grain-size variability, while being simpler to calculate because it does not require the data to be pre-processed. The results of this study will facilitate identification of the uncertainty in modelling RDS caused by grain-size class variability. PMID:28788078
Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos
2011-03-01
The current work presents an innovative statistical approach to modelling ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater", and carries no information about the size of the difference between two particular values. The ordinal variable considered in this study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values correspond to a process of imposex development that can be considered continuous in both the biological and statistical senses and can be described by a latent non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted on the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. the shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is a need to model variables that can only be assessed through an ordinal scale of values.
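An ordered logit model of the kind described above can be fit with statsmodels' OrderedModel, as sketched below on synthetic VDS-like data with shell size and survey year as covariates; the data, the covariate effects, and the way P(VDS ≥ 2) is summarised are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic stand-in data: VDS stage (ordinal 0-4) per female, with covariates
rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({"shell_size": rng.normal(25, 3, n),           # mm
                   "year": rng.integers(2003, 2009, n)})
latent = 0.15 * df["shell_size"] - 0.4 * (df["year"] - 2003) + rng.logistic(size=n)
df["vds"] = pd.cut(latent, bins=[-np.inf, 1, 2.5, 4, 5.5, np.inf],
                   labels=False).astype(int)                      # ordinal response 0..4

model = OrderedModel(df["vds"], df[["shell_size", "year"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.params)

# Predicted category probabilities; P(VDS >= 2) relates to the OSPAR-style criterion
probs = np.asarray(res.predict(df[["shell_size", "year"]]))
print("mean P(VDS >= 2):", float(probs[:, 2:].sum(axis=1).mean()))
```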
Psychometric properties of the Valued Living Questionnaire Adapted to Dementia Caregiving.
Romero-Moreno, R; Gallego-Alberto, L; Márquez-González, M; Losada, A
2017-09-01
Caring for a relative with dementia is associated with physical and emotional health problems in caregivers. There are no studies analysing the role of personal values in the caregiver stress process. This study aims to analyse the psychometric properties of the Valued Living Questionnaire Adapted to Caregiving (VLQAC) and to explore the relationship between personal values and stressors, coping strategies and caregiver distress. A total of 253 individual interviews with caregivers of relatives with dementia were conducted, and the following variables were assessed: personal values, stressors, cognitive fusion, emotional acceptance, depression, anxiety, and satisfaction with life. An exploratory factor analysis and hierarchical regression analyses were carried out. Two factors were obtained, Commitment to Own Values and Commitment to Family Values, which together explain 43.42% of the variance, with reliability coefficients (Cronbach's alpha) of .76 and .61, respectively. Personal values had a significant effect on emotional distress (depression and anxiety) and satisfaction with life, even when controlling for socio-demographic variables, stressors and coping strategies. Results suggest that the personal values construct of dementia caregivers is two-dimensional. The personal values of caregivers play an important role in accounting for distress and satisfaction with life in this population.
Normalized value coding explains dynamic adaptation in the human valuation process.
Khaw, Mel W; Glimcher, Paul W; Louie, Kenway
2017-11-28
The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
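A minimal sketch of divisive normalization applied to a sequence of item values, in the spirit of the account above: the current value is divided by a term that grows with the average of recently observed values, so the same item is valued less after a high-value history. The window length and semisaturation constant below are illustrative assumptions.

```python
import numpy as np

def normalized_value(values, window=5, sigma=1.0):
    """Divisively normalized value of each item given the recent value history."""
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for t, v in enumerate(values):
        recent = values[max(0, t - window):t]         # previously observed items
        context = recent.mean() if recent.size else 0.0
        out[t] = v / (sigma + context)                # divisive normalization
    return out

low_history = normalized_value([2, 2, 2, 2, 10])      # target item after low-value items
high_history = normalized_value([8, 8, 8, 8, 10])     # same item after high-value items
print(low_history[-1], high_history[-1])   # identical item is valued less in the richer context
```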
Variable mass pendulum behaviour processed by wavelet analysis
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Magazù, S.
2017-01-01
The present work highlights how, in order to characterize the motion of a variable-mass pendulum, wavelet analysis can be an effective tool for furnishing information on the time evolution of the oscillation spectral content. In particular, the wavelet transform is applied to process the motion of a hung funnel that loses fine sand at an exponential rate; it is shown how, in contrast to the Fourier transform, which furnishes only an average frequency value for the motion, the wavelet approach makes it possible to perform a joint time-frequency analysis. The work is addressed to undergraduate and graduate students.
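A short sketch of the idea above using PyWavelets: a synthetic oscillation whose frequency drifts in time (a stand-in for the funnel losing sand) is analysed with a continuous wavelet transform, whose ridge tracks the drifting frequency that a single Fourier spectrum would average out.

```python
import numpy as np
import pywt

dt = 0.01
t = np.arange(0, 60, dt)
freq = 0.8 + 0.4 * (1 - np.exp(-t / 20))          # slowly increasing frequency (Hz)
theta = np.cos(2 * np.pi * np.cumsum(freq) * dt)  # oscillation with drifting frequency

scales = np.arange(2, 200)
coefs, freqs = pywt.cwt(theta, scales, "morl", sampling_period=dt)
dominant = freqs[np.abs(coefs).argmax(axis=0)]    # ridge: dominant frequency vs time
print(dominant[::1000].round(2))                  # shows the drift a plain FFT would average out
```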
Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2005-01-01
In extensive form noncooperative game theory, at each instant t, each agent i sets its state x(sub i) independently of the other agents, by sampling an associated distribution, q(sub i)(x(sub i)). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q(sub i) of the agents are most likely to be if the system were to have a particular expected value of the objective function G(x(sub 1),x(sub 2), ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G), and the convergence of the q(sub i) to the most likely set of distributions consistent with that value. After this the target value for E(sub q)(G) is lowered, and then the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x(sub i) all have a finite number of possible values and G does not extend to multiple instants in time. That work also is based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions. This extends the applicability of that work to include continuous spaces and Reinforcement Learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.
Effective discharge analysis of ecological processes in streams
Doyle, Martin W.; Stanley, Emily H.; Strayer, David L.; Jacobson, Robert B.; Schmidt, John C.
2005-01-01
Discharge is a master variable that controls many processes in stream ecosystems. However, there is uncertainty of which discharges are most important for driving particular ecological processes and thus how flow regime may influence entire stream ecosystems. Here the analytical method of effective discharge from fluvial geomorphology is used to analyze the interaction between frequency and magnitude of discharge events that drive organic matter transport, algal growth, nutrient retention, macroinvertebrate disturbance, and habitat availability. We quantify the ecological effective discharge using a synthesis of previously published studies and modeling from a range of study sites. An analytical expression is then developed for a particular case of ecological effective discharge and is used to explore how effective discharge varies within variable hydrologic regimes. Our results suggest that a range of discharges is important for different ecological processes in an individual stream. Discharges are not equally important; instead, effective discharge values exist that correspond to near modal flows and moderate floods for the variable sets examined. We suggest four types of ecological response to discharge variability: discharge as a transport mechanism, regulator of habitat, process modulator, and disturbance. Effective discharge analysis will perform well when there is a unique, essentially instantaneous relationship between discharge and an ecological process and poorly when effects of discharge are delayed or confounded by legacy effects. Despite some limitations the conceptual and analytical utility of the effective discharge analysis allows exploring general questions about how hydrologic variability influences various ecological processes in streams.
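A minimal sketch of an effective-discharge calculation of the kind analysed above (not the paper's site-specific analysis): the flow-frequency distribution is multiplied by a process rating curve, and the effective discharge is the flow magnitude at which the product peaks. The discharge record and the power-law rating are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
daily_q = rng.lognormal(mean=1.0, sigma=0.8, size=20 * 365)   # synthetic daily discharge

bins = np.linspace(daily_q.min(), daily_q.max(), 60)
freq, edges = np.histogram(daily_q, bins=bins, density=True)   # flow-frequency distribution
q_mid = 0.5 * (edges[:-1] + edges[1:])

def rating(q, a=0.05, b=1.8):
    """Hypothetical power-law process rating (e.g. sediment or organic matter transport)."""
    return a * q ** b

work = freq * rating(q_mid)                      # frequency x magnitude product per bin
q_eff = q_mid[np.argmax(work)]
print(f"effective discharge ~ {q_eff:.1f} (same units as Q)")
```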
ERIC Educational Resources Information Center
Bean, John P.
A theoretical model of turnover in work organizations was applied to the college student dropout process at a major midwestern land grant university. The 854 freshmen women subjects completed a questionnaire that included measures for 14 independent variables: grades, practical value, development, routinization, instrumental communication,…
NASA Astrophysics Data System (ADS)
Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Maserati, Marc Peter, Jr.; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia
2017-12-01
There is currently no objective, real-time and non-invasive method for evaluating the quality of mammalian embryos. In this study, we processed images of in vitro produced bovine blastocysts to obtain a deeper comprehension of the embryonic morphological aspects that are related to the standard evaluation of blastocysts. Information was extracted from 482 digital images of blastocysts. The resulting imaging data were individually evaluated by three experienced embryologists who graded their quality. To avoid evaluation bias, each image was related to the modal value of the evaluations. Automated image processing produced 36 quantitative variables for each image. The images, the modal and individual quality grades, and the variables extracted could potentially be used in the development of artificial intelligence techniques (e.g., evolutionary algorithms and artificial neural networks), multivariate modelling and the study of defined structures of the whole blastocyst.
Psychological variables implied in the therapeutic effect of ayahuasca: A contextual approach.
Franquesa, Alba; Sainz-Cort, Alberto; Gandy, Sam; Soler, Joaquim; Alcázar-Córcoles, Miguel Ángel; Bouso, José Carlos
2018-06-01
Ayahuasca is a psychedelic decoction originating from Amazonia. The ayahuasca-induced introspective experience has been shown to have potential benefits in the treatment of several pathologies, to protect mental health and to improve neuropsychological functions and creativity, and boost mindfulness. The underlying psychological processes related to the use of ayahuasca in a psychotherapeutic context are not yet well described in the scientific literature, but there is some evidence to suggest that psychological variables described in psychotherapies could be useful in explaining the therapeutic effects of the brew. In this study we explore the link between ayahuasca use and Decentering, Values and Self, comparing subjects without experience of ayahuasca (n = 41) with subjects with experience (n = 81). Results confirm that ayahuasca users scored higher than non-users in Decentering and Positive self, but not in Valued living, Life fulfillment, Self in social relations, Self in close relations and General self. Scores in Decentering were higher in the more experienced subjects (more than 15 occasions) than in those with less experience (less than 15 occasions). Our results show that psychological process variables may explain the outcomes in ayahuasca psychotherapy. The introduction of these variables is warranted in future ayahuasca therapeutic studies.
Brooks, Robin; Thorpe, Richard; Wilson, John
2004-11-11
A new mathematical treatment of alarms that considers them as multi-variable interactions between process variables has provided the first-ever method to calculate values for alarm limits. This has resulted in substantial reductions in false alarms, and hence in alarm annunciation rates, in field trials. It has also unified alarm management, process control and product quality control into a single mathematical framework, so that operations improvement, and hence economic benefits, are obtained at the same time as increased process safety. Additionally, an algorithm has been developed that advises what changes should be made to manipulable process variables to clear an alarm. The multi-variable Best Operating Zone at the heart of the method is derived from existing historical data using equation-free methods. It does not require a first-principles process model or an expensive series of process identification experiments. Integral to the method is a new-format Process Operator Display that uses only existing variables to fully describe the multi-variable operating space. This combination of features makes it an affordable and maintainable solution for small plants and single items of equipment as well as for the largest plants. In many cases, it also provides the justification for the investments about to be made or already made in process historian systems. Field Trials of the new geometric process control (GPC) method have been and are being conducted at IneosChlor and Mallinckrodt Chemicals, both in the UK; the method improves the quality of both process operations and product by providing Process Alarms and Alerts of much higher quality than ever before. The paper describes the methods used, including a simple visual method for Alarm Rationalisation that quickly delivers large sets of Consistent Alarm Limits, and the extension to full Alert Management, with highlights from the Field Trials to indicate the overall effectiveness of the method in practice.
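The multi-variable Best Operating Zone idea can be illustrated with a simple geometric membership test, sketched below: points recorded during good operation define a region in the joint variable space, and an alert is raised when the current operating point leaves it. The convex-hull region, the variable names and the numbers are stand-ins, not the GPC method's actual construction.

```python
import numpy as np
from scipy.spatial import Delaunay

# Historical data from periods of good operation (hypothetical process variables,
# e.g. feed rate, reactor temperature, pressure) define the operating region.
rng = np.random.default_rng(6)
good_history = rng.multivariate_normal(
    mean=[100.0, 180.0, 5.0],
    cov=[[25, 10, 0.5], [10, 16, 0.2], [0.5, 0.2, 0.04]],
    size=2000)

zone = Delaunay(good_history)        # convex-hull region built from history alone

def alert(point):
    """True when the operating point falls outside the multi-variable zone."""
    return zone.find_simplex(np.atleast_2d(point))[0] < 0

print(alert([101.0, 181.0, 5.1]))    # inside the zone  -> no alert
print(alert([120.0, 150.0, 6.5]))    # jointly unusual  -> alert, even though each value
                                     # alone may sit within its single-variable limits
```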
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
The design of control system of livestock feeding processing
NASA Astrophysics Data System (ADS)
Sihombing, Juna; Napitupulu, Humala L.; Hidayati, Juliza
2018-03-01
PT. XYZ is a company that produces animal feed. One type of animal feed produced is 105 ISA P. In carrying out its production process, PT. XYZ faced the problem of rejected feed from 2014 to June 2015 because the feed exceeded the quality standards of 13% moisture content and 3% ash content. Therefore, the researchers analyzed the relationships between the factors affecting quality and the extent of the defects using regression and correlation, and determined the optimum value for each processing step. The analysis found that the variables affecting product quality are mixing time, steam conditioning temperature and cooling time. The most dominant variable affecting the moisture content of the product is mixing time, with a correlation coefficient of 0.7959, and the most dominant variable affecting the ash content of the product during processing is also mixing time, with a correlation coefficient of 0.8541. The proposed process control design is to run the process with a mixing time of 235 seconds, a steam conditioning temperature of 87 °C and a cooling time of 192 seconds. The 105 ISA P product quality obtained using this design is 12.16% moisture content and 2.59% ash content.
Wang, Shihwe; Kim, Bryan S. K.
2011-01-01
Asian Americans drop out of mental health treatment at a high rate. This problem could be addressed by enhancing therapists’ multicultural competence and by examining clients’ cultural attitudes that may affect the counseling process. In the present study, we used a video analogue design with a sample of 113 Asian American college students to examine these possibilities. The result from a t test showed that the session containing therapist multicultural competencies received higher ratings than the session without therapist multicultural competence. In addition, correlational analyses showed that participant values acculturation was positively associated with participant ratings of counseling process, while the value of emotional self-control was negatively correlated. The results of a hierarchical multiple regression analysis did not support any interaction effects among the independent variables on counseling process. All of these findings could contribute to the field of multicultural competence research and have implications for therapist practices and training. PMID:21490875
Neurons in Dorsal Anterior Cingulate Cortex Signal Postdecisional Variables in a Foraging Task
Hayden, Benjamin Y.
2014-01-01
The dorsal anterior cingulate cortex (dACC) is a key hub of the brain's executive control system. Although a great deal is known about its role in outcome monitoring and behavioral adjustment, whether and how it contributes to the decision process remain unclear. Some theories suggest that dACC neurons track decision variables (e.g., option values) that feed into choice processes and is thus “predecisional.” Other theories suggest that dACC activity patterns differ qualitatively depending on the choice that is made and is thus “postdecisional.” To compare these hypotheses, we examined responses of 124 dACC neurons in a simple foraging task in which monkeys accepted or rejected offers of delayed rewards. In this task, options that vary in benefit (reward size) and cost (delay) appear for 1 s; accepting the option provides the cued reward after the cued delay. To get at dACC neurons' contributions to decisions, we focused on responses around the time of choice, several seconds before the reward and the end of the trial. We found that dACC neurons signal the foregone value of the rejected option, a postdecisional variable. Neurons also signal the profitability (that is, the relative value) of the offer, but even these signals are qualitatively different on accept and reject decisions, meaning that they are also postdecisional. These results suggest that dACC can be placed late in the decision process and also support models that give it a regulatory role in decision, rather than serving as a site of comparison. PMID:24403162
Inter-model variability in hydrological extremes projections for Amazonian sub-basins
NASA Astrophysics Data System (ADS)
Andres Rodriguez, Daniel; Garofolo, Lucas; Lázaro de Siqueira Júnior, José; Samprogna Mohor, Guilherme; Tomasella, Javier
2014-05-01
Irreducible uncertainties due to the limitations of knowledge, the chaotic nature of the climate system and the human decision-making process drive uncertainties in climate change projections. Such uncertainties affect impact studies, mainly when associated with extreme events, and hinder the decision-making processes aimed at mitigation and adaptation. However, these uncertainties also allow exploratory analyses of a system's vulnerability to different scenarios. Using projections from different climate models addresses these uncertainty issues by allowing multiple runs to explore a wide range of potential impacts and their implications for potential vulnerabilities. Statistical approaches for the analysis of extreme values are usually based on stationarity assumptions. However, nonstationarity is relevant at the time scales considered for extreme value analyses and can have great implications in dynamic complex systems, mainly under climate change transformations. Because of this, it is necessary to consider nonstationarity in the statistical distribution parameters. We carried out a study of the dispersion in hydrological extremes projections using climate change projections from several climate models to feed the Distributed Hydrological Model of the National Institute for Space Research, MHD-INPE, applied to Amazonian sub-basins. This model is a large-scale hydrological model that uses a TopModel approach to solve runoff generation processes at the grid-cell scale. The MHD-INPE model was calibrated for 1970-1990 using observed meteorological data and comparing observed and simulated discharges by means of several performance coefficients. Hydrological model integrations were performed for the present historical period (1970-1990) and for the future period (2010-2100). Because climate models simulate the variability of the climate system in statistical terms rather than reproducing the historical behavior of climate variables, the performance of the model runs during the historical period, when fed with climate model data, was tested using descriptors of the Flow Duration Curves. The analysis of projected extreme values was carried out considering the nonstationarity of the GEV distribution parameters and compared with extreme events in the present period. Results show inter-model variability as a broad dispersion of projected extreme values. Such dispersion implies different degrees of socio-economic impacts associated with extreme hydrological events. Although no single optimum result exists, this variability allows the analysis of adaptation strategies and their potential vulnerabilities.
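A minimal sketch of a nonstationary GEV fit of the kind referred to above: the location parameter is allowed to drift linearly in time while scale and shape are held constant, and the parameters are estimated by numerically maximising the likelihood. The annual-maximum series below is synthetic, not output from MHD-INPE.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
years = np.arange(2010, 2100)
t = (years - years[0]) / 10.0                          # time in decades
true_loc = 1000 + 40 * t                               # upward trend in the maxima
ann_max = stats.genextreme.rvs(c=-0.1, loc=true_loc, scale=150, random_state=rng)

def neg_loglik(params):
    """Negative log-likelihood of a GEV with time-varying location mu(t) = mu0 + mu1*t."""
    mu0, mu1, log_sigma, xi = params
    mu = mu0 + mu1 * t
    return -np.sum(stats.genextreme.logpdf(ann_max, c=-xi, loc=mu,
                                           scale=np.exp(log_sigma)))

res = optimize.minimize(neg_loglik,
                        x0=[ann_max.mean(), 0.0, np.log(ann_max.std()), 0.1],
                        method="Nelder-Mead")
mu0, mu1, log_sigma, xi = res.x
print(f"location trend {mu1:.1f} per decade, scale {np.exp(log_sigma):.0f}, shape {xi:.2f}")
```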
The need to consider temporal variability when modelling exchange at the sediment-water interface
Rosenberry, Donald O.
2011-01-01
Most conceptual or numerical models of flows and processes at the sediment-water interface assume steady-state conditions and do not consider temporal variability. The steady-state assumption is required because temporal variability, if quantified at all, is usually determined on a seasonal or inter-annual scale. In order to design models that can incorporate finer-scale temporal resolution we first need to measure variability at a finer scale. Automated seepage meters that can measure flow across the sediment-water interface with temporal resolution of seconds to minutes were used in a variety of settings to characterize seepage response to rainfall, wind, and evapotranspiration. Results indicate that instantaneous seepage fluxes can be much larger than values commonly reported in the literature, although seepage does not always respond to hydrological processes. Additional study is needed to understand the reasons for the wide range and types of responses to these hydrologic and atmospheric events.
In search of a theoretical structure for understanding motivation in schizophrenia.
Medalia, Alice; Brekke, John
2010-09-01
This themed issue considers different ways to conceptualize the motivational impairment that is a core negative symptom of schizophrenia. Motivational impairment has been linked to poor functional outcome, thus it is important to understand the nature and causes of motivational impairment in order to develop better treatment strategies to enhance motivation and engage patients in the process of recovery. Motivation refers to the processes whereby goal-directed activities are instigated and sustained and can be thought of as the product of a complex interaction of physiological processes and social contextual variables. In this issue, the physiological processes of motivation are the focus of Barch and Dowd, who highlight the role of prefrontal and subcortical mesolimbic dopamine systems in incentive-based learning and the difficulties people with schizophrenia have using internal representations of relevant experiences and goals to drive the behavior that should allow them to obtain desired outcomes. The articles in this issue by Choi et al., Nakagami et al., and Silverstein, focus on social contextual or environmental variables that can shape behavior and motivation. Together, these articles highlight the impact of external cues and goal properties on the expectations and values attached to goal outcomes. Expectancy-value and Self-Determination theories provide an overarching framework to accommodate the perspectives and data provided in all these articles. In the following introduction we show how the articles in this themed issue both support the role of expectancies and value in motivation in schizophrenia and elucidate possible deficiencies in the way expectations and value get assigned.
Code of Federal Regulations, 2012 CFR
2012-04-01
... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...
Code of Federal Regulations, 2010 CFR
2010-04-01
... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...
Code of Federal Regulations, 2014 CFR
2014-04-01
... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...
Code of Federal Regulations, 2013 CFR
2013-04-01
... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...
Code of Federal Regulations, 2011 CFR
2011-04-01
... contracts, including, but not limited to, premium rate structure and premium processing, insurance... discrete cash values that may vary in amount in accordance with the investment experience of the separate...
Stochastic investigation of temperature process for climatic variability identification
NASA Astrophysics Data System (ADS)
Lerias, Eleutherios; Kalamioti, Anna; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris
2016-04-01
The temperature process is considered as the most characteristic hydrometeorological process and has been thoroughly examined in the climate-change framework. We use a dataset comprising hourly temperature and dew point records to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
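As a rough illustration of the climacogram diagnostic mentioned in this abstract, the sketch below computes the variance of the scale-averaged process over a few aggregation scales for a synthetic hourly temperature series; the series, the chosen scales and all parameter values are invented for illustration and are not the station records used in the study.

```python
import numpy as np

def climacogram(x, scales):
    """Variance of the scale-averaged process for each averaging scale k.

    x      : 1-D array of equally spaced observations (e.g. hourly temperature)
    scales : iterable of integer aggregation scales (in time steps)
    """
    x = np.asarray(x, dtype=float)
    gamma = {}
    for k in scales:
        n = (len(x) // k) * k            # drop the incomplete tail block
        if n < 2 * k:
            continue                     # need at least two blocks to estimate a variance
        block_means = x[:n].reshape(-1, k).mean(axis=1)
        gamma[k] = block_means.var(ddof=1)
    return gamma

# Illustrative use with synthetic "hourly temperature" data
rng = np.random.default_rng(0)
hours = np.arange(8 * 365 * 24)          # eight years of hourly values
temp = 10 + 8 * np.sin(2 * np.pi * hours / (365 * 24)) + rng.normal(0, 2, hours.size)
for k, v in climacogram(temp, scales=[1, 24, 24 * 30, 24 * 365]).items():
    print(f"scale {k:>6} h : variance {v:.3f}")
```

A slow decay of this variance with increasing scale is the signature of long-term persistence that the climacogram is typically used to detect.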
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges
Wang, Xu; Sun, Baitao
2014-01-01
The load combination of earthquakes and heavy trucks is an important issue in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are considered as random processes with their respective characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which considers load duration and occurrence probability, describes well the conversion of random processes into random variables and their combination, but it imposes strict constraints on time interval selection to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which appears more rational, but the results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability, which converts random processes to random variables relatively easily and accounts for the nonmaximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
Hydraulics Graphics Package. Users Manual
1985-11-01
[Sample terminal session: repeated "ENTER: VARIABLE/SEPARATOR/VALUE OR STRING" prompts with example entries such as GLBL, TETON DAM FAILURE; SLOC, DISCHARGE HISTOGRAM; YLBL, FLOW IN 1000 CFS; and SECNO, 0; followed by a garbled plot of the Teton Dam failure discharge hydrograph.]
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering, as process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is also an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements in the automotive industry have demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed process variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: the first training data set contained 25 conforming coils and the second data set 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications it is binary. The results show that the logistic generalized linear models do provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
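As a rough sketch of the type of model compared in this work, the example below fits a binary logistic regression that classifies coils as conforming or nonconforming from a few process measurements; the variable names, units and data are invented placeholders, not the study's measurements or its preferred model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 48                                   # 25 conforming + 23 nonconforming coils (illustrative)
velocity = rng.normal(1.5, 0.2, n)       # strip velocity, assumed units m/s
bath_temps = rng.normal(460, 5, (n, 4))  # four zinc-bath temperatures, assumed deg C
bath_level = rng.normal(0.8, 0.05, n)    # bath level, assumed
X = np.column_stack([velocity, bath_temps, bath_level])
y = np.r_[np.ones(25), np.zeros(23)]     # 1 = conforming, 0 = nonconforming

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)                 # cross-validated accuracy
model.fit(X, y)
print("mean CV accuracy:", scores.mean().round(2))
print("predicted P(conforming) for first coil:", model.predict_proba(X[:1])[0, 1].round(2))
```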
First-Passage-Time Distribution for Variable-Diffusion Processes
NASA Astrophysics Data System (ADS)
Barney, Liberty; Gunaratne, Gemunu H.
2017-05-01
First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. First-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al. in, Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
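A minimal Monte Carlo sketch of how a first-passage-time distribution can be estimated for a diffusion whose volatility grows with displacement from the opening price is given below; the drift-free dynamics, the volatility form and all parameter values are illustrative assumptions and are not the two-stage model of Hua et al.

```python
import numpy as np

def first_passage_times(n_paths=20_000, n_steps=390, dt=1.0,
                        barrier=1.0, sigma0=0.05, slope=0.10, seed=0):
    """Simulate dX = sigma(X) dW with sigma(X) = sigma0 + slope*|X| and record
    the first step at which each path crosses the barrier (NaN if never)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    hit = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, n_steps + 1):
        sigma = sigma0 + slope * np.abs(x[alive])
        x[alive] += sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (np.abs(x) >= barrier)
        hit[crossed] = step
        alive &= ~crossed
    return hit

fpt = first_passage_times()
valid = fpt[~np.isnan(fpt)]
print("fraction of paths that hit the barrier:", round(valid.size / fpt.size, 2))
print("median first-passage time (steps):", int(np.median(valid)))
```

A histogram of the resulting passage times (rather than just the median) is what would be compared against the empirically observed intra-day double-peaked shape.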
NASA Astrophysics Data System (ADS)
Dayem, Katherine E.; Molnar, Peter; Battisti, David S.; Roe, Gerard H.
2010-06-01
Variability in oxygen isotope ratios collected from speleothems in Chinese caves is often interpreted as a proxy for variability of precipitation, summer precipitation, seasonality of precipitation, and/or the proportion of 18O to 16O of annual total rainfall that is related to a strengthening or weakening of the East Asian monsoon and, in some cases, to the Indian monsoon. We use modern reanalysis and station data to test whether precipitation and temperature variability over China can be related to changes in climate in these distant locales. We find that annual and rainy season precipitation totals in each of central China, south China, and east India have correlation length scales of ∼ 500 km, shorter than the distance between many speleothem records that share similar long-term time variations in δ18O values. Thus the short distances of correlation do not support, though by themselves cannot refute, the idea that apparently synchronous variations in δ18O values at widely spaced (> 500 km) caves in China are due to variations in annual precipitation amounts. We also evaluate connections between climate variables and δ18O values using available instrumental measurements of δ18O values in precipitation. These data, from stations in the Global Network of Isotopes in Precipitation (GNIP), show that monthly δ18O values generally do not correlate well with either local precipitation amount or local temperature, and the degree to which monthly δ18O values do correlate with them varies from station to station. For the few locations that do show significant correlations between δ18O values and precipitation amount, we estimate the differences in precipitation amount that would be required to account for peak-to-peak differences in δ18O values in the speleothems from Hulu and Dongge caves, assuming that δ18O scales with the monthly amount of precipitation or with seasonal differences in precipitation. Insofar as the present-day relationship between δ18O values and monthly precipitation amounts can be applied to past conditions, differences of at least 50% in mean annual precipitation would be required to explain the δ18O variations on orbital time scales, which are implausibly large and inconsistent with published GCM results. Similarly, plausible amplitudes of seasonal cycles in amounts or in seasonal variations in δ18O values can account for less than half of the 4-5‰ difference between glacial and interglacial δ18O values from speleothems in China. If seasonal cycles in precipitation account for the amplitudes of δ18O values on paleoclimate timescales, they might do so by extending or contracting the durations of seasons (a frequency modulation of the annual cycle), but not by simply varying the amplitudes of the monthly rainfall amounts or monthly average δ18O values (amplitude modulation). Allowing that several processes can affect seasonal variability in isotopic content, we explore the possibility that one or more of the following processes contribute to variations in δ18O values in Chinese cave speleothems: different source regions of the precipitation, which bring different values of δ18O in vapor; different pathways between the moisture source and the paleorecord site along which exchange of 18O between vapor, surface water, and condensate might differ; a different mix of processes involving condensation and evaporation within the atmosphere; or different types of precipitation. 
Each may account for part of the range of δ18O values revealed by speleothems, and each might contribute to seasonal differences between past and present that do not scale with monthly or even seasonal precipitation amounts.
Richard, Angélique; Boullu, Loïs; Herbach, Ulysse; Bonnafoux, Arnaud; Morin, Valérie; Vallin, Elodie; Guillemin, Anissa; Papili Gao, Nan; Gunawan, Rudiyanto; Cosette, Jérémie; Arnaud, Ophélie; Kupiec, Jean-Jacques; Espinasse, Thibault; Gonin-Giraud, Sandrine; Gandrillon, Olivier
2016-12-01
In some recent studies, a view emerged that stochastic dynamics governing the switching of cells from one differentiation state to another could be characterized by a peak in gene expression variability at the point of fate commitment. We have tested this hypothesis at the single-cell level by analyzing primary chicken erythroid progenitors through their differentiation process and measuring the expression of selected genes at six sequential time-points after induction of differentiation. In contrast to population-based expression data, single-cell gene expression data revealed a high cell-to-cell variability, which was masked by averaging. We were able to show that the correlation network was a very dynamical entity and that a subgroup of genes tend to follow the predictions from the dynamical network biomarker (DNB) theory. In addition, we also identified a small group of functionally related genes encoding proteins involved in sterol synthesis that could act as the initial drivers of the differentiation. In order to assess quantitatively the cell-to-cell variability in gene expression and its evolution in time, we used Shannon entropy as a measure of the heterogeneity. Entropy values showed a significant increase in the first 8 h of the differentiation process, reaching a peak between 8 and 24 h, before decreasing to significantly lower values. Moreover, we observed that the previous point of maximum entropy precedes two paramount key points: an irreversible commitment to differentiation between 24 and 48 h followed by a significant increase in cell size variability at 48 h. In conclusion, when analyzed at the single cell level, the differentiation process looks very different from its classical population average view. New observables (like entropy) can be computed, the behavior of which is fully compatible with the idea that differentiation is not a "simple" program that all cells execute identically but results from the dynamical behavior of the underlying molecular network.
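As a rough sketch of the entropy measure used here, the example below bins single-cell expression values at each time point and computes the Shannon entropy of the resulting histogram; the time points, cell counts and expression distributions are invented for illustration, not the measured data.

```python
import numpy as np

def shannon_entropy(values, n_bins=10):
    """Shannon entropy (bits) of a set of single-cell expression values,
    estimated from a histogram with n_bins equal-width bins."""
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Illustrative: cell-to-cell variability that rises and then falls over the time course
rng = np.random.default_rng(2)
spread_by_time = {0: 1.0, 8: 2.5, 24: 3.0, 48: 1.5, 72: 0.8}    # hours -> spread (assumed)
for t, spread in spread_by_time.items():
    expr = rng.lognormal(mean=2.0, sigma=spread, size=90)        # 90 cells per time point (assumed)
    print(f"t = {t:>2} h  entropy = {shannon_entropy(np.log10(expr + 1)):.2f} bits")
```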
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
2008-09-01
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
Spatiotemporal Variability of Hillslope Soil Moisture Across Steep, Highly Dissected Topography
NASA Astrophysics Data System (ADS)
Jarecke, K. M.; Wondzell, S. M.; Bladon, K. D.
2016-12-01
Hillslope ecohydrological processes, including subsurface water flow and plant water uptake, are strongly influenced by soil moisture. However, the factors controlling spatial and temporal variability of soil moisture in steep, mountainous terrain are poorly understood. We asked: How do topography and soils interact to control the spatial and temporal variability of soil moisture in steep, Douglas-fir dominated hillslopes in the western Cascades? We will present a preliminary analysis of bimonthly soil moisture variability from July-November 2016 at 0-30 and 0-60 cm depth across spatially extensive convergent and divergent topographic positions in Watershed 1 of the H.J. Andrews Experimental Forest in central Oregon. Soil moisture monitoring locations were selected following a 5 m LIDAR analysis of topographic position, aspect, and slope. Topographic position index (TPI) was calculated as the difference in elevation to the mean elevation within a 30 m radius. Convergent (negative TPI values) and divergent (positive TPI values) monitoring locations were established along northwest to northeast-facing aspects and within 25-55 degree slopes. We hypothesized that topographic position (convergent vs. divergent), as well as soil physical properties (e.g., texture, bulk density), control variation in hillslope soil moisture at the sub-watershed scale. In addition, we expected the relative importance of hillslope topography to the spatial variability in soil moisture to differ seasonally. By comparing the spatiotemporal variability of hillslope soil moisture across topographic positions, our research provides a foundation for additional understanding of subsurface flow processes and plant-available soil-water in forests with steep, highly dissected terrain.
NASA Astrophysics Data System (ADS)
Cristofanelli, Paolo; Putero, Davide; Bonasoni, Paolo; Busetto, Maurizio; Calzolari, Francescopiero; Camporeale, Giuseppe; Grigioni, Paolo; Lupi, Angelo; Petkov, Boyan; Traversi, Rita; Udisti, Roberto; Vitale, Vito
2018-03-01
This work focuses on the near-surface O3 variability over the eastern Antarctic Plateau. In particular, eight years (2006-2013) of continuous observations at the WMO/GAW contributing station "Concordia" (Dome C-DMC: 75°06‧S, 123°20‧E, 3280 m) are presented, in the framework of the Italian Antarctic Research Programme (PNRA). First, the characterization of seasonal and diurnal O3 variability at DMC is provided. Then, for the period of highest data coverage (2008-2013), we investigated the role of specific atmospheric processes in affecting near-surface summer O3 variability, when O3 enhancement events (OEEs) are systematically observed at DMC (average monthly frequency peaking up to 60% in December). As deduced by a statistical selection methodology, these OEEs are affected by a significant interannual variability, both in their average O3 values and in their frequency. To explain part of this variability, we analyzed OEEs as a function of specific atmospheric variables and processes: (i) total column of O3 (TCO) and UV-A irradiance, (ii) long-range transport of air masses over the Antarctic Plateau (by Lagrangian back-trajectory analysis - LAGRANTO), (iii) occurrence of "deep" stratospheric intrusion events (by using the Lagrangian tool STEFLUX). The overall near-surface O3 variability at DMC is controlled by a day-to-day pattern, which strongly points towards a dominating influence of processes occurring at "synoptic" scales rather than "local" processes. Even though previous studies suggested an inverse relationship between OEEs and TCO, we found a slight tendency for the annual frequency of OEEs to be higher when TCO values are higher over DMC. The annual occurrence of OEEs at DMC seems related to the total time spent by air masses over the Antarctic plateau before their arrival at DMC, suggesting the accumulation of photochemically-produced O3 during the transport, rather than a more efficient local production. Moreover, the identification of recent (i.e., 4-day old) stratospheric intrusion events by STEFLUX suggested only a minor influence (up to 3% of the period, in November) of "deep" events on the variability of near-surface summer O3 at DMC.
NASA Astrophysics Data System (ADS)
Moll, Andreas; Stegert, Christoph
2007-01-01
This paper outlines an approach to couple a structured zooplankton population model, with state variables for eggs, nauplii, two copepodite stages and adults adapted to Pseudocalanus elongatus, into the complex marine ecosystem model ECOHAM2, which has 13 state variables resolving the carbon and nitrogen cycles. Different temperature and food scenarios derived from laboratory culture studies were examined to improve the process parameterisation for copepod stage-dependent development processes. To study annual cycles under realistic weather and hydrographic conditions, the coupled ecosystem-zooplankton model is applied to a water column in the northern North Sea. The main ecosystem state variables were validated against observed monthly mean values. Then vertical profiles of selected state variables were compared to the physical forcing to study differences between treating zooplankton as one biomass state variable or partitioning it into five population state variables. Simulated generation times are more affected by temperature than by food conditions, except during the spring phytoplankton bloom. Up to six generations within the annual cycle can be discerned in the simulation.
ERIC Educational Resources Information Center
von Kirchenheim, Clement; Richardson, Warnie
2005-01-01
In this study the adjustment process in a designated group of expatriates, (teachers), who have severed ties with their home country and employer is investigated. Based on existing literature, the value of self-efficacy and flexibility on the adjustment process was explored. It was hypothesised that adjustment would result in reduced turnover…
Code of Federal Regulations, 2011 CFR
2011-07-01
... percent reduction in the long-term average daily BOD5 load of the raw (untreated) process wastewater, multiplied by a variability factor of 3.0. (1) The long-term average daily BOD5 load of the raw process... concentration value reflecting a reduction in the long-term average daily COD load in the raw (untreated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... percent reduction in the long-term average daily BOD5 load of the raw (untreated) process wastewater, multiplied by a variability factor of 3.0. (1) The long-term average daily BOD5 load of the raw process... concentration value reflecting a reduction in the long-term average daily COD load in the raw (untreated...
Potential interactions among linguistic, autonomic, and motor factors in speech.
Kleinow, Jennifer; Smith, Anne
2006-05-01
Though anecdotal reports link certain speech disorders to increases in autonomic arousal, few studies have described the relationship between arousal and speech processes. Additionally, it is unclear how increases in arousal may interact with other cognitive-linguistic processes to affect speech motor control. In this experiment we examine potential interactions between autonomic arousal, linguistic processing, and speech motor coordination in adults and children. Autonomic responses (heart rate, finger pulse volume, tonic skin conductance, and phasic skin conductance) were recorded simultaneously with upper and lower lip movements during speech. The lip aperture variability (LA variability index) across multiple repetitions of sentences that varied in length and syntactic complexity was calculated under low- and high-arousal conditions. High arousal conditions were elicited by performance of the Stroop color word task. Children had significantly higher lip aperture variability index values across all speaking tasks, indicating more variable speech motor coordination. Increases in syntactic complexity and utterance length were associated with increases in speech motor coordination variability in both speaker groups. There was a significant effect of Stroop task, which produced increases in autonomic arousal and increased speech motor variability in both adults and children. These results provide novel evidence that high arousal levels can influence speech motor control in both adults and children. (c) 2006 Wiley Periodicals, Inc.
Stochastic investigation of precipitation process for climatic variability identification
NASA Astrophysics Data System (ADS)
Sotiriadou, Alexia; Petsiou, Amalia; Feloni, Elisavet; Kastis, Paris; Iliopoulou, Theano; Markonis, Yannis; Tyralis, Hristos; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris
2016-04-01
The precipitation process is important not only to hydrometeorology but also to renewable energy resources management. We use a dataset consisting of daily and hourly records around the globe to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Wang, Hongguang
2018-01-01
Annual power load forecasting is not only the premise of formulating reasonable macro power planning, but also an important guarantee for the safe and economical operation of the power system. In view of the characteristics of annual power load forecasting, the grey model GM(1,1) is widely applied. Introducing a buffer operator into GM(1,1) to pre-process the historical annual power load data is one approach to improving the forecasting accuracy. To solve the problem of the nonadjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM(1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated, and a GRA-IGSA integration algorithm is constructed that aims to maximize the grey relational grade between the simulated and actual value sequences. Owing to the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
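For orientation, the sketch below implements the plain GM(1,1) model that the proposed method builds on: accumulate the series, fit the grey differential equation by least squares, and restore forecasts by inverse accumulation. The buffer operator, background value optimization and GRA-IGSA tuning described in the abstract are not reproduced, and the load figures are invented.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Plain GM(1,1): fit on series x0 and return fitted values plus n_ahead forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values (fixed 0.5 weight)
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # develop coefficient a, grey input b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response of the whitened equation
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]           # restore by inverse accumulation
    return x0_hat[:len(x0)], x0_hat[len(x0):]

# Illustrative annual load series (GWh); values assumed
load = [1210, 1280, 1362, 1455, 1549, 1652, 1760]
fitted, forecast = gm11_forecast(load, n_ahead=3)
print("next three years:", np.round(forecast, 1))
```

The buffer operator and background-value weight in the proposed variant act on the cumulative series and on the fixed 0.5 weight above, which is exactly where the plain model is least flexible.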
Kennedy, R R; Merry, A F
2011-09-01
Anaesthesia involves processing large amounts of information over time. One task of the anaesthetist is to detect substantive changes in physiological variables promptly and reliably. It has been previously demonstrated that a graphical trend display of historical data leads to more rapid detection of such changes. We examined the effect of a graphical indication of the magnitude of Trigg's Tracking Variable, a simple statistically based trend detection algorithm, on the accuracy and latency of the detection of changes in a micro-simulation. Ten anaesthetists each viewed 20 simulations with four variables displayed as the current value with a simple graphical trend display. Values for these variables were generated by a computer model, and updated every second; after a period of stability a change occurred to a new random value at least 10 units from baseline. In 50% of the simulations an indication of the rate of change was given by a five-level graphical representation of the value of Trigg's Tracking Variable. Participants were asked to indicate when they thought a change was occurring. Changes were detected 10.9% faster with the trend indicator present (mean 13.1 [SD 3.1] cycles vs 14.6 [SD 3.4] cycles, 95% confidence interval 0.4 to 2.5 cycles, P = 0.013). There was no difference in accuracy of detection (median with trend detection 97% [interquartile range 95 to 100%], without trend detection 100% [98 to 100%], P = 0.8). We conclude that simple statistical trend detection may speed detection of changes during routine anaesthesia, even when a graphical trend display is present.
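A minimal sketch of Trigg's tracking signal, the smoothed forecast error divided by the smoothed absolute error, is shown below; the smoothing constant, the five-level banding and the simulated signal are assumptions for illustration rather than the study's implementation.

```python
import numpy as np

def triggs_tracking_signal(values, alpha=0.2):
    """Trigg's tracking signal for a stream of monitored values.

    A simple exponentially smoothed forecast supplies the one-step error;
    the signal is smoothed_error / smoothed_|error|, lies in [-1, 1],
    and approaches +/-1 during a sustained change.
    """
    forecast = values[0]
    sm_err, sm_abs_err = 0.0, 1e-9
    signals = []
    for v in values[1:]:
        err = v - forecast
        sm_err = alpha * err + (1 - alpha) * sm_err
        sm_abs_err = alpha * abs(err) + (1 - alpha) * sm_abs_err
        signals.append(sm_err / sm_abs_err)
        forecast = alpha * v + (1 - alpha) * forecast      # update the smoothed forecast
    return np.array(signals)

# Illustrative "heart rate" stream: stable, then a step change at t = 60 s
rng = np.random.default_rng(3)
hr = np.r_[rng.normal(70, 1.5, 60), rng.normal(85, 1.5, 60)]
T = triggs_tracking_signal(hr)
level = np.digitize(np.abs(T), [0.2, 0.4, 0.6, 0.8])       # map |T| to a five-level indicator (assumed bands)
print("max |T| after the change:", np.abs(T[60:]).max().round(2))
print("indicator level at the end:", level[-1])
```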
Statistical evaluation of stability data: criteria for change-over-time and data variability.
Bar, Raphael
2003-01-01
In a recently issued ICH Q1E guidance on evaluation of stability data of drug substances and products, the need to perform a statistical extrapolation of a shelf-life of a drug product or a retest period for a drug substance is based heavily on whether data exhibit a change-over-time and/or variability. However, this document suggests neither measures nor acceptance criteria of these two parameters. This paper demonstrates a useful application of simple statistical parameters for determining whether sets of stability data from either accelerated or long-term storage programs exhibit a change-over-time and/or variability. These parameters are all derived from a simple linear regression analysis first performed on the stability data. The p-value of the slope of the regression line is taken as a measure for change-over-time, and a value of 0.25 is suggested as a limit to insignificant change of the quantitative stability attributes monitored. The minimal process capability index, Cpk, calculated from the standard deviation of the regression line, is suggested as a measure for variability with a value of 2.5 as a limit for an insignificant variability. The usefulness of the above two parameters, p-value and Cpk, was demonstrated on stability data of a refrigerated drug product and on pooled data of three batches of a drug substance. In both cases, the determined parameters allowed characterization of the data in terms of change-over-time and variability. Consequently, complete evaluation of the stability data could be pursued according to the ICH guidance. It is believed that the application of the above two parameters with their acceptance criteria will allow a more unified evaluation of stability data.
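As an illustration of the two suggested criteria, the sketch below computes the p-value of the regression slope and a Cpk based on the residual standard deviation of the regression line for an invented stability series; the data, specification limits and the exact Cpk formulation are assumptions, one plausible reading of the approach rather than the author's own calculation.

```python
import numpy as np
from scipy import stats

months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
assay = np.array([100.1, 99.8, 99.9, 99.5, 99.7, 99.4, 99.3])   # % label claim, illustrative

fit = stats.linregress(months, assay)
p_slope = fit.pvalue                                    # change-over-time criterion: p > 0.25 -> negligible trend

residuals = assay - (fit.intercept + fit.slope * months)
residual_sd = np.sqrt(np.sum(residuals ** 2) / (len(months) - 2))

lsl, usl = 95.0, 105.0                                  # assumed specification limits
mean = assay.mean()
cpk = min(usl - mean, mean - lsl) / (3 * residual_sd)   # variability criterion: Cpk > 2.5 -> low variability

print(f"slope p-value = {p_slope:.3f}, Cpk = {cpk:.1f}")
```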
Funkenbusch, Paul D; Rotella, Mario; Ercoli, Carlo
2015-04-01
Laboratory studies of tooth preparation are often performed under a limited range of conditions involving single values for all variables other than the 1 being tested. In contrast, in clinical settings not all variables can be tightly controlled. For example, a new dental rotary cutting instrument may be tested in the laboratory by making a specific cut with a fixed force, but in clinical practice, the instrument must make different cuts with individual dentists applying a range of different forces. Therefore, the broad applicability of laboratory results to diverse clinical conditions is uncertain and the comparison of effects across studies is difficult. The purpose of this study was to examine the effect of 9 process variables on dental cutting in a single experiment, allowing each variable to be robustly tested over a range of values for the other 8 and permitting a direct comparison of the relative importance of each on the cutting process. The effects of 9 key process variables on the efficiency of a simulated dental cutting operation were measured. A fractional factorial experiment was conducted by using a computer-controlled, dedicated testing apparatus to simulate dental cutting procedures and Macor blocks as the cutting substrate. Analysis of Variance (ANOVA) was used to judge the statistical significance (α=.05). Five variables consistently produced large, statistically significant effects (target applied load, cut length, starting rpm, diamond grit size, and cut type), while 4 variables produced relatively small, statistically insignificant effects (number of cooling ports, rotary cutting instrument diameter, disposability, and water flow rate). The control exerted by the dentist, simulated in this study by targeting a specific level of applied force, was the single most important factor affecting cutting efficiency. Cutting efficiency was also significantly affected by factors simulating patient/clinical circumstances as well as hardware choices. These results highlight the importance of local clinical conditions (procedure, dentist) in understanding dental cutting procedures and in designing adequate experimental methodologies for future studies. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Musgrove, M.; Stern, L. A.; Banner, J. L.
2010-06-01
A two and a half year study of two adjacent watersheds at the Honey Creek State Natural Area (HCSNA) in central Texas was undertaken to evaluate spatial and temporal variations in springwater geochemistry, geochemical evolution processes, and potential effects of brush control on karst watershed hydrology. The watersheds are geologically and geomorphologically similar, and each has springs discharging into Honey Creek, a tributary to the Guadalupe River. Springwater geochemistry is considered in a regional context of aquifer components including soil water, cave dripwater, springwater, and phreatic groundwater. Isotopic and trace element variability allows us to identify both vadose and phreatic groundwater contributions to surface water in Honey Creek. Spatial and temporal geochemical data for six springs reveal systematic differences between the two watersheds. Springwater Sr isotope values lie between values for the limestone bedrock and soils at HCSNA, reflecting a balance between these two primary sources of Sr. Sr isotope values for springs within each watershed are consistent with differences between soil compositions. At some of the springs, consistent temporal variability in springwater geochemistry (Sr isotopes, Mg/Ca, and Sr/Ca values) appears to reflect changes in climatic and hydrologic parameters (rainfall/recharge) that affect watershed processes. Springwater geochemistry was unaffected by brush removal at the scale of the HCSNA study. Results of this study build on previous regional studies to provide insight into watershed hydrology and regional hydrologic processes, including connections between surface water, vadose groundwater, and phreatic groundwater.
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, which usually pertain in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
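A minimal worked sketch of the subjective expected utility comparison described by this model is given below; the options, probabilities and values are illustrative placeholders supplied by a hypothetical doctor-patient pair, not clinical figures.

```python
# Subjective expected utility: SEU(option) = sum over outcomes of p(outcome) * value(outcome).
# Probabilities and values below are illustrative placeholders, not clinical data.
options = {
    "treat":    [(0.70, 0.9), (0.25, 0.4), (0.05, 0.0)],   # (probability, value) per outcome
    "watchful": [(0.50, 0.8), (0.40, 0.5), (0.10, 0.1)],
}
for name, outcomes in options.items():
    seu = sum(p * v for p, v in outcomes)
    print(f"{name:9s} SEU = {seu:.2f}")
```

Revising either the probabilities (typically the doctor's estimates) or the values (typically the patient's) can change which option ranks highest, which is what makes the model a natural vehicle for shared decision making.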
Women's role in adapting to climate change and variability
NASA Astrophysics Data System (ADS)
Carvajal-Escobar, Y.; Quintero-Angel, M.; García-Vargas, M.
2008-04-01
Given that women are engaged in more climate-related change activities than what is recognized and valued in the community, this article highlights their important role in the adaptation and search for safer communities, which leads them to understand better the causes and consequences of changes in climatic conditions. It is concluded that women have important knowledge and skills for orienting the adaptation processes, a product of their roles in society (productive, reproductive and community); and the importance of gender equity in these processes is recognized. The relationship among climate change, climate variability and the accomplishment of the Millennium Development Goals is considered.
NASA Astrophysics Data System (ADS)
Collins, P. C.; Koduri, S.; Dixit, V.; Fraser, H. L.
2018-03-01
The fracture toughness of a material depends upon the material's composition and microstructure, as well as other material properties operating at the continuum level. The interrelationships between these variables are complex, and thus difficult to interpret, especially in multi-component, multi-phase ductile engineering alloys such as α/β-processed Ti-6Al-4V (nominal composition, wt pct). Neural networks have been used to elucidate how variables such as composition and microstructure influence the fracture toughness directly (i.e., via a crack initiation or propagation mechanism), independently of those same variables' influence on the yield strength and plasticity of the material. The variables included in the models and analysis are (i) alloy composition, specifically Al, V, O, and Fe; (ii) material microstructure, including phase fractions and average sizes of key microstructural features; (iii) the yield strength and reduction in area obtained from uniaxial tensile tests; and (iv) an assessment of the degree to which plane strain conditions were satisfied, included as a factor related to the plane strain thickness. Once the networks were trained, virtual experiments were conducted that permit the determination of each variable's functional dependency on the resulting fracture toughness. Given that the database includes both K1C and KQ values, as well as the in-plane component of the stress state of the crack tip, it is possible to quantitatively assess the effect of sample thickness on KQ and the degree to which the KQ and K1C values may vary. These interpretations, drawn by comparing multiple neural networks, have a significant impact on the general understanding of how the microstructure influences the fracture toughness in ductile materials, as well as on the ability to predict the fracture toughness of α/β-processed Ti-6Al-4V.
Hydration level is an internal variable for computing motivation to obtain water rewards in monkeys.
Minamimoto, Takafumi; Yamada, Hiroshi; Hori, Yukiko; Suhara, Tetsuya
2012-05-01
In the process of motivation to engage in a behavior, valuation of the expected outcome is comprised of not only external variables (i.e., incentives) but also internal variables (i.e., drive). However, the exact neural mechanism that integrates these variables for the computation of motivational value remains unclear. Besides, the signal of physiological needs, which serves as the primary internal variable for this computation, remains to be identified. Concerning fluid rewards, the osmolality level, one of the physiological indices for the level of thirst, may be an internal variable for valuation, since an increase in the osmolality level induces drinking behavior. Here, to examine the relationship between osmolality and the motivational value of a water reward, we repeatedly measured the blood osmolality level, while 2 monkeys continuously performed an instrumental task until they spontaneously stopped. We found that, as the total amount of water earned increased, the osmolality level progressively decreased (i.e., the hydration level increased) in an individual-dependent manner. There was a significant negative correlation between the error rate of the task (the proportion of trials with low motivation) and the osmolality level. We also found that the increase in the error rate with reward accumulation can be well explained by a formula describing the changes in the osmolality level. These results provide a biologically supported computational formula for the motivational value of a water reward that depends on the hydration level, enabling us to identify the neural mechanism that integrates internal and external variables.
System properties, feedback control and effector coordination of human temperature regulation.
Werner, Jürgen
2010-05-01
The aim of human temperature regulation is to protect body processes by establishing a relative constancy of deep body temperature (the regulated variable), in spite of external and internal influences on it. This is basically achieved by a distributed multi-sensor, multi-processor, multi-effector proportional feedback control system. The paper explains why proportional control implies inherent deviations of the regulated variable from the value in the thermoneutral zone. The concept of feedback of the thermal state of the body, conveniently represented by a high-weighted core temperature (Tc) and low-weighted peripheral temperatures (Ts), is equivalent to the control concept of "auxiliary feedback control", using a main (regulated) variable (Tc) supported by an auxiliary variable (Ts). This concept implies neither regulation of Ts nor feedforward control. Steady states result in the closed control loop when the open-loop properties of the (heat transfer) process are compatible with those of the thermoregulatory processors. They are called operating points or balance points and are achieved due to the inherent dynamical stability of the thermoregulatory feedback loop. No set-point and no comparison of signals (e.g. actual vs. set value) are necessary. Metabolic heat production and sweat production, though receiving the same information about the thermal state of the body, are independent effectors with different thresholds and gains. Coordination between one of these effectors and the vasomotor effector is achieved by the fact that changes in the (heat transfer) process evoked by vasomotor control are taken into account by the metabolic/sweat processor.
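A minimal one-node sketch of the ideas summarized above is given below: independent proportional effectors with their own thresholds and gains act on a feedback signal built from a high-weighted core and a low-weighted skin temperature, and the closed loop settles at environment-dependent balance points without any explicit set-point comparison. The heat-balance model, weights, gains and thresholds are illustrative assumptions, not the paper's parameter values.

```python
# One-node sketch of proportional effector control with a weighted core/skin feedback
# signal (auxiliary feedback). All parameters are illustrative assumptions.
C = 3.5e5          # body heat capacity, J/K (assumed)
k_loss = 20.0      # dry heat exchange coefficient with the environment, W/K (assumed)
w_core, w_skin = 0.9, 0.1

def effectors(T_fb):
    """Independent proportional effectors with their own thresholds and gains."""
    shiver = 400.0 * max(36.5 - T_fb, 0.0)        # extra metabolic heat production, W
    sweat = 120.0 * max(T_fb - 37.1, 0.0)         # evaporative heat loss, W
    return shiver, sweat

def core_balance_point(T_env, hours=8, dt=5.0):
    Tc = 37.0
    for _ in range(int(hours * 3600 / dt)):
        Ts = 0.7 * Tc + 0.3 * T_env               # crude skin temperature mix (assumed)
        T_fb = w_core * Tc + w_skin * Ts          # high-weighted core, low-weighted skin
        shiver, sweat = effectors(T_fb)
        Tc += dt / C * (80.0 + shiver - sweat - k_loss * (Tc - T_env))
    return Tc

for T_env in (10.0, 28.0, 40.0):
    print(f"T_env = {T_env:4.1f} C -> core balance point ~ {core_balance_point(T_env):.2f} C")
```

The balance points differ from 37 C and shift with the environment, which is the inherent load error of purely proportional control that the paper discusses.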
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that captures variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
NASA Astrophysics Data System (ADS)
Denli, H. H.; Koc, Z.
2015-12-01
Estimating real property values by fixed standards is difficult to apply across time and location. Regression analysis constructs mathematical models that describe or explain relationships that may exist between variables. The problem of identifying price differences between properties in order to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. Applied to real estate valuation, with properties described by their current characteristics and quantifiers as presented in the market, regression analysis helps to identify the factors or variables that are effective in forming the value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to find a price index, based on information obtained from a real estate web page. The variables used for the analysis are age, size in m2, number of floors in the building, floor number of the unit, and number of rooms. The price of the property is the dependent variable, whereas the rest are independent variables. Prices of 60 properties were used for the analysis. Locations with equal prices were identified and plotted on the map, and equivalence curves were drawn delineating equal-value zones.
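As a rough sketch of the regression described here, the example below fits asking price to the listed attributes with ordinary least squares; the listing data are invented (the study used 60 properties from a real estate web page), and statsmodels is simply one convenient tool, not necessarily the software used by the authors.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative listing data (price in thousands of local currency; values invented)
age    = np.array([ 5, 12, 30,  2, 18, 25,  8, 40, 15,  6])
size   = np.array([95, 120, 80, 140, 110, 70, 100, 65, 130, 90])   # m2
floors = np.array([ 8,   5,  4,  12,   6,  4,  10,  3,   7,  9])   # floors in the building
floor  = np.array([ 3,   2,  1,   8,   4,  1,   5,  1,   6,  2])   # floor of the unit
rooms  = np.array([ 3,   4,  2,   4,   3,  2,   3,  2,   4,  3])
price  = np.array([320, 410, 210, 560, 360, 190, 350, 160, 470, 300])

X = sm.add_constant(np.column_stack([age, size, floors, floor, rooms]))
model = sm.OLS(price, X).fit()
print(model.params.round(2))          # coefficients: constant, age, size, floors, floor, rooms
print("R^2 =", round(model.rsquared, 2))
```

Predicted prices from such a fit can then be mapped by location to trace the equal-value zones mentioned in the abstract.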
Decision Neuroscience: Neuroeconomics
Smith, David V.; Huettel, Scott A.
2012-01-01
Few aspects of human cognition are more personal than the choices we make. Our decisions – from the mundane to the impossibly complex – continually shape the courses of our lives. In recent years, researchers have applied the tools of neuroscience to understand the mechanisms that underlie decision making, as part of the new discipline of decision neuroscience. A primary goal of this emerging field has been to identify the processes that underlie specific decision variables, including the value of rewards, the uncertainty associated with particular outcomes, and the consequences of social interactions. Recent work suggests potential neural substrates that integrate these variables, potentially reflecting a common neural currency for value, to facilitate value comparisons. Despite the successes of decision neuroscience research for elucidating brain mechanisms, significant challenges remain. These include building new conceptual frameworks for decision making, integrating research findings across disparate techniques and species, and extending results from neuroscience to shape economic theory. To overcome these challenges, future research will likely focus on interpersonal variability in decision making, with the eventual goal of creating biologically plausible models for individual choice. PMID:22754602
Dynamic interactions of atmospheric and hydrological processes result in large spatiotemporal changes of precipitation and wind speed in coastal storm events under both current and future climates. This variability can impact the design and sustainability of water infrastructure ...
ERIC Educational Resources Information Center
GLASER, ROBERT
THIS CHAPTER IN A LARGER WORK ON INDUSTRIAL PSYCHOLOGY DEALS LARGELY WITH THE NEED TO SPECIFY TRAINING OBJECTIVES THROUGH JOB ANALYSIS, USES OF TESTING IN TRAINEE SELECTION, TRAINING VARIABLES AND LEARNING PROCESSES, TRAINING TECHNOLOGY (MAINLY THE CHARACTERISTICS OF PROGRAMED INSTRUCTION), THE EVALUATION OF PROFICIENCY, THE VALUE OF…
Hybrid performance measurement of a business process outsourcing - A Malaysian company perspective
NASA Astrophysics Data System (ADS)
Oluyinka, Oludapo Samson; Tamyez, Puteri Fadzline; Kie, Cheng Jack; Freida, Ayodele Ozavize
2017-05-01
It is well established that customers' perceived value of products and services is now greatly influenced by their psychological and social advantages. To cope with increasing operational costs, response times, quality demands and innovation requirements, many companies have turned their fixed operational costs into variable costs through outsourcing. The researchers therefore explored different underlying outsourcing theories and infer that these theories are essential to performance improvement. In this study, the performance of a business process outsourcing company is evaluated by a combination of lean and agile methods. To test the hypotheses, we analyze the different types of variability that a business process company faces and how lean and agile have been used in other industries to address such variability, and discuss the results using a predictive multiple regression analysis on data collected from companies in Malaysia. The findings reveal that, while each method has its own advantages, a business process outsourcing company could achieve a larger increase in performance level (up to 87%) by developing a strategy that focuses on an appropriate mixture of lean and agile improvement methods. Secondly, the study shows that performance indicators could be better evaluated with the non-metric variables of the agile method. Thirdly, it also shows that business process outsourcing companies could perform better when they concentrate more on strengthening the internal process integration of employees.
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a very useful tool in the deep drawing industry, used in particular for the development and validation of new stamping tools; it makes it possible to decrease the cost and time of tooling design and set-up. However, one of the main difficulties in obtaining good agreement between the simulation and the real process lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions,…) and of the parameters that model the material behavior. Indeed, in the press shop, when the sheet set changes, a variation of the formed part geometry is often observed, reflecting the variability of the material properties between the different sets. This is probably one of the main sources of process deviation when the process is set up. That is why it is important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part, because of its simplicity and because it is representative in the automotive industry (car body reinforcement); moreover, it shows important springback deviations. An isotropic behaviour law is assumed. The impact of the statistical deviation of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested. A Gaussian distribution is assumed and their impact on the geometry variation is studied by FE simulation. Another approach is envisaged, consisting of modeling the process variability by a mathematical model: as a function of the input parameter variability, an analytical model is defined that yields the part geometry variability around the nominal shape. These two approaches allow the process capability to be predicted as a function of the material parameter variability.
Probabilistic and deterministic evaluation of uncertainty in a local scale multi-risk analysis
NASA Astrophysics Data System (ADS)
Lari, S.; Frattini, P.; Crosta, G. B.
2009-04-01
We performed a probabilistic multi-risk analysis (QPRA) at the local scale for a 420 km2 area surrounding the town of Brescia (Northern Italy). We calculated the expected annual loss in terms of economic damage and loss of life, for a set of flood, earthquake and industrial accident risk scenarios with different occurrence probabilities and intensities. The territorial unit used for the study was the census parcel, of variable area, for which a large amount of data was available. Due to the lack of information related to the evaluation of the hazards, to the value of the exposed elements (e.g., residential and industrial areas, population, lifelines, sensitive elements such as schools and hospitals) and to the process-specific vulnerability, and due to a lack of knowledge of the processes (floods, industrial accidents, earthquakes), we assigned an uncertainty to the input variables of the analysis. For some variables a homogeneous uncertainty was assigned over the whole study area, for instance for the number of buildings of various typologies and for the event occurrence probability. In other cases, such as phenomenon intensity (e.g., depth of water during a flood) and probability of impact, the uncertainty was defined in relation to the census parcel area; in fact, by assuming some variables to be homogeneously distributed or averaged over the census parcels, we introduce a larger error for larger parcels. We propagated the uncertainty through the analysis using three different models, describing the reliability of the output (risk) as a function of the uncertainty of the inputs (scenarios and vulnerability functions). We developed a probabilistic approach based on Monte Carlo simulation, and two deterministic models, namely First Order Second Moment (FOSM) and Point Estimate (PE). In general, similar values of expected losses are obtained with the three models, and in all three cases the uncertainty of the final risk value is around 30% of the expected value. Each model nevertheless requires different assumptions and computational effort, and provides results at a different level of detail.
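As a rough sketch of how the probabilistic and deterministic propagation schemes compare, the example below pushes input uncertainties through a placeholder loss function with Monte Carlo sampling and with a first-order second-moment (FOSM) approximation; the loss model, input means and standard deviations are invented and are not the study's risk model.

```python
import numpy as np

def expected_loss(depth, value, vulnerability):
    """Placeholder loss model for one census parcel: exposure * vulnerability(depth)."""
    return value * np.clip(vulnerability * depth, 0.0, 1.0)

# Input means and standard deviations (illustrative)
means = np.array([1.2, 5.0e6, 0.3])      # water depth (m), exposed value, vulnerability per m
sds   = np.array([0.4, 1.0e6, 0.1])

# --- Monte Carlo propagation ----------------------------------------------
rng = np.random.default_rng(4)
samples = rng.normal(means, sds, size=(100_000, 3))
mc = expected_loss(samples[:, 0], samples[:, 1], samples[:, 2])
print(f"Monte Carlo : mean {mc.mean():,.0f}  sd {mc.std(ddof=1):,.0f}")

# --- First Order Second Moment (FOSM) --------------------------------------
f0 = expected_loss(*means)
grad = np.empty(3)
for i in range(3):
    d = np.zeros(3)
    d[i] = 1e-4 * max(abs(means[i]), 1.0)                          # small central perturbation
    grad[i] = (expected_loss(*(means + d)) - expected_loss(*(means - d))) / (2 * d[i])
fosm_sd = np.sqrt(np.sum((grad * sds) ** 2))                        # independent inputs assumed
print(f"FOSM        : mean {f0:,.0f}  sd {fosm_sd:,.0f}")
```

The two estimates diverge as the loss function becomes more nonlinear over the input range, which is one reason the three schemes in the study require different assumptions while giving broadly similar expected losses.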
Improving health care, Part 1: The clinical value compass.
Nelson, E C; Mohr, J J; Batalden, P B; Plume, S K
1996-04-01
CLINICAL VALUE COMPASS APPROACH: The clinical Value Compass, named to reflect its similarity in layout to a directional compass, has at its four cardinal points (1) functional status, risk status, and well-being; (2) costs; (3) satisfaction with health care and perceived benefit; and (4) clinical outcomes. To manage and improve the value of health care services, providers will need to measure the value of care for similar patient populations, analyze the internal delivery processes, run tests of changed delivery processes, and determine if these changes lead to better outcomes and lower costs. GETTING STARTED--OUTCOMES AND AIM: In the case example, the team's aim is "to find ways to continually improve the quality and value of care for AMI (acute myocardial infarction) patients." VALUE MEASURES--SELECT A SET OF OUTCOME AND COST MEASURES: Four to 12 outcome and cost measures are sufficient to get started. In the case example, the team chose 1 or more measures for each quadrant of the value compass. An operational definition is a clearly specified method explaining how to measure a variable. Measures in the case example were based on information from the medical record, administrative and financial records, and patient reports and ratings at eight weeks postdischarge. Measurement systems that quantify the quality of processes and results of care are often add-ons to routine care delivery. However, the process of measurement should be intertwined with the process of care delivery so that front-line providers are involved in both managing the patient and measuring the process and related outcomes and costs.
NASA Astrophysics Data System (ADS)
Anwar, Faizan; Bárdossy, András; Seidel, Jochen
2017-04-01
Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given that the relationship between them is monotonic); they free us from questions such as how to transform the data before use or which functions to calculate to model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step, the normal copula is used; subsequently, the necessity of using non-normal/non-symmetrical dependence is investigated. Discharge and temperature are treated as regular continuous variables and can be used without processing for infilling and quality checking. Due to their mixed distribution, precipitation values have to be treated differently: a discrete probability is assigned to the zeros and the rest is treated as a continuous distribution. Building on the work of others, along with infilling, the normal copula is also used to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula, and checking whether it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments, the Upper Neckar River (Germany) and the Santa River (Peru), are used to demonstrate the application on datasets of different data quality. The Python code used here is also made available on GitHub. The required input is the time series of a given variable at different stations.
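The conditional distribution used for infilling and for the 5-95% plausibility check can be sketched as follows for the normal copula; this is a minimal illustration assuming complete neighbor records for fitting, and the back-transformation from normal scores to the original units (via the empirical quantile function) is omitted.

```python
import numpy as np
from scipy import stats

def to_normal_scores(x):
    """Rank-based transform of a sample to standard normal scores."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

def fit_normal_copula(target, neighbors, alpha=0.10):
    """Fit a normal copula between a target station and its neighbors.

    target: shape (n,); neighbors: shape (n, k); both without gaps for fitting.
    Returns a function that maps neighbor normal scores to the conditional
    median and a (alpha/2, 1 - alpha/2) band of the target's normal score.
    """
    z = np.column_stack([to_normal_scores(target)] +
                        [to_normal_scores(neighbors[:, j])
                         for j in range(neighbors.shape[1])])
    corr = np.corrcoef(z, rowvar=False)
    s01, s11 = corr[0, 1:], corr[1:, 1:]
    beta = np.linalg.solve(s11, s01)          # conditional mean weights
    cond_sd = np.sqrt(corr[0, 0] - s01 @ beta)

    def conditional(z_neighbors):
        mu = z_neighbors @ beta
        half = stats.norm.ppf(1.0 - alpha / 2.0) * cond_sd
        return mu, mu - half, mu + half       # median, lower and upper band

    return conditional
```

A measured value whose own normal score falls outside the returned band would be flagged as potentially erroneous, in the spirit of the quality check described above.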
Craig, A M; Blythe, L L; Rowe, K E; Lassen, E D; Barrington, R; Walker, K C
1992-12-01
Recent evidence concerning the pathogenesis of equine degenerative myeloencephalopathy indicated that low blood alpha-tocopherol values are a factor in the disease process. Variables that could be introduced by a veterinarian procuring, transporting, or storing samples were evaluated for effects on alpha-tocopherol concentration in equine blood. These variables included temperature; light; exposure to the rubber stopper of the evacuated blood collection tube; hemolysis; duration of freezing time, with and without nitrogen blanketing; and repeated freeze/thaw cycles. It was found that hemolysis caused the greatest change in high-performance liquid chromatography-measured serum alpha-tocopherol values, with a mean decrease of 33% (P < 0.001). Lesser, but significant (P < 0.01), changes in serum alpha-tocopherol values were an approximate 10% decrease when refrigerated blood was left in contact with the red rubber stopper of the blood collection tube for 72 hours and an approximate 5% increase when blood was stored at 20 to 25 C (room temperature) for 72 hours. Repeated freeze/thaw cycles resulted in a significant (P < 0.05) 3% decrease in alpha-tocopherol values in heparinized plasma by the third thawing cycle. Freezer storage for a 3-month period without nitrogen blanketing resulted in a slight (2%) decrease in mean serum alpha-tocopherol values, whereas values in serum stored for an identical period under nitrogen blanketing did not change. A significant (P < 0.001) mean decrease (10.3%) in alpha-tocopherol values was associated with freezer (-16 C) storage of nitrogen-blanketed serum for 6 months. (ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Bermingham, K. R.; Worsham, E. A.; Walker, R. J.
2018-04-01
When corrected for the effects of cosmic ray exposure, Mo and Ru nucleosynthetic isotope anomalies in iron meteorites from at least nine different parent bodies are strongly correlated in a manner consistent with variable depletion in s-process nucleosynthetic components. In contrast to prior studies, the new results show no significant deviations from a single correlation trend. In the refined Mo-Ru cosmic correlation, a distinction between the non-carbonaceous (NC) group and carbonaceous chondrite (CC) group is evident. Members of the NC group are characterized by isotope compositions reflective of variable s-process depletion. Members of the CC group analyzed here plot in a tight cluster and have the most s-process depleted Mo and Ru isotopic compositions, with Mo isotopes also slightly enriched in r- and possibly p-process contributions. This indicates that the nebular feeding zone of the NC group parent bodies was characterized by Mo and Ru with variable s-process contributions, but with the two elements always mixed in the same proportions. The CC parent bodies sampled here, by contrast, were derived from a nebular feeding zone that had been mixed to a uniform s-process depleted Mo-Ru isotopic composition. Six molybdenite samples, four glacial diamictites, and two ocean island basalts were analyzed to provide a preliminary constraint on the average Mo isotope composition of the bulk silicate Earth (BSE). Combined results yield an average μ97Mo value of +3 ± 6. This value, coupled with a previously reported μ100Ru value of +1 ± 7 for the BSE, indicates that the isotopic composition of the BSE falls precisely on the refined Mo-Ru cosmic correlation. The overlap of the BSE with the correlation implies that there was homogeneous accretion of siderophile elements for the final accretion of 10 to 20 wt% of Earth's mass. The only known cosmochemical materials with an isotopic match to the BSE, with regard to Mo and Ru, are some members of the IAB iron meteorite complex and enstatite chondrites.
Borges, Chad R
2007-07-01
A chemometrics-based data analysis concept has been developed as a substitute for manual inspection of extracted ion chromatograms (XICs), which facilitates rapid, analyst-mediated interpretation of GC- and LC/MS(n) data sets from samples undergoing qualitative batchwise screening for prespecified sets of analytes. Automatic preparation of data into two-dimensional row space-derived scatter plots (row space plots) eliminates the need to manually interpret hundreds to thousands of XICs per batch of samples while keeping all interpretation of raw data directly in the hands of the analyst, saving great quantities of human time without loss of integrity in the data analysis process. For a given analyte, two analyte-specific variables are automatically collected by a computer algorithm and placed into a data matrix (i.e., placed into row space): the first variable is the ion abundance corresponding to scan number x and analyte-specific m/z value y, and the second variable is the ion abundance corresponding to scan number x and analyte-specific m/z value z (a second ion). These two variables serve as the two axes of the aforementioned row space plots. In order to collect appropriate scan number (retention time) information, it is necessary to analyze, as part of every batch, a sample containing a mixture of all analytes to be tested. When pure standard materials of tested analytes are unavailable, but representative ion m/z values are known and retention time can be approximated, data are evaluated based on two-dimensional scores plots from principal component analysis of small time range(s) of mass spectral data. The time-saving efficiency of this concept is directly proportional to the percentage of negative samples and to the total number of samples processed simultaneously.
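A minimal sketch of the row-space idea is given below; the sample data layout, retention-time and m/z tolerances are assumptions made for illustration, not part of the published algorithm.

```python
import numpy as np
import matplotlib.pyplot as plt

def ion_abundance(scan_times, mz_values, intensities, target_time, target_mz,
                  time_tol=2.0, mz_tol=0.5):
    """Sum the intensities recorded near the expected retention time and m/z."""
    mask = (np.abs(scan_times - target_time) <= time_tol) & \
           (np.abs(mz_values - target_mz) <= mz_tol)
    return intensities[mask].sum()

def row_space_point(sample, target_time, mz_y, mz_z):
    """The two analyte-specific variables: abundances of ion y and ion z at the
    analyte's scan number (retention time)."""
    a_y = ion_abundance(sample["t"], sample["mz"], sample["i"], target_time, mz_y)
    a_z = ion_abundance(sample["t"], sample["mz"], sample["i"], target_time, mz_z)
    return a_y, a_z

def plot_row_space(points, labels):
    """One scatter plot per analyte replaces hundreds of XICs: samples in which
    both ions rise together stand out from the negatives."""
    xs, ys = zip(*points)
    plt.scatter(xs, ys, c=labels)
    plt.xlabel("abundance of ion y")
    plt.ylabel("abundance of ion z")
    plt.show()
```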
Development of Pangasius steaks by improved sous-vide technology and its process optimization.
Kumari, Namita; Singh, Chongtham Baru; Kumar, Raushan; Martin Xavier, K A; Lekshmi, Manjusha; Venkateshwarlu, Gudipati; Balange, Amjad K
2016-11-01
The present study embarked on the objective of optimizing improved sous-vide processing conditions for the development of ready-to-cook Pangasius steaks with extended shelf-life using response surface methodology. For the development of the improved sous-vide cooked product, Pangasius steaks were treated with additional hurdles in various combinations for optimization. Based on the study, a suitable combination of chitosan and spices was selected which enhanced the antimicrobial and oxidative stability of the product. The Box-Behnken experimental design with 15 trials per model was adopted for designing the experiment to determine the effect of the independent variables, namely chitosan concentration (X1), cooking time (X2) and cooking temperature (X3), on the dependent variable, the TBARS value (Y1). From the RSM-generated model, the optimum conditions for sous-vide processing of Pangasius steaks were 1.08% chitosan concentration, 70.93 °C cooking temperature and 16.48 min cooking time, and the predicted minimum value at the multiple-response optimal condition was Y1 = 0.855 mg MDA/kg of fish. The high correlation coefficient (R² = 0.975) between the model and the experimental data showed that the model was able to efficiently predict processing conditions for the development of sous-vide processed Pangasius steaks. This research may help processing industries and Pangasius fish farmers as it provides an alternative low-cost technology for the proper utilization of Pangasius.
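The Box-Behnken / response-surface step can be sketched as a quadratic least-squares fit followed by a constrained minimisation of the predicted TBARS value; the code below is an illustrative outline only, in which the design matrix X would hold the 15 coded trial settings (chitosan concentration, cooking time, cooking temperature) and y the measured TBARS responses.

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_terms(X):
    """Full quadratic response-surface terms in three coded factors."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 ** 2, x2 ** 2, x3 ** 2,
                            x1 * x2, x1 * x3, x2 * x3])

def fit_rsm(X, y):
    """Least-squares fit of the quadratic surface to the Box-Behnken results."""
    coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
    return coef

def predict(coef, x):
    return (quadratic_terms(np.atleast_2d(x)) @ coef).item()

def optimum(coef):
    """Factor setting minimising the predicted response within the coded region."""
    res = minimize(lambda x: predict(coef, x), x0=np.zeros(3),
                   bounds=[(-1.0, 1.0)] * 3)
    return res.x, res.fun
```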
Effect of thermal decarbonation on the stable isotope composition of carbonates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durakiewicz, T.; Sharp, Z. D.; Papike, J. J.
2001-01-01
The unusual texture and stable isotope variability of carbonates in ALH84001 have been used as evidence for early life on Mars (Romanek et al., 1994; McKay et al., 1996). Oxygen and carbon isotope variability is most commonly attributed to low-temperature processes, including Rayleigh-like fractionation associated with biological activity. Another possible explanation for the isotopic variability in meteoritic samples is thermal decarbonation. In this report, different carbonates were heated in a He-stream until decomposition temperatures were reached. The oxygen and carbon isotope ratios (δ¹⁸O and δ¹³C values) of the resulting gas were measured on a continuous flow isotope ratio mass spectrometer. The aim of this work is to evaluate the possibility that large isotopic variations can be generated on a small scale abiogenically, by the process of thermal decarbonation. Oxygen isotope fractionations of >4‰ have been measured during decarbonation of calcite at high temperatures (McCrea, 1950), and in excess of 6‰ for dolomite decarbonated between 500 and 600 °C (Sharma and Clayton, 1965). Isotopic fractionations of this magnitude, coupled with Rayleigh-like distillation behavior, could result in very large isotopic variations on a small scale. To test the idea, calcite, dolomite and siderite were heated in a quartz tube in a He-stream in excess of 1 atmosphere. Simultaneous determinations of δ¹³C and δ¹⁸O values were obtained on 250 µl aliquots of the CO₂-bearing He gas using an automated 6-way switching valve system (Finnigan MAT GasBench II) and a Finnigan MAT Delta Plus mass spectrometer. It was found that decarbonation of calcite in a He atmosphere begins at 720 °C, but the rate increases significantly at temperatures of 820 °C. After an initial light δ¹⁸O value of -14.1‰ at 720 °C associated with very early decarbonation, δ¹⁸O values increase to a constant -11.8‰, close to the accepted value of -12.09‰ (PDB). After 10 minutes at 820 °C, the δ¹⁸O values and signal strength both begin to decrease linearly, to a δ¹⁸O value of -14.75‰ and very low amounts of CO₂ (Fig. 1). In contrast, the δ¹³C values are extremely constant (0.12 ± 0.25‰) for all measurements, in very good agreement with the accepted value of 0.33‰ (PDB). There is much less isotopic variability during dolomite decarbonation. CO₂ is first detected at 600 °C. The signal strength increases by an order of magnitude between 670 and 700 °C and again at 760 °C. Both δ¹³C and δ¹⁸O values are nearly constant over the entire temperature range and sample size. For oxygen, the measured δ¹⁸O values averaged -20.9 ± 0.7‰ (n = 30). Including only samples over 700 °C, the average is -21.2 ± 0.2‰ compared to the accepted value of -21‰. Carbon is similarly constant: the average δ¹³C value is -2.50‰ compared to the accepted value of -2.62‰. Far more variability is seen during the decomposition of siderite. Two samples were analyzed. In both samples, the initial δ¹⁸O values were far lower than expected.
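The Rayleigh-like distillation behavior invoked above follows the standard Rayleigh relation; the sketch below uses purely illustrative numbers (an initial δ¹⁸O of -12‰ and a fractionation factor of about 1.006), not the measured fractionations reported here.

```python
import numpy as np

def rayleigh_delta(delta0, f, alpha):
    """Delta value of the residual reservoir after a fraction (1 - f) has been
    removed, following delta = (delta0 + 1000) * f**(alpha - 1) - 1000."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

f = np.linspace(1.0, 0.05, 20)           # fraction of carbonate oxygen remaining
print(rayleigh_delta(-12.0, f, 1.006))   # the residue becomes progressively lighter
```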
Phytoplankton primary production in the world's estuarine-coastal ecosystems
Cloern, James E.; Foster, S.Q.; Kleckner, A.E.
2014-01-01
Estuaries are biogeochemical hot spots because they receive large inputs of nutrients and organic carbon from land and oceans to support high rates of metabolism and primary production. We synthesize published rates of annual phytoplankton primary production (APPP) in marine ecosystems influenced by connectivity to land – estuaries, bays, lagoons, fjords and inland seas. Review of the scientific literature produced a compilation of 1148 values of APPP derived from monthly incubation assays to measure carbon assimilation or oxygen production. The median value of median APPP measurements in 131 ecosystems is 185 and the mean is 252 g C m⁻² yr⁻¹, but the range is large: from −105 (net pelagic production in the Scheldt Estuary) to 1890 g C m⁻² yr⁻¹ (net phytoplankton production in Tamagawa Estuary). APPP varies up to 10-fold within ecosystems and 5-fold from year to year (but we only found eight APPP series longer than a decade so our knowledge of decadal-scale variability is limited). We use studies of individual places to build a conceptual model that integrates the mechanisms generating this large variability: nutrient supply, light limitation by turbidity, grazing by consumers, and physical processes (river inflow, ocean exchange, and inputs of heat, light and wind energy). We consider method as another source of variability because the compilation includes values derived from widely differing protocols. A simulation model shows that different methods reported in the literature can yield up to 3-fold variability depending on incubation protocols and methods for integrating measured rates over time and depth. Although attempts have been made to upscale measures of estuarine-coastal APPP, the empirical record is inadequate for yielding reliable global estimates. The record is deficient in three ways. First, it is highly biased by the large number of measurements made in northern Europe (particularly the Baltic region) and North America. Of the 1148 reported values of APPP, 958 come from sites between 30 and 60° N; we found only 36 for sites south of 20° N. Second, of the 131 ecosystems where APPP has been reported, 37% are based on measurements at only one location during 1 year. The accuracy of these values is unknown but probably low, given the large interannual and spatial variability within ecosystems. Finally, global assessments are confounded by measurements that are not intercomparable because they were made with different methods. Phytoplankton primary production along the continental margins is tightly linked to variability of water quality, biogeochemical processes including ocean–atmosphere CO₂ exchange, and production at higher trophic levels including species we harvest as food. The empirical record has deficiencies that preclude reliable global assessment of this key Earth system process. We face two grand challenges to resolve these deficiencies: (1) organize and fund an international effort to use a common method and measure APPP regularly across a network of coastal sites that are globally representative and sustained over time, and (2) integrate data into a unifying model to explain the wide range of variability across ecosystems and to project responses of APPP to regional manifestations of global change as it continues to unfold.
Development of process parameters for 22 nm PMOS using 2-D analytical modeling
NASA Astrophysics Data System (ADS)
Maheran, A. H. Afifah; Menon, P. S.; Ahmad, I.; Shaari, S.; Faizah, Z. A. Noor
2015-04-01
Scaling and integration have become a major challenge for the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET). Innovation in transistor structures and integration of novel materials are necessary to sustain this performance trend. CMOS variability in scaled technologies is becoming a very important concern due to the limitations of process control and to statistical variability related to fundamental discreteness and materials. Minimizing transistor variation through technology optimization while ensuring robust product functionality and performance is the major issue. In this article, the continuation study on process parameter variations is extended and delivered thoroughly in order to achieve a minimum leakage current (ILEAK) in a planar PMOS transistor with a 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of fabrication process parameters. A combination of a high-permittivity (high-k) material and a metal gate is used as the gate structure; the materials are titanium dioxide (TiO2) and tungsten silicide (WSix). The L9 Taguchi orthogonal array is then used to analyse the device simulations, and the signal-to-noise ratio (SNR) of the smaller-the-better (STB) scheme is studied through the percentage influence of each process parameter. The aim is a minimum ILEAK; according to the International Technology Roadmap for Semiconductors (ITRS) 2011, the predicted ILEAK should not exceed 100 nA/µm. The final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The best combination of process parameters results in a mean ILEAK of 3.96821 nA/µm, far below the predicted limit.
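The smaller-the-better S/N computation at the heart of the Taguchi analysis is simple to state; the sketch below shows the formula and a percent-contribution helper, with hypothetical leakage-current replicates rather than the simulated values from the article.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio: SNR = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical leakage-current replicates (nA/um) for one row of the L9 array.
print(sn_smaller_the_better([4.1, 3.9, 4.3]))

def percent_contribution(ss_factors):
    """Percentage influence of each factor from its sum of squares in the ANOVA of the SNRs."""
    ss = np.asarray(ss_factors, dtype=float)
    return 100.0 * ss / ss.sum()
```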
Theoretical and experimental researches on the operating costs of a wastewater treatment plant
NASA Astrophysics Data System (ADS)
Panaitescu, M.; Panaitescu, F.-V.; Anton, I.-A.
2015-11-01
Purpose of the work: The total cost of a sewage plant is often determined by the present value method. All annual operating costs of each process are converted to their present-day value and added to the investment costs of each process, which gives the net present value. The costs of a sewage plant are subdivided, in general, into investment and operating costs. The latter can be fixed (normal operation and maintenance, installed power) or variable (chemicals and power, sludge treatment and disposal, effluent charges). For preliminary cost evaluation, so that a choice can be made between different alternatives in an early phase of a project, cost functions can be used. In this paper the operational cost is calculated for several scenarios in order to optimize it; the total operational cost (fixed and variable) depends on global parameters of the wastewater treatment plant. Research and methodology: The wastewater treatment plant costs are subdivided into investment and operating costs. Different cost functions can be used to estimate fixed and variable operating costs; in this study, statistical formulas for the cost functions were used. Economic analysis was the method applied to study the impact of the influent characteristics on the costs. Optimization of the plant design consists, firstly, in assessing the ability of the smallest design to treat the maximum loading rates to a given effluent quality and, secondly, in comparing the costs of the two alternatives for average and maximum loading rates. Results: In this paper we obtained statistical values for the investment cost functions and for the fixed and variable operational cost functions of the wastewater treatment plant, together with their graphical representations. All costs were compared as net present values. Finally, we observe that it is more economical to build a larger plant, especially if maximum loading rates are reached. The actual target of operational management is to implement the presented cost functions directly in a software tool in which the design of a plant and the simulation of its behaviour are evaluated simultaneously.
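The present value comparison described above can be sketched in a few lines; the investment figures, annual costs, discount rate and horizon below are hypothetical and serve only to show the calculation.

```python
def present_value(investment, annual_cost, discount_rate, years):
    """Discount a constant annual operating cost to today's value and add the
    up-front investment (the present value method described above)."""
    annuity = sum(annual_cost / (1.0 + discount_rate) ** t
                  for t in range(1, years + 1))
    return investment + annuity

# Illustrative comparison of two plant sizes (all figures hypothetical).
small = present_value(investment=2.0e6, annual_cost=450_000,
                      discount_rate=0.05, years=20)
large = present_value(investment=2.6e6, annual_cost=380_000,
                      discount_rate=0.05, years=20)
print(f"small plant: {small:,.0f}   large plant: {large:,.0f}")
```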
NASA Astrophysics Data System (ADS)
Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran
2018-03-01
This paper uses multivariate regression to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, as a mineral prospectivity mapping (MPM) method. The main target of this paper is to apply multivariate regression analysis (as an MPM method) to the mapped iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as the dependent variable for MPM. According to the probability value (p value), the coefficient of determination (R²) and the adjusted coefficient of determination (R²adj), the second regression model (which consisted of multiple UIVs) fitted better than the other model. The accuracy of the model was confirmed by the iron outcrop map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).
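A minimal sketch of regressing an iron-outcrop indicator on the ASTER band values is shown below; the array shapes, the use of scikit-learn and the 0/1 outcrop mask are assumptions for illustration and do not reproduce the two regression models of the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_prospectivity_model(band_stack, outcrop_mask):
    """Multiple linear regression with the 14 ASTER bands as independent
    variables (UIVs) and a mapped iron-outcrop indicator as the dependent variable.

    band_stack: (n_bands, rows, cols); outcrop_mask: (rows, cols) with 0/1 values.
    """
    n_bands = band_stack.shape[0]
    X = band_stack.reshape(n_bands, -1).T          # one row per pixel
    y = outcrop_mask.ravel().astype(float)
    return LinearRegression().fit(X, y)

def prospectivity_map(model, band_stack):
    """Apply the fitted model to every pixel; higher values are more favourable."""
    n_bands, rows, cols = band_stack.shape
    X = band_stack.reshape(n_bands, -1).T
    return model.predict(X).reshape(rows, cols)
```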
Asadzadeh, Farrokh; Maleki-Kaklar, Mahdi; Soiltanalinejad, Nooshin; Shabani, Farzin
2018-02-08
Citric acid (CA) was evaluated in terms of its efficiency as a biodegradable chelating agent in removing zinc (Zn) from heavily contaminated soil using a soil washing process. To determine preliminary ranges of the variables in the washing process, single-factor experiments were carried out with different CA concentrations, pH levels and washing times. Optimization of the batch washing conditions followed, using a response surface methodology (RSM) based central composite design (CCD) approach. CCD-predicted values and experimental results showed strong agreement, with an R² value of 0.966. Maximum removal of 92.8% occurred with a CA concentration of 167.6 mM, pH of 4.43, and washing time of 30 min as the optimal variable values. A leaching column experiment followed, to examine the efficiency of the optimum conditions established by the CCD model. A comparison of the two soil washing techniques indicated that the removal efficiency of the column experiment (85.8%) closely matched that of the batch experiment (92.8%). The methodology supporting the research experimentation for optimizing Zn removal may be useful in the design of protocols for practical engineering soil decontamination applications.
Total quality management in orthodontic practice.
Atta, A E
1999-12-01
Quality is the buzz word for the new Millennium. Patients demand it, and we must serve it. Yet one must identify it. Quality is not imaging or public relations; it is a business process. This short article presents quality as a balance of three critical notions: core clinical competence, perceived values that our patients seek and want, and the cost of quality. Customer satisfaction is a variable that must be identified for each practice. In my practice, patients perceive quality as communication and time, be it treatment or waiting time. Time is a value and cost that must be managed effectively. Total quality management is a business function; it involves diagnosis, design, implementation, and measurement of the process, the people, and the service. Kaizen is a function that reduces value services, eliminates waste, and manages time and cost in the process. Total quality management is a total commitment for continuous improvement.
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
NASA Astrophysics Data System (ADS)
Saragih, Jepronel; Salim Sitompul, Opim; Situmorang, Zakaria
2017-12-01
One of the techniques known in data mining is clustering. The image segmentation process does not always represent the actual image, because the combination of algorithms may fail to obtain optimal cluster centers. In this research, we search for the smallest error in the cost function of a Fuzzy C-Means process optimized with Cat Swarm Optimization, which has been developed further by adding an inertia weight to the tracing-mode process. With this parameter, the most optimal cluster centers, closest to the data, can be determined and used to form the clusters. The inertia weights considered in this research are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9. The results obtained with the different inertia weight values (W) are then compared and the smallest is taken. From this weighting analysis, the inertia value that produces the smallest cost function can be identified.
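For reference, the cost (objective) function that the swarm search tries to minimise is the standard Fuzzy C-Means objective; the sketch below implements plain FCM only, and the Cat Swarm Optimization with inertia weights described above is not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic Fuzzy C-Means returning cluster centers, memberships and the cost
    J = sum_ik u_ik^m * ||x_i - v_k||^2 that an outer optimizer would minimise."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_u = 1.0 / d ** (2.0 / (m - 1.0))
        new_u /= new_u.sum(axis=1, keepdims=True)
        converged = np.abs(new_u - u).max() < tol
        u = new_u
        if converged:
            break
    cost = float(np.sum((u ** m) * d ** 2))
    return centers, u, cost
```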
Readiness of Teachers for Change in Schools
ERIC Educational Resources Information Center
Kondakci, Yasar; Beycioglu, Kadir; Sincar, Mehmet; Ugurlu, Celal Teyyar
2017-01-01
Theorizing on the role of teacher attitudes in change effectiveness, this study examined the predictive value of context (trust), process (social interaction, participative management and knowledge sharing) and outcome (job satisfaction and workload perception) variables for cognitive, emotional and intentional readiness of teachers for change.…
Statistical Discourse Analysis: A Method for Modelling Online Discussion Processes
ERIC Educational Resources Information Center
Chiu, Ming Ming; Fujita, Nobuko
2014-01-01
Online forums (synchronous and asynchronous) offer exciting data opportunities to analyze how people influence one another through their interactions. However, researchers must address several analytic difficulties involving the data (missing values, nested structure [messages within topics], non-sequential messages), outcome variables (discrete…
Optimization of Shea (Vitellaria paradoxa) butter quality using screw expeller extraction.
Gezahegn, Yonas A; Emire, Shimelis A; Asfaw, Sisay F
2016-11-01
The quality of Shea butter is highly affected by processing factors. Hence, the aim of this work was to evaluate the effects of conditioning duration (CD), moisture content (MC), and die temperature (DT) of a screw expeller on Shea butter quality. A combination of a 3³ full factorial design and response surface methodology was used for this investigation. The response variables were refractive index, acid value, and peroxide value. The model enabled identification of the optimum operating settings (CD = 28-30 min, MC = 3-5 g/100 g, and DT = 65-70 °C) for maximizing the refractive index and minimizing the acid value. For a minimum peroxide value, 0 min CD, 10 g/100 g MC, and 30 °C DT were found. In the overall optimization, optimal values of 30 min CD, 9.7 g/100 g MC, and 70 °C DT were found. Hence, the processing factors must be kept at their optimal values to achieve high butter quality and consistency.
Gilerson, Alexander; Carrizo, Carlos; Foster, Robert; Harmel, Tristan
2018-04-16
The value and spectral dependence of the reflectance coefficient (ρ) of skylight from wind-roughened ocean surfaces is critical for determining accurate water-leaving radiances and remote sensing reflectances from shipborne, AERONET-Ocean Color and satellite observations. Using a vector radiative transfer code, spectra of the reflectance coefficient and the corresponding radiances near the ocean surface and at the top of the atmosphere (TOA) are simulated for a broad range of parameters, including flat and windy ocean surfaces with wind speeds up to 15 m/s, aerosol optical thicknesses of 0-1 at 440 nm, wavelengths of 400-900 nm, and variable Sun and viewing zenith angles. Results revealed a profound impact of the aerosol load and type on the spectral values of ρ. Such impacts, not yet included in standard processing, may produce significant inaccuracies in the reflectance spectra retrieved from above-water radiometry and satellite observations. Implications for satellite cal/val activities as well as potential changes in measurement and data processing schemes are discussed.
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is produced in order to minimize the inconvenience of wearing it. The ECG signal is detected using a potential-measurement instrumentation system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communication unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and R-peak detection in the ECG is especially important. R-peak detection generally uses a fixed threshold value, which leads to errors in peak detection when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage consisting of differentiation and a Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method, which is more accurate and efficient than a fixed threshold method, is used to detect the R-peaks. R-peak detection on the MIT-BIH databases and on long-term real-time ECG recordings is performed in this research in order to evaluate the performance of the method.
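A minimal sketch of the differentiation, Hilbert-transform envelope and per-window variable threshold is given below; the window length, threshold fraction and the one-peak-per-window simplification are illustrative assumptions rather than the published algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, window_s=2.0, k=0.6):
    """Differentiate the ECG, take the Hilbert-transform envelope, then threshold
    each window at k times the local maximum so the threshold adapts to baseline
    drift and amplitude changes (at most one peak reported per window)."""
    diff = np.diff(ecg, prepend=ecg[0])
    envelope = np.abs(hilbert(diff))
    win = int(window_s * fs)
    peaks = []
    for start in range(0, len(envelope), win):
        seg = envelope[start:start + win]
        if seg.size == 0:
            continue
        threshold = k * seg.max()                 # variable, window-local threshold
        above = np.flatnonzero(seg > threshold)
        if above.size:
            peaks.append(start + above[np.argmax(seg[above])])
    return np.array(peaks)
```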
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Dietary protein intakes and risk of ulcerative colitis.
Rashvand, Samaneh; Somi, Mohammad Hossein; Rashidkhani, Bahram; Hekmatdoost, Azita
2015-01-01
The incidence of ulcerative colitis (UC) is rising in populations with a western-style diet, rich in fat and protein and low in fruits and vegetables. In the present study, we aimed to evaluate the association between dietary protein intakes and the risk of developing incident UC. Sixty-two cases of UC and 124 controls were studied using a country-specific food frequency questionnaire (FFQ). Group comparisons by each factor were done using the χ² test, and the significance level was set at α = 0.05. Logistic regression adjusted for potential confounding variables was carried out. Univariate analysis suggested positive associations between processed meat, red meat and organ meat and the risk of ulcerative colitis. Comparing the highest versus the lowest categories of consumption, multivariate conditional logistic regression analysis accounting for potential confounding variables indicated that patients who consumed a higher amount of processed meat were at a higher risk of developing UC (P value for trend = 0.02). Similarly, patients who consumed higher amounts of red meat were at a higher risk of UC (P value for trend = 0.01). The highest tertile of organ meat intake was associated with an increased risk of ulcerative colitis, with a statistically significant trend across tertiles (P value for trend = 0.01) when adjusted. In this case-control study we observed that higher consumption of processed meat, red meat and organ meat was associated with an increased risk of UC.
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To deal with time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time differences, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately.
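A stripped-down version of the time-difference, moving-window PLS idea can be sketched with scikit-learn; the window length, number of latent variables and the confidence-based updating rule are not reproduced here, and the names are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_time_difference_pls(X, y, n_components=3, window=200):
    """Fit PLS on time-difference values of the inputs and output over the most
    recent moving window of samples."""
    dX, dy = np.diff(X, axis=0), np.diff(y)
    return PLSRegression(n_components=n_components).fit(dX[-window:], dy[-window:])

def predict_next(pls, x_new, x_prev, y_prev):
    """Predicted output = previous output + predicted output difference."""
    d_hat = pls.predict((x_new - x_prev).reshape(1, -1)).ravel()[0]
    return y_prev + d_hat
```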
Submesoscale Sea Surface Temperature Variability from UAV and Satellite Measurements
NASA Astrophysics Data System (ADS)
Castro, S. L.; Emery, W. J.; Tandy, W., Jr.; Good, W. S.
2017-12-01
Technological advances in the spatial resolution of observations have revealed the importance of short-lived ocean processes with scales of O(1 km). These submesoscale processes play an important role in the transfer of energy from the meso- to small scales and in generating significant spatial and temporal intermittency in the upper ocean, critical for the mixing of the oceanic boundary layer. Submesoscales have been observed in sea surface temperatures (SST) from satellites. Satellite SST measurements are spatial averages over the footprint of the satellite. When the variance of the SST distribution within the footprint is small, the average value is representative of the SST over the whole pixel; if the variance is large, the spatial heterogeneity is a source of uncertainty in satellite-derived SSTs. Here we show evidence that the submesoscale variability in SSTs at spatial scales of 1 km is responsible for the spatial variability within satellite footprints. Previous studies of the spatial variability in SST using ship-based radiometric data suggested that variability at scales smaller than 1 km is significant and affects the uncertainty of satellite-derived skin SSTs. We examine data collected by a calibrated thermal infrared radiometer, the Ball Experimental Sea Surface Temperature (BESST), flown on a UAV over the Arctic Ocean and compare them with coincident measurements from the MODIS spaceborne radiometer to assess the spatial variability of SST within 1 km pixels. By taking the standard deviation of all the BESST measurements within individual MODIS pixels, we show that significant spatial variability exists within the footprints. The distribution of the surface variability measured by BESST shows a peak value of O(0.1 K), with 95% of the pixels showing σ < 0.45 K. More importantly, high-variability pixels are located at density fronts in the marginal ice zone, which are a primary source of submesoscale intermittency near the surface in the Arctic Ocean. Wavenumber spectra of the BESST SSTs indicate a spectral slope of -2, consistent with the presence of submesoscale processes. Furthermore, not only do the BESST wavenumber spectra match the MODIS SST spectra well, they also extend the -2 spectral slope by two decades relative to MODIS, from wavelengths of 8 km to 0.08 km.
NASA Astrophysics Data System (ADS)
Routray, Sunita; Swain, Ranjita; Rao, Raghupatruni Bhima
2017-04-01
The present study is aimed at investigating the optimization of a mineral separator for processing beach sand minerals of the Bay of Bengal along the Ganjam-Rushikulya coast. The central composite design matrix and response surface methodology were applied in designing the experiments to evaluate the interactive effects of the three most important operating variables: feed quantity, wash water rate and shake amplitude of the deck. The predicted values were found to be in good agreement with the experimental values (R² = 0.97 for grade and 0.98 for recovery). To understand the impact of each variable, three-dimensional (3D) plots were also developed for the estimated responses.
Value Preferences of Social Workers.
Tartakovsky, Eugene; Walsh, Sophie D
2018-04-01
The current study examines value preferences of social workers in Israel. Using a theoretical framework of person-environment fit paradigm and theory of values, the study compared social workers (N = 641, mean age = 37.7 years, 91 percent female) with a representative sample of Israeli Jews (N = 1,600, mean age = 44.2, 52 percent female). Questionnaires included personal value preferences and sociodemographic variables (gender, age, education, religiosity, and immigrant status). Multivariate analysis of covariance showed that value preferences of social workers differed significantly from those of the general population. Analyses of covariance showed that social workers reported a higher preference for self-transcendence and a lower preference for conservation and self-enhancement values. Results have significance for the selection, training, and supervision of social workers. They suggest that it is important to assess to what extent selection processes for social workers are primarily recruiting social workers with shared values, thus creating an overly homogenous population of social workers. An understanding of personal value motivations can help social workers in their own process of self-development and growth, and to understand how the profession can fulfill their basic motivations.
Genetic, environmental, and epigenetic factors in the development of personality disturbance.
Depue, Richard A
2009-01-01
A dimensional model of personality disturbance is presented that is defined by extreme values on interacting subsets of seven major personality traits. Being at the extreme has marked effects on the threshold for eliciting those traits under stimulus conditions: that is, the extent to which the environment affects the neurobiological functioning underlying the traits. To explore the nature of development of extreme values on these traits, each trait is discussed in terms of three major issues: (a) the neurobiological variables associated with the trait, (b) individual variation in this neurobiology as a function of genetic polymorphisms, and (c) the effects of environmental adversity on these neurobiological variables through the action of epigenetic processes. It is noted that gene-environment interaction appears to be dependent on two main factors: (a) both genetic and environmental variables appear to have the most profound and enduring effects when they exert their effects during early postnatal periods, times when the forebrain is undergoing exuberant experience-expectant dendritic and axonal growth; and (b) environmental effects on neurobiology are strongly modified by individual differences in "traitlike" functioning of neurobiological variables. A model of the nature of the interaction between environmental and neurobiological variables in the development of personality disturbance is presented.
Sources of biomass feedstock variability and the potential impact on biofuels production
Williams, C. Luke; Westover, Tyler L.; Emerson, Rachel M.; ...
2015-11-23
In this study, terrestrial lignocellulosic biomass has the potential to be a carbon neutral and domestic source of fuels and chemicals. However, the innate variability of biomass resources, such as herbaceous and woody materials, and the inconsistency within a single resource due to disparate growth and harvesting conditions, presents challenges for downstream processes which often require materials that are physically and chemically consistent. Intrinsic biomass characteristics, including moisture content, carbohydrate and ash compositions, bulk density, and particle size/shape distributions are highly variable and can impact the economics of transforming biomass into value-added products. For instance, ash content increases by an order of magnitude between woody and herbaceous feedstocks (from ~0.5 to 5 %, respectively) while lignin content drops by a factor of two (from ~30 to 15 %, respectively). This increase in ash and reduction in lignin leads to biofuel conversion consequences, such as reduced pyrolysis oil yields for herbaceous products as compared to woody material. In this review, the sources of variability for key biomass characteristics are presented for multiple types of biomass. Additionally, this review investigates the major impacts of the variability in biomass composition on four conversion processes: fermentation, hydrothermal liquefaction, pyrolysis, and direct combustion. Finally, future research processes aimed at reducing the detrimental impacts of biomass variability on conversion to fuels and chemicals are proposed.
Schoellhamer, D.H.
2002-01-01
Singular spectrum analysis for time series with missing data (SSAM) was used to reconstruct components of a 6-yr time series of suspended-sediment concentration (SSC) from San Francisco Bay. Data were collected every 15 min and the time series contained missing values that primarily were due to sensor fouling. SSAM was applied in a sequential manner to calculate reconstructed components with time scales of variability that ranged from tidal to annual. Physical processes that controlled SSC and their contribution to the total variance of SSC were (1) diurnal, semidiurnal, and other higher frequency tidal constituents (24%), (2) semimonthly tidal cycles (21%), (3) monthly tidal cycles (19%), (4) semiannual tidal cycles (12%), and (5) annual pulses of sediment caused by freshwater inflow, deposition, and subsequent wind-wave resuspension (13%). Of the total variance 89% was explained and subtidal variability (65%) was greater than tidal variability (24%). Processes at subtidal time scales accounted for more variance of SSC than processes at tidal time scales because sediment accumulated in the water column and the supply of easily erodible bed sediment increased during periods of increased subtidal energy. This large range of time scales that each contained significant variability of SSC and associated contaminants can confound design of sampling programs and interpretation of resulting data.
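For readers unfamiliar with the technique, the sketch below shows plain singular spectrum analysis on a complete series (embedding, SVD and diagonal averaging); the missing-data extension (SSAM) used in the study and the sequential grouping into tidal and subtidal components are not reproduced.

```python
import numpy as np

def ssa_reconstruct(x, window, groups):
    """Decompose a series with basic SSA and reconstruct grouped components.

    x: 1-D array without gaps; window: embedding window length;
    groups: list of lists of eigentriple indices (e.g. [[0], [1, 2]]).
    """
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for g in groups:
        elem = sum(s[i] * np.outer(u[:, i], vt[i]) for i in g)
        # Anti-diagonal averaging maps the elementary matrix back to a series.
        comp = np.array([elem[::-1, :].diagonal(j - (window - 1)).mean()
                         for j in range(n)])
        components.append(comp)
    return components
```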
Goode, C; LeRoy, J; Allen, D G
2007-01-01
This study reports on a multivariate analysis of the moving bed biofilm reactor (MBBR) wastewater treatment system at a Canadian pulp mill. The modelling approach involved a data overview by principal component analysis (PCA) followed by partial least squares (PLS) modelling with the objective of explaining and predicting changes in the BOD output of the reactor. Over two years of data with 87 process measurements were used to build the models. Variables were collected from the MBBR control scheme as well as upstream in the bleach plant and in digestion. To account for process dynamics, a variable lagging approach was used for variables with significant temporal correlations. It was found that wood type pulped at the mill was a significant variable governing reactor performance. Other important variables included flow parameters, faults in the temperature or pH control of the reactor, and some potential indirect indicators of biomass activity (residual nitrogen and pH out). The most predictive model was found to have an RMSEP value of 606 kgBOD/d, representing a 14.5% average error. This was a good fit, given the measurement error of the BOD test. Overall, the statistical approach was effective in describing and predicting MBBR treatment performance.
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
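The value-of-information comparison (choosing one design before uncertainty is resolved versus choosing after) can be illustrated with a toy Monte Carlo calculation; the cost model, designs and price distribution below are purely hypothetical and are unrelated to the IGCC/NOx case study.

```python
import numpy as np

rng = np.random.default_rng(2)

def annual_cost(design, price_factor):
    """Hypothetical annualised cost of a control design under an uncertain input."""
    capital = {"A": 1.0e6, "B": 3.0e6}[design] / 20.0
    operating = {"A": 8.0e5, "B": 7.0e5}[design] * price_factor
    return capital + operating

scenarios = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)  # uncertain price factor

# Design fixed before the uncertainty is resolved: best single design on average.
fixed = min(annual_cost(d, scenarios).mean() for d in ("A", "B"))

# Design chosen after the uncertainty is resolved: the cheaper design in every scenario.
resolved = np.minimum(annual_cost("A", scenarios),
                      annual_cost("B", scenarios)).mean()

print(f"expected annual value of resolving the uncertainty: {fixed - resolved:,.0f}")
```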
Cognitive Performance and Heart Rate Variability: The Influence of Fitness Level
Luque-Casado, Antonio; Zabala, Mikel; Morales, Esther; Mateo-March, Manuel; Sanabria, Daniel
2013-01-01
In the present study, we investigated the relation between cognitive performance and heart rate variability as a function of fitness level. We measured the effect of three cognitive tasks (the psychomotor vigilance task, a temporal orienting task, and a duration discrimination task) on the heart rate variability of two groups of participants: a high-fit group and a low-fit group. Two major novel findings emerged from this study. First, the lowest values of heart rate variability were found during performance of the duration discrimination task, compared to the other two tasks. Second, the results showed a decrement in heart rate variability as a function of the time on task, although only in the low-fit group. Moreover, the high-fit group showed overall faster reaction times than the low-fit group in the psychomotor vigilance task, while there were not significant differences in performance between the two groups of participants in the other two cognitive tasks. In sum, our results highlighted the influence of cognitive processing on heart rate variability. Importantly, both behavioral and physiological results suggested that the main benefit obtained as a result of fitness level appeared to be associated with processes involving sustained attention. PMID:23437276
Cultural and Cognitive Considerations in the Prevention of American Indian Adolescent Suicide.
ERIC Educational Resources Information Center
La Framboise, Teresa D.; Big Foot, Delores Subia
1988-01-01
Describes cultural considerations associated with American Indian adolescents coping within a transactional, cognitive-phenomenological framework. Discusses select cultural values and beliefs of American Indians associated with death in terms of person variables and situational demand characteristics that interplay in coping process. Suggests…
NASA Astrophysics Data System (ADS)
Matyasovszky, István; Makra, László; Csépe, Zoltán; Deák, Áron József; Pál-Molnár, Elemér; Fülöp, Andrea; Tusnády, Gábor
2015-09-01
The paper examines the sensitivity of daily airborne Ambrosia (ragweed) pollen levels of a current pollen season not only to daily values of meteorological variables during this season but also to past meteorological conditions. The results, obtained from a 19-year data set comprising daily ragweed pollen counts and ten daily meteorological variables, are evaluated with special focus on the interactions between the phyto-physiological processes and the meteorological elements. Instead of a Pearson correlation, which measures the strength of the linear relationship between two random variables, a generalised correlation that measures every kind of relationship between random vectors was used. These generalised correlations were calculated between arrays of daily values of the ten meteorological elements and the array of daily ragweed pollen concentrations during the current pollen season. For the current pollen season, the six most important variables are two temperature variables (mean and minimum temperatures), two humidity variables (dew point depression and rainfall) and two variables characterising the mixing of the air (wind speed and the height of the planetary boundary layer). The six most important meteorological variables before the current pollen season comprise four temperature variables (mean, maximum and minimum temperatures and soil temperature) and two variables that characterise large-scale weather patterns (sea level pressure and the height of the planetary boundary layer). Key periods of the past meteorological variables before the current pollen season have been identified. The importance of this kind of analysis is that knowledge of past meteorological conditions may contribute to a better prediction of the upcoming pollen season.
Neurons in the Frontal Lobe Encode the Value of Multiple Decision Variables
Kennerley, Steven W.; Dahmubed, Aspandiar F.; Lara, Antonio H.; Wallis, Jonathan D.
2009-01-01
A central question in behavioral science is how we select among choice alternatives to obtain consistently the most beneficial outcomes. Three variables are particularly important when making a decision: the potential payoff, the probability of success, and the cost in terms of time and effort. A key brain region in decision making is the frontal cortex as damage here impairs the ability to make optimal choices across a range of decision types. We simultaneously recorded the activity of multiple single neurons in the frontal cortex while subjects made choices involving the three aforementioned decision variables. This enabled us to contrast the relative contribution of the anterior cingulate cortex (ACC), the orbito-frontal cortex, and the lateral prefrontal cortex to the decision-making process. Neurons in all three areas encoded value relating to choices involving probability, payoff, or cost manipulations. However, the most significant signals were in the ACC, where neurons encoded multiplexed representations of the three different decision variables. This supports the notion that the ACC is an important component of the neural circuitry underlying optimal decision making. PMID:18752411
Indian Monsoon Rainfall Variability During the Common Era: Implications on the Ancient Civilization
NASA Astrophysics Data System (ADS)
Pothuri, D.
2017-12-01
Indian monsoon rainfall variability during the last two millennia was reconstructed using δ18Ow values from a sediment core in the Krishna-Godavari Basin. Higher δ18Ow values during the Dark Age Cold Period (DACP) (1550 to 1250 years BP) and the Little Ice Age (LIA) (700 to 200 years BP) represent less Indian monsoon rainfall. In contrast, lower δ18Ow values during the Medieval Warm Period (MWP) (1200 to 800 years BP) and a major portion of the Roman Warm Period (RWP) (2000 to 1550 years BP) document more rainfall in the Indian subcontinent. A significant correlation exists between the Bay of Bengal (BoB) sea surface temperature (SST) and the Indian monsoon proxy (i.e. δ18Ow), which suggests that (i) the forcing mechanism of the Indian monsoon rainfall variability during the last two millennia was controlled by the thermal contrast between the Indian Ocean and the Asian landmass, and (ii) the evaporation processes in the BoB and the associated SST are strongly coupled with the Indian monsoon variability over the last two millennia.
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by the GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why the GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
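For reference, the GEV-GPD link discussed in the comment is conventionally written as follows (generic notation, not necessarily the authors' parameterisation):

```latex
% Standard asymptotic link between block maxima and threshold exceedances.
\[
  \text{GEV: } G(z) = \exp\!\left\{-\left[1 + \xi\,\frac{z-\mu}{\sigma}\right]^{-1/\xi}\right\},
  \qquad
  \text{GPD: } H(y) = 1 - \left(1 + \frac{\xi\, y}{\tilde{\sigma}}\right)^{-1/\xi},
  \qquad
  \tilde{\sigma} = \sigma + \xi\,(u - \mu).
\]
% If block maxima are approximately GEV(mu, sigma, xi), then excesses
% y = x - u over a high threshold u are approximately GPD with the same
% shape xi -- an approximation that holds only asymptotically, which is
% the point stressed for the truncated exponential distribution (TED).
```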
Image Processing for Binarization Enhancement via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor)
2009-01-01
A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
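A minimal sketch of the general idea, assuming simple clipped-linear memberships and an illustrative gain (the patented method's actual membership functions and inference rules are not reproduced here):

```python
import numpy as np

def fuzzy_enhance(img, gain=40.0):
    """Push each pixel toward or away from its local consensus before binarization.
    `img` is a 2-D uint8 gray-scale array; membership shapes and gain are assumptions."""
    f = img.astype(float)
    # Mean of the 8 surrounding pixels (edges handled by padding with border values)
    p = np.pad(f, 1, mode="edge")
    neigh = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
             p[1:-1, :-2] +               p[1:-1, 2:] +
             p[2:, :-2]  + p[2:, 1:-1]  + p[2:, 2:]) / 8.0
    diff = f - neigh                               # local deviation of the pixel
    mu_dark = np.clip(-diff / 128.0, 0.0, 1.0)     # membership: darker than neighbours
    mu_bright = np.clip(diff / 128.0, 0.0, 1.0)    # membership: brighter than neighbours
    # Defuzzified correction: sharpen the deviation to reduce ambiguity
    corrected = f + gain * (mu_bright - mu_dark)
    return np.clip(corrected, 0, 255).astype(np.uint8)

def binarize(img, threshold=128):
    return (img >= threshold).astype(np.uint8) * 255

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = np.clip(rng.normal(120, 30, size=(64, 64)), 0, 255).astype(np.uint8)
    print(binarize(fuzzy_enhance(noisy)).mean())
```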
Defense Waste Processing Facility Nitric- Glycolic Flowsheet Chemical Process Cell Chemistry: Part 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamecnik, J.; Edwards, T.
The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by Savannah River National Laboratory (SRNL) from 2011 to 2016. The goal of this work was to develop empirical correlation models to predict these values from measurable variables of the chemical process, so that these quantities could be predicted a priori from the sludge or simulant composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the work on these correlations based on the aforementioned data. Previous work on these correlations was documented in a technical report covering data from 2011-2015; this current report supersedes that previous report. Further refinement of the models as additional data are collected is recommended.
Fuzzy simulation in concurrent engineering
NASA Technical Reports Server (NTRS)
Kraslawski, A.; Nystrom, L.
1992-01-01
Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes where the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. The examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
On two diffusion neuronal models with multiplicative noise: The mean first-passage time properties
NASA Astrophysics Data System (ADS)
D'Onofrio, G.; Lansky, P.; Pirozzi, E.
2018-04-01
Two diffusion processes with multiplicative noise, able to model the changes in the neuronal membrane depolarization between two consecutive spikes of a single neuron, are considered and compared. The processes have the same deterministic part but different stochastic components. The differences in the state-dependent variabilities, their asymptotic distributions, and the properties of the first-passage time across a constant threshold are investigated. Closed form expressions for the mean of the first-passage time of both processes are derived and applied to determine the role played by the parameters involved in the model. It is shown that for some values of the input parameters, the higher variability, given by the second moment, does not imply shorter mean first-passage time. The reason for that can be found in the complete shape of the stationary distribution of the two processes. Applications outside neuroscience are also mentioned.
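A hedged numerical sketch of how such mean first-passage times can be estimated by Euler-Maruyama simulation; the drift and the square-root (Feller/CIR-type) multiplicative noise below are illustrative choices, not the specific pair of models compared in the paper:

```python
import numpy as np

def mean_first_passage_time(mu, tau, sigma, x0=0.0, threshold=10.0,
                            dt=1e-3, t_max=50.0, n_paths=2000, seed=0):
    """Euler-Maruyama estimate of the mean first-passage time across a constant
    threshold for dX = (-X/tau + mu) dt + sigma * sqrt(max(X, 0)) dW
    (a CIR/Feller-type multiplicative-noise model, used only as an illustration)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    fpt = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    while t < t_max and alive.any():
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + (-x / tau + mu) * dt + sigma * np.sqrt(np.maximum(x, 0.0)) * dw
        t += dt
        crossed = alive & (x >= threshold)
        fpt[crossed] = t
        alive &= ~crossed
    return np.nanmean(fpt), np.mean(~alive)   # mean FPT and fraction of paths that crossed

if __name__ == "__main__":
    # Compare how the mean first-passage time changes with the noise amplitude.
    for sigma in (0.5, 1.0, 2.0):
        mfpt, frac = mean_first_passage_time(mu=12.0, tau=1.0, sigma=sigma)
        print(f"sigma={sigma}: mean FPT ~ {mfpt:.3f} s over {frac:.0%} of paths")
```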
Water isotope variability across single rainfall events in the tropical Pacific
NASA Astrophysics Data System (ADS)
Cobb, K. M.; Moerman, J. W.; Ellis, S. A.; Bennett, L.; Bosma, C.; Hitt, N. T.
2017-12-01
Water isotopologues provide a powerful diagnostic tool for probing the dynamical processes involved in the initiation and evolution of tropical convective events, yet water isotope observations rarely meet the temporal resolution required to resolve such processes. Here we present timeseries of rainfall oxygen and hydrogen isotopologues across over 30 individual convective events sampled at 1- to 5-minute intervals at both terrestrial (Gunung Mulu National Park, 4N, 115E) and maritime (Kiritimati Island, 2N, 157W) sites located in the equatorial Pacific. The sites are the loci of significant paleoclimate research that employ water isotopologues to reconstruct a variety of climatic parameters of interest over the last century, in the case of coral d18O, to hundreds of thousands of years before present, in the case of stalagmite d18O. As such, there is significant scientific value in refining our understanding of water isotope controls at these particular sites. Our results illustrate large, short-term excursions in water isotope values that far exceed the signals recovered in daily timeseries of rainfall isotopologues from the sites, illustrating the fundamental contribution of mesoscale processes in driving rainfall isotope variability. That said, the cross-event profiles exhibit a broad range of trajectories, even for events collected at the same time of day on adjoining days. Profiles collected at different phases of the 2015-2017 strong El Nino-Southern Oscillation cycle also exhibit appreciable variability. We compare our observations to hypothetical profiles from a 1-dimensional model of each rainfall event, as well as to output from 4-dimensional isotope-equipped, ocean-atmosphere coupled models of rainfall isotope variability in the tropical Pacific. We discuss the implications of our findings for the interpretation of water isotope-based reconstructions of hydroclimate in the tropics.
2018-01-01
Objective To study the performance of multifocal-visual-evoked-potential (mfVEP) signals filtered using empirical mode decomposition (EMD) in discriminating, based on amplitude, between control and multiple sclerosis (MS) patient groups, and to reduce variability in interocular latency in control subjects. Methods MfVEP signals were obtained from controls, clinically definitive MS and MS-risk progression patients (radiologically isolated syndrome (RIS) and clinically isolated syndrome (CIS)). The conventional method of processing mfVEPs consists of using a 1–35 Hz bandpass frequency filter (XDFT). The EMD algorithm was used to decompose the XDFT signals into several intrinsic mode functions (IMFs). This signal processing was assessed by computing the amplitudes and latencies of the XDFT and IMF signals (XEMD). The amplitudes from the full visual field and from ring 5 (9.8–15° eccentricity) were studied. The discrimination index was calculated between controls and patients. Interocular latency values were computed from the XDFT and XEMD signals in a control database to study variability. Results Using the amplitude of the mfVEP signals filtered with EMD (XEMD) obtains higher discrimination index values than the conventional method when control, MS-risk progression (RIS and CIS) and MS subjects are studied. The lowest variability in interocular latency computations from the control patient database was obtained by comparing the XEMD signals with the XDFT signals. Even better results (amplitude discrimination and latency variability) were obtained in ring 5 (9.8–15° eccentricity of the visual field). Conclusions Filtering mfVEP signals using the EMD algorithm will result in better identification of subjects at risk of developing MS and better accuracy in latency studies. This could be applied to assess visual cortex activity in MS diagnosis and evolution studies. PMID:29677200
NASA Astrophysics Data System (ADS)
Harding, J. W.; Small, J. W.; James, D. A.
2007-12-01
Recent analysis of elite-level half-pipe snowboard competition has revealed a number of sport specific key performance variables (KPVs) that correlate well to score. Information on these variables is difficult to acquire and analyse, relying on collection and labour intensive manual post processing of video data. This paper presents the use of inertial sensors as a user-friendly alternative and subsequently implements signal processing routines to ultimately provide automated, sport specific feedback to coaches and athletes. The author has recently shown that the key performance variables (KPVs) of total air-time (TAT) and average degree of rotation (ADR) achieved during elite half-pipe snowboarding competition show strong correlation with an athlete's subjectively judged score. Utilising Micro-Electro-Mechanical System (MEMS) sensors (tri-axial accelerometers) this paper demonstrates that air-time (AT) achieved during half-pipe snowboarding can be detected and calculated accurately using basic signal processing techniques. Characterisation of the variations in aerial acrobatic manoeuvres and the associated calculation of exact degree of rotation (DR) achieved is a likely extension of this research. The technique developed uses a two-pass method to detect locations of half-pipe snowboard runs using power density in the frequency domain and subsequently utilises a threshold based search algorithm in the time domain to calculate air-times associated with individual aerial acrobatic manoeuvres. This technique correctly identified the air-times of 100 percent of aerial acrobatic manoeuvres within each half-pipe snowboarding run (n = 92 aerial acrobatic manoeuvres from 4 subjects) and displayed a very strong correlation with a video based reference standard for air-time calculation (r = 0.78 +/- 0.08; p value < 0.0001; SEE = 0.08 ×/÷ 1.16; mean bias = -0.03 +/- 0.02s) (value +/- or ×/÷ 95% CL).
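A rough sketch of the two-pass idea, assuming a resultant-acceleration signal in g, a 1 s analysis window, and illustrative band and threshold values (the published routine's exact parameters are not reproduced here):

```python
import numpy as np

def detect_airtimes(acc, fs=100.0, band=(1.0, 20.0),
                    run_power_thresh=1.0, freefall_thresh=0.3):
    """Two-pass sketch on the resultant accelerometer signal `acc` (in g).
    Pass 1: flag 1-s windows whose band-limited power marks an active run.
    Pass 2: inside runs, free fall appears as |acc| near 0 g; contiguous
    sub-threshold samples give the air-time of each aerial manoeuvre."""
    n_win = int(fs)                                  # 1-second windows
    n = len(acc) // n_win * n_win
    mag = np.abs(acc[:n]).reshape(-1, n_win)
    # Pass 1: power density in the band of interest, window by window
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    spec = np.abs(np.fft.rfft(mag - mag.mean(axis=1, keepdims=True), axis=1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    active = spec[:, in_band].mean(axis=1) > run_power_thresh
    # Pass 2: threshold search in the time domain within active windows
    flat = np.repeat(active, n_win)
    low = (np.abs(acc[:n]) < freefall_thresh) & flat
    edges = np.flatnonzero(np.diff(low.astype(int)))
    airtimes = [(stop - start) / fs for start, stop in zip(edges[::2], edges[1::2])]
    return [t for t in airtimes if t > 0.2]          # ignore sub-0.2 s blips

if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0, 20, 1 / fs)
    acc = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)      # riding vibration around 1 g
    acc[500:580] = 0.05                              # a 0.8 s aerial (free fall)
    print(detect_airtimes(acc, fs))
```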
NASA Astrophysics Data System (ADS)
Susanty, W.; Helwani, Z.; Zulfansyah
2018-04-01
Oil palm frond can be used as an alternative energy source via the torrefaction process. Torrefaction is a treatment process that converts biomass into solid fuel by heating it within the temperature range of 200-300°C in an inert environment. This research aims to produce solid fuel through torrefaction and to study the effect of process variable interactions. Torrefaction of oil palm frond was carried out in a fixed-bed horizontal reactor under operating conditions of temperature (225-275 °C), time (15-45 minutes) and nitrogen flow rate (50-150 ml/min). The measured responses were the calorific value and the proximate analysis (moisture, ash, volatile matter and fixed carbon). The results were processed using Design Expert v7.0.0. The calorific value obtained was 17,700-19,600 kJ/kg, and the proximate analysis gave a moisture range of 3-4%, an ash range of 1.5-4%, volatile matter of 45-55% and fixed carbon of 37-46%. The factor that most significantly affected the responses was temperature, followed by time and nitrogen flow rate.
Navaee-Ardeh, S; Mohammadi-Rovshandeh, J; Pourjoozi, M
2004-03-01
A normalized design was used to examine the influence of independent variables (alcohol concentration, cooking time and temperature) in the catalytic soda-ethanol pulping of rice straw on various mechanical properties (breaking length, burst and tear index, and folding endurance) of paper sheets obtained from each pulping process. An equation for each dependent variable as a function of the cooking variables (independent variables) was obtained by multiple non-linear regression using the least-squares method in MATLAB to develop empirical models. The ranges of alcohol concentration, cooking time and temperature were 40-65% (w/w), 150-180 min and 195-210 degrees C, respectively. Three-dimensional graphs of the dependent variables were also plotted versus the independent variables. The optimum values of breaking length, burst index, tear index and folding endurance were 4683.7 (m), 30.99 (kN/g), 376.93 (mN m2/g) and 27.31, respectively. A short cooking time (150 min), high ethanol concentration (65%) and high temperature (210 degrees C) could be used to produce papers with suitable burst and tear indices, whereas for papers with the best breaking length and folding endurance a low temperature (195 degrees C) was desirable. Differences between the optimum values of the dependent variables obtained by the normalized design and the experimental data were less than 20%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Stephen V.
This document outlines a statistical framework for establishing a shelf-life program for components whose performance is measured by the value of a continuous variable such as voltage or function time. The approach applies to both single measurement devices and repeated measurement devices, although additional process control charts may be useful in the case of repeated measurements. The approach is to choose a sample size that protects the margin associated with a particular variable over the life of the component. Deviations from expected performance of the measured variable are detected prior to the complete loss of margin. This ensures the reliability of the component over its lifetime.
Automatic identification of variables in epidemiological datasets using logic regression.
Lorenz, Matthias W; Abdi, Negin Ashtiani; Scheckenbach, Frank; Pflug, Anja; Bülbül, Alpaslan; Catapano, Alberico L; Agewall, Stefan; Ezhov, Marat; Bots, Michiel L; Kiechl, Stefan; Orth, Andreas
2017-04-13
For an individual participant data (IPD) meta-analysis, multiple datasets must be transformed into a consistent format, e.g. using uniform variable names. When large numbers of datasets have to be processed, this can be a time-consuming and error-prone task. Automated or semi-automated identification of variables can help to reduce the workload and improve data quality. For semi-automation, high sensitivity in the recognition of matching variables is particularly important, because it allows creating software which, for a target variable, presents a choice of source variables from which a user can choose the matching one, with only a low risk of having missed a correct source variable. For each variable in a set of target variables, a number of simple rules were manually created. With logic regression, an optimal Boolean combination of these rules was searched for every target variable, using a random subset of a large database of epidemiological and clinical cohort data (construction subset). In a second subset of this database (validation subset), these optimal combination rules were validated. In the construction sample, 41 target variables were allocated on average with a positive predictive value (PPV) of 34% and a negative predictive value (NPV) of 95%. In the validation sample, PPV was 33%, whereas NPV remained at 94%. In the construction sample, PPV was 50% or less for 63% of all variables; in the validation sample, for 71% of all variables. We demonstrated that the application of logic regression to a complex data management task in large epidemiological IPD meta-analyses is feasible. However, the performance of the algorithm is poor, which may require backup strategies.
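The following sketch illustrates the flavour of the approach with invented rules and metadata: candidate Boolean combinations of simple rules are scored by PPV/NPV on a small "construction subset". It is a stand-in for, not a reimplementation of, the logic regression package used in the study.

```python
import itertools

# Simple binary rules applied to a candidate source-variable's metadata.
# These example rules target "systolic blood pressure" and are illustrative only.
RULES = {
    "name_has_sys":    lambda v: "sys" in v["name"].lower(),
    "label_has_press": lambda v: "press" in v["label"].lower(),
    "unit_is_mmhg":    lambda v: v["unit"].lower() == "mmhg",
    "range_plausible": lambda v: 60 <= v["min"] and v["max"] <= 260,
}

def evaluate(combo, variable):
    """A combination here is a conjunction of rules; full logic regression
    would also search over 'or' and 'not' operators."""
    return all(RULES[r](variable) for r in combo)

def ppv_npv(combo, variables, truth):
    tp = fp = tn = fn = 0
    for v, is_match in zip(variables, truth):
        pred = evaluate(combo, v)
        tp += pred and is_match
        fp += pred and not is_match
        tn += (not pred) and (not is_match)
        fn += (not pred) and is_match
    ppv = tp / (tp + fp) if tp + fp else 0.0
    npv = tn / (tn + fn) if tn + fn else 0.0
    return ppv, npv

# Construction subset: candidate variables with known ground truth.
variables = [
    {"name": "SYSBP",   "label": "systolic pressure",  "unit": "mmHg", "min": 80, "max": 220},
    {"name": "DIABP",   "label": "diastolic pressure", "unit": "mmHg", "min": 40, "max": 130},
    {"name": "sys_err", "label": "system error code",  "unit": "",     "min": 0,  "max": 9},
]
truth = [True, False, False]

best = max((combo for n in range(1, len(RULES) + 1)
            for combo in itertools.combinations(RULES, n)),
           key=lambda c: ppv_npv(c, variables, truth)[0])
print(best, ppv_npv(best, variables, truth))
```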
Verma, Nupur; Hippe, Daniel S; Robinson, Jeffrey D
2016-12-01
Peer review is an important and necessary part of radiology. There are several options to perform the peer review process. This study examines the reproducibility of peer review by comparing two scoring systems. American Board of Radiology-certified radiologists from various practice environments and subspecialties were recruited to score deidentified examinations on a web-based PACS with two scoring systems, RADPEER and Cleareview. Quantitative analysis of the scores was performed for interrater agreement. Interobserver variability was high for both the RADPEER and Cleareview scoring systems. The interobserver correlations (kappa values) were 0.17-0.23 for RADPEER and 0.10-0.16 for Cleareview. Interrater correlation was not statistically significantly different when comparing the RADPEER and Cleareview systems (p = 0.07-0.27). The kappa values were low for the Cleareview subscores when we evaluated for missed findings (0.26), satisfaction of search (0.17), and inadequate interpretation of findings (0.12). Our study confirms the previous report of low interobserver correlation when using the peer review process. There was low interobserver agreement seen when using both the RADPEER and the Cleareview scoring systems.
Breaking the trade-off between efficiency and service.
Frei, Frances X
2006-11-01
For manufacturers, customers are the open wallets at the end of the supply chain. But for most service businesses, they are key inputs to the production process. Customers introduce tremendous variability to that process, but they also complain about any lack of consistency and don't care about the company's profit agenda. Managing customer-introduced variability, the author argues, is a central challenge for service companies. The first step is to diagnose which type of variability is causing mischief: Customers may arrive at different times, request different kinds of service, possess different capabilities, make varying degrees of effort, and have different personal preferences. Should companies accommodate variability or reduce it? Accommodation often involves asking employees to compensate for the variations among customers--a potentially costly solution. Reduction often means offering a limited menu of options, which may drive customers away. Some companies have learned to deal with customer-introduced variability without damaging either their operating environments or customers' service experiences. Starbucks, for example, handles capability variability among its customers by teaching them the correct ordering protocol. Dell deals with arrival and request variability in its high-end server business by outsourcing customer service while staying in close touch with customers to discuss their needs and assess their experiences with third-party providers. The effective management of variability often requires a company to influence customers' behavior. Managers attempting that kind of intervention can follow a three-step process: diagnosing the behavioral problem, designing an operating role for customers that creates new value for both parties, and testing and refining approaches for influencing behavior.
NASA Astrophysics Data System (ADS)
Musabbikhah, Saptoadi, H.; Subarmono, Wibisono, M. A.
2016-03-01
Fossil fuels have dominated Indonesia's energy needs for the past few years. The increasing scarcity of oil and gas from non-renewable sources results in an energy crisis. This is a serious problem for society that demands an immediate solution. One effort that can be taken to overcome this problem is the utilization and processing of biomass as renewable energy by means of carbonization, so that it can be used as a qualified raw material for briquette production. In this research, coconut shell is used as the carbonized waste. The research aims at improving the quality of coconut shell as the material for making briquettes as a cheap and eco-friendly renewable energy source, with the ultimate aim of decreasing dependence on oil and gas. The research variables are drying temperature, drying time, carbonization temperature and carbonization time. The dependent variable is the calorific value of the coconut shell. The method used in this research is the Taguchi method. The results show that these variables make a significant contribution to the increase of the coconut shell's calorific value: the higher these variables, the higher the calorific value. Before carbonization, the average calorific value of the coconut shell is 4,667 cal/g, and a significant increase is notable after carbonization. The optimum parameter setting is A2B3C3D3, which means a drying temperature of 105 °C, a drying time of 24 hours, a carbonization temperature of 650 °C and a carbonization time of 120 minutes. The resulting average calorific value is approximately 7,744 cal/g. Therefore, the increase in the coconut shell's calorific value after carbonization is 3,077 cal/g, or approximately 60%. The charcoal of carbonized coconut shell meets the SNI requirement, and thus it can be used as a raw material for making briquettes which can eventually be used as a cheap and environmentally friendly fuel.
Gutiérrez, M C; Martín, M A; Serrano, A; Chica, A F
2015-03-15
In this study, the evolution of odour concentration (ouE/m³ STP) emitted during the pile composting of the organic fraction of municipal solid waste (OFMSW) was monitored by dynamic olfactometry. Physical-chemical variables as well as the respirometric variables were also analysed. The aim of this work was twofold. The first was to determine the relationship between odour and traditional variables to determine if dynamic olfactometry is a feasible and adequate technique for monitoring an aerobic stabilisation process (composting). Second, the composting process odour impact on surrounding areas was simulated by a dispersion model. The results showed that the decrease of odour concentration, total organic carbon and respirometric variables was similar (around 96, 96 and 98%, respectively). The highest odour emission (5224 ouE/m³) was reached in parallel with the highest microbiological activity (SOUR and OD20 values of 25 mgO2/gVS·h and 70 mgO2/gVS, respectively). The validity of monitoring odour emissions during composting in combination with traditional and respirometric variables was demonstrated by the adequate correlation obtained between the variables. Moreover, the quantification of odour emissions by dynamic olfactometry and the subsequent application of the dispersion model permitted making an initial prediction of the impact of odorous emissions on the population. Finally, the determination of CO2 and CH4 emissions allowed the influence of the composting process on carbon reservoirs and global warming to be evaluated. Copyright © 2014 Elsevier Ltd. All rights reserved.
Erva, Rajeswara Reddy; Goswami, Ajgebi Nath; Suman, Priyanka; Vedanabhatla, Ravali; Rajulapati, Satish Babu
2017-03-16
The culture conditions and nutritional rations influencing the production of the extracellular antileukemic enzyme by the novel Enterobacter aerogenes KCTC2190/MTCC111 were optimized in shake-flask culture. Process variables such as pH, temperature, incubation time, carbon and nitrogen sources, inducer concentration, and inoculum size were taken into account. In the present study, the highest enzyme activity achieved by the traditional one-variable-at-a-time method was 7.6 IU/mL, a 2.6-fold increase compared to the initial value. Further, the L-asparaginase production was optimized using response surface methodology, and the validated experimental result at the optimized process variables gave 18.35 IU/mL of L-asparaginase activity, which is 2.4 times higher than that obtained with the traditional optimization approach. The study establishes E. aerogenes MTCC111 as a potent and promising bacterial source for a high yield of the antileukemic drug.
Zoccolotti, Pierluigi; De Luca, Maria; Di Filippo, Gloria; Marinelli, Chiara Valeria; Spinelli, Donatella
2018-06-01
We reanalyzed previous experiments based on lexical-decision and reading-aloud tasks in children with dyslexia and control children and tested the prediction of the difference engine model (DEM) that mean condition reaction times (RTs) and standard deviations (SDs) would be linearly related (Myerson et al., 2003). Then we evaluated the slope and the intercept with the x-axis of these linear functions in comparison with previously reported values (i.e., slope of about 0.30 and intercept of about 300 ms). In the case of lexical decision, the parameters were close to these values; by contrast, in the case of reading aloud, a much steeper slope (0.66) and a greater intercept (482.6 ms) were found. Therefore, interindividual variability grows at a much faster rate as a function of condition difficulty for reading than for lexical-decision tasks (or for other tasks reported in the literature). According to the DEM, the slope of the regression that relates means and SDs indicates the degree of correlation among the durations of the stages of processing. We propose that the need for a close coupling between orthographic and phonological processing in reading is what drives the particularly strong relationship between performance and interindividual variability that we observed in reading tasks.
Optimal Experimental Design for Model Discrimination
ERIC Educational Resources Information Center
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
Cultures-of-Use and Morphologies of Communicative Action
ERIC Educational Resources Information Center
Thorne, Steven L.
2016-01-01
In this article I revisit the cultures-of-use conceptual framework--that technologies, as forms and processes comprising human culture, mediate and assume variable meanings, values, and conventionalized functions for different communities (Thorne, 2003). I trace the antecedent arc of investigation and serendipitous encounters that led to the 2003…
ERIC Educational Resources Information Center
Rachlin, Howard
2006-01-01
In general, if a variable can be expressed as a function of its own maximum value, that function may be called a discount function. Delay discounting and probability discounting are commonly studied in psychology, but memory, matching, and economic utility also may be viewed as discounting processes. When they are so viewed, the discount function…
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
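A minimal sketch of the adjustment loop described in the claim, with an invented stand-in plant, gain, and coupling between the two control parameters:

```python
def control_step(p1, p2, performance, target, gain=0.05):
    """One adjustment: move p1 along the performance error toward the target,
    then make the corresponding (coupled) correction to p2."""
    error = target - performance
    p1_new = p1 + gain * error
    p2_new = p2 - 0.5 * (p1_new - p1)     # assumed coupling between the two parameters
    return p1_new, p2_new

def simulated_engine(p1, p2, op_condition):
    """Stand-in plant: performance depends on both parameters and the operating point."""
    return 10.0 + 2.0 * p1 + 1.0 * p2 + 0.1 * op_condition

if __name__ == "__main__":
    op_condition = 20.0                   # e.g. detected load/speed condition
    p1, p2 = 1.0, 0.5                     # initial values from a lookup on conditions
    target = 18.0
    for step in range(50):
        perf = simulated_engine(p1, p2, op_condition)
        p1, p2 = control_step(p1, p2, perf, target)
    print(round(perf, 3), round(p1, 3), round(p2, 3))
```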
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization process is an important aspect in the natural product extractions. Herein, an alternative approach is proposed for the optimization in extraction, namely, the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines the Latin hypercube sampling, the feasible range of independent variables, the Monte Carlo simulation, and the threshold criteria of response variables. The GLUE method is tested in three different techniques including the ultrasound, the microwave, and the supercritical CO 2 assisted extractions utilizing the data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with unlimited number of independent and response variables; consider combined multiple threshold criteria, which is subjective depending on the target of the investigation for response variables; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
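A generic sketch of an EKF cycle with preemptive constraining (here, simple clipping of the state estimate to physical bounds); the toy two-state model stands in for the IGCC plant dynamics and is not the patented estimator:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R, x_min, x_max):
    """One EKF cycle with a preemptive constraining step on the state estimate.
    f/h are the process/measurement models, F/H their Jacobians at the estimate."""
    x_pred = f(x, u)                                  # predict
    P_pred = F(x, u) @ P @ F(x, u).T + Q
    x_pred = np.clip(x_pred, x_min, x_max)            # preemptively constrain
    Hk = H(x_pred)                                    # measurement update
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return np.clip(x_new, x_min, x_max), P_new        # constrain again after correction

if __name__ == "__main__":
    # Toy 2-state plant: x = [temperature, syngas fraction]; only a nonlinear
    # function of both is measured. All matrices are illustrative assumptions.
    f = lambda x, u: np.array([0.95 * x[0] + 0.1 * u, 0.9 * x[1] + 0.005 * x[0]])
    F = lambda x, u: np.array([[0.95, 0.0], [0.005, 0.9]])
    h = lambda x: np.array([x[0] * (1.0 + 0.5 * x[1])])
    H = lambda x: np.array([[1.0 + 0.5 * x[1], 0.5 * x[0]]])
    Q, R = np.diag([0.01, 0.001]), np.array([[0.1]])
    x_min, x_max = np.array([0.0, 0.0]), np.array([100.0, 1.0])

    x, P = np.array([50.0, 0.5]), np.eye(2)
    x_true = np.array([60.0, 0.4])
    rng = np.random.default_rng(0)
    for _ in range(100):
        u = 5.0
        x_true = f(x_true, u) + rng.normal(0, [0.1, 0.01])
        z = h(x_true) + rng.normal(0, 0.3, size=1)
        x, P = ekf_step(x, P, u, z, f, h, F, H, Q, R, x_min, x_max)
    print(np.round(x, 3), np.round(x_true, 3))
```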
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while the changes in the other variables are speculative, highlighting the need for improved metrology and awareness.
Which Measures of Online Control Are Least Sensitive to Offline Processes?
de Grosbois, John; Tremblay, Luc
2018-02-28
A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.
Manenti, Diego R; Módenes, Aparecido N; Soares, Petrick A; Boaventura, Rui A R; Palácio, Soraya M; Borba, Fernando H; Espinoza-Quiñones, Fernando R; Bergamasco, Rosângela; Vilar, Vítor J P
2015-01-01
In this work, the application of an iron electrode-based electrocoagulation (EC) process on the treatment of a real textile wastewater (RTW) was investigated. In order to perform an efficient integration of the EC process with a biological oxidation one, an enhancement in the biodegradability and low toxicity of final compounds was sought. Optimal values of EC reactor operation parameters (pH, current density and electrolysis time) were achieved by applying a full factorial 3³ experimental design. Biodegradability and toxicity assays were performed on treated RTW samples obtained at the optimal values of: pH of the solution (7.0), current density (142.9 A m⁻²) and different electrolysis times. As response variables for the biodegradability and toxicity assessment, the Zahn-Wellens test (Dt), the ratio values of dissolved organic carbon (DOC) relative to low-molecular-weight carboxylates anions (LMCA) and lethal concentration 50 (LC50) were used. According to the Dt, the DOC/LMCA ratio and LC50, an electrolysis time of 15 min along with the optimal values of pH and current density were suggested as suitable for a next stage of treatment based on a biological oxidation process.
NASA Astrophysics Data System (ADS)
Zhu, Yun-Mei; Lu, X. X.; Zhou, Yue
2007-02-01
An artificial neural network (ANN) was used to model the monthly suspended sediment flux in the Longchuanjiang River, the Upper Yangtze Catchment, China. The suspended sediment flux was related to the average rainfall, temperature, rainfall intensity and water discharge. It is demonstrated that ANN is capable of modeling the monthly suspended sediment flux with fairly good accuracy when proper variables and their lag effect on the suspended sediment flux are used as inputs. Compared with multiple linear regression and power relation models, ANN can generate a better fit under the same data requirement. In addition, ANN can provide more reasonable predictions for extremely high or low values, because of the distributed information processing system and the nonlinear transformation involved. Compared with the ANNs that use the values of the dependent variable at previous time steps as inputs, the ANNs established in this research with only climate variables have an advantage because they can be used to assess hydrological responses to climate change.
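A sketch of the modelling setup using scikit-learn and synthetic data (not the Longchuanjiang record): climate inputs plus their one-month lags, with a log transform so extreme high and low fluxes are handled on a multiplicative scale.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_months = 240
rain = rng.gamma(2.0, 40.0, n_months)            # monthly rainfall (mm)
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n_months) / 12) + rng.normal(0, 1, n_months)
discharge = 0.5 * rain + rng.gamma(2.0, 10.0, n_months)
# Synthetic sediment flux: nonlinear in discharge and temperature
flux = 1e3 * (discharge / 100) ** 1.8 * (1 + 0.01 * np.maximum(temp, 0)) \
       + rng.gamma(1.0, 50.0, n_months)

# Inputs: current values plus a one-month lag to capture the delayed response
X = np.column_stack([rain[1:], temp[1:], discharge[1:],
                     rain[:-1], temp[:-1], discharge[:-1]])
y = np.log1p(flux[1:])                           # log transform stabilises extremes

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X[:200], y[:200])
pred = np.expm1(model.predict(X[200:]))
obs = flux[1:][200:]
print("relative RMSE:", np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean())
```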
NLEdit: A generic graphical user interface for Fortran programs
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
1994-01-01
NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
Mathematical model of rod oscillations with account of material relaxation behaviour
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kudinov, V. A.; Eremin, A. V.; Zhukov, V. V.
2018-03-01
Taking into account the bounded propagation velocity of strains and deformations in the formula given by Hooke's law, the authors have obtained the differential equation of damped rod oscillations that includes the first and the third time derivatives of displacement as well as the mixed derivative (with respect to the space and time variables). Study of its precise analytical solution, found by means of separation of variables, has shown that rod recovery after being disturbed is accompanied by low-amplitude damped oscillations that occur at the start time and only within the range of positive displacement values. The oscillation amplitude decreases as the relaxation factor increases. The rod recovers virtually without an oscillating process both in the limit and at sufficiently high values of the relaxation factor.
Evaluation of computing systems using functionals of a Stochastic process
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Wu, L. T.
1980-01-01
An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, which are generally referred to as functionals of a Markov process, were considered. A closed form solution of performability for the case where performance is identified with the minimum value of a functional was developed.
2017-04-01
ERDC/CHL TR-17-5, Coastal Field Data Collection Program: Collection, Processing, and Accuracy of Mobile Terrestrial Lidar Survey Data in the Coastal Environment. Nicholas J. Spore and Katherine L. Brodie, Field Research Facility, U.S. Army Engineer Research and Development ... value to a mobile lidar survey may misrepresent some of the spatially variable error throughout the survey, and further work should incorporate full ...
Crew Interface Analysis: Selected Articles on Space Human Factors Research, 1987 - 1991
1993-07-01
... recognitions to that distractor) suggest that the perceptual type of the graph has a strong representation in memory. We found that both training with ... processing strategy. If my goal were to compare the value of variables or (possibly) to compare a trend, I would select a perceptual strategy. If ... be needed to determine specific processing models for different questions using the perceptual strategy. In addition, predictions about the memory ...
Surface laser marking optimization using an experimental design approach
NASA Astrophysics Data System (ADS)
Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.
2017-04-01
Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τpulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using a Design of Experiment (DOE) method: Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for the given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.
NASA Astrophysics Data System (ADS)
Olson, E. J.; Dodd, J. P.
2015-12-01
Previous studies have documented that tree ring oxygen and hydrogen isotopes primarily reflect source water; however, biosynthetic fractionation processes modify this signal and can have a varied response to environmental conditions. The degree to which source water contributes to δ2H and δ18O values of plant α-cellulose is species-specific and modern calibration studies are necessary. Here we present a calibration data set of P. tamarugo α-cellulose δ2H and δ18O values from the Atacama Desert in Northern Chile. P. tamarugo trees are endemic to the region and have adapted to the extremely arid environment where average annual precipitation is < 5mm/yr. This modern isotope chronology has been constructed from living P. tamarugo trees (n=12) from the Pampa del Tamarugal Basin in the northern Atacama. Generally, the tree-ring α-cellulose δ18O values are poorly correlated with meteorological data from coastal stations (i.e. Iquique); however, there is good agreement between regional groundwater depth and α-cellulose δ18O values. Most notably, average α-cellulose δ18O values increase by >2 ‰ over the past 20 years associated with a ~1.1 m lowering of the local groundwater table throughout the area. The correlation between a-cellulose isotope values and hydrologic conditions in modern times provides a baseline for interpretation of tree-ring isotope chronologies from the past 9.5 kya. A high-resolution Holocene (1.8-9.1 kya) age record of Prosopis sp. tree ring α-cellulose δ18O values provides a proxy for climatic and hydrologic conditions. During the early Holocene δ18O values range from 31 to 35‰ (2σ=0.58‰), while during the late Holocene values are much more variable (27.4 to 41‰; 2σ=2.64‰). Anthropogenic demand on local water sources is the most significant environmental factor affecting the variation in modern α-cellulose δ18O values; however, climate induced changes in regional water availability are the dominant driver of variability in the paleo-record. Increased variability in α-cellulose δ18O values in the late Holocene most likely indicates a reduction in annual recharge and an increase in episodic flood events driven by ENSO and other modes of atmospheric variability.
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, i.e. σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
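A simulation sketch of the generalized recursion y_t = σ_t ε_t with σ_t² = a + b|y_{t-1}|^q, together with a crude empirical check of the tail shape; the parameter values are illustrative, and the fitted exponent can be compared with the paper's β = 2/3 for q = 1:

```python
import numpy as np

def simulate_arch_abs(a=0.2, b=0.8, q=1.0, n=200_000, seed=0):
    """y_t = sigma_t * eps_t with sigma_t^2 = a + b * |y_{t-1}|^q."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = 0.0
    for t in range(1, n):
        sigma2 = a + b * np.abs(y[t - 1]) ** q
        y[t] = np.sqrt(sigma2) * eps[t]
    return y

if __name__ == "__main__":
    y = simulate_arch_abs(q=1.0)
    ys = np.sort(np.abs(y))
    tail = 1.0 - np.arange(1, len(ys) + 1) / len(ys)   # empirical P(|y| > x)
    sel = (ys > 1.0) & (tail > 1e-4)
    # For a stretched-exponential tail P(|y| > x) ~ exp(-c x^beta),
    # log(-log(tail)) is roughly linear in log(x) with slope beta.
    beta = np.polyfit(np.log(ys[sel]), np.log(-np.log(tail[sel])), 1)[0]
    print("crude finite-sample estimate of the stretching exponent:", round(beta, 2))
```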
Heart rate variability changes in business process outsourcing employees working in shifts.
Kunikullaya, Kirthana U; Kirthi, Suresh K; Venkatesh, D; Goturu, Jaisri
2010-10-31
Irregular and poor quality sleep is common in business process outsourcing (BPO) employees due to continuous shift working. The influence of this on cardiac autonomic activity was investigated by the spectral analysis of heart rate variability (HRV). 36 night shift BPO employees (working from 22:00 to 06:00h) and 36 age and sex matched day shift BPO employees (working from 08:00 to 16:00h) were recruited for the study. A five-minute electrocardiogram (ECG) was recorded in all subjects. Heart rate variability was analyzed by fast Fourier transformation using RMS Vagus HRV software. The results were analyzed using the Mann-Whitney U test, Student t-test and Wilcoxon signed rank test, and were expressed as mean ± SD. Sleepiness was significantly higher among night shift workers as measured by the Epworth Sleepiness Scale (p<0.001). Night shift BPO employees were found to have a trend towards lower values of vagal parameters - HF power (ms²) - and higher values of sympathovagal parameters like LF power (ms²) and the LF/HF power (%), suggesting decreased vagal activity and sympathetic overactivity, when compared to day shift employees. However, HRV parameters did not vary significantly between the day shift employees' and the night shift workers' baseline values, nor within the night shift group. Night shift working increased the heart rate and shifted the sympathovagal balance towards sympathetic dominance and decreased vagal parameters of HRV. This is an indicator of unfavorable change in the myocardial system, and thus shows increased risk of cardiovascular disease among the night shift employees.
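A sketch of the standard frequency-domain HRV computation (evenly resampled tachogram, Welch PSD, LF 0.04-0.15 Hz and HF 0.15-0.40 Hz band powers); this is not the RMS Vagus pipeline, and the RR series below is synthetic:

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_frequency_domain(rr_ms, resample_hz=4.0):
    """LF/HF band powers (ms^2) from an RR-interval series in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                      # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / resample_hz)   # evenly resampled tachogram
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)
    f, psd = welch(rr_even - rr_even.mean(), fs=resample_hz, nperseg=256)
    def band_power(lo, hi):
        sel = (f >= lo) & (f < hi)
        return np.trapz(psd[sel], f[sel])
    lf = band_power(0.04, 0.15)
    hf = band_power(0.15, 0.40)
    return lf, hf, lf / hf

if __name__ == "__main__":
    # Synthetic 5-minute recording: mean RR of 800 ms with respiratory (0.25 Hz)
    # and low-frequency (0.10 Hz) modulation plus beat-to-beat noise.
    rng = np.random.default_rng(0)
    n_beats = 400
    t_beat = np.cumsum(np.full(n_beats, 0.8))
    rr = 800 + 30 * np.sin(2 * np.pi * 0.25 * t_beat) \
             + 40 * np.sin(2 * np.pi * 0.10 * t_beat) + rng.normal(0, 5, n_beats)
    lf, hf, ratio = hrv_frequency_domain(rr)
    print(f"LF={lf:.0f} ms^2  HF={hf:.0f} ms^2  LF/HF={ratio:.2f}")
```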
Byrd, Darrin; Christopfel, Rebecca; Arabasz, Grae; Catana, Ciprian; Karp, Joel; Lodge, Martin A; Laymon, Charles; Moros, Eduardo G; Budzevich, Mikalai; Nehmeh, Sadek; Scheuermann, Joshua; Sunderland, John; Zhang, Jun; Kinahan, Paul
2018-01-01
Positron emission tomography (PET) is a quantitative imaging modality, but the computation of standardized uptake values (SUVs) requires several instruments to be correctly calibrated. Variability in the calibration process may lead to unreliable quantitation. Sealed source kits containing traceable amounts of [Formula: see text] were used to measure signal stability for 19 PET scanners at nine hospitals in the National Cancer Institute's Quantitative Imaging Network. Repeated measurements of the sources were performed on PET scanners and in dose calibrators. The measured scanner and dose calibrator signal biases were used to compute the bias in SUVs at multiple time points for each site over a 14-month period. Estimation of absolute SUV accuracy was confounded by bias from the solid phantoms' physical properties. On average, the intrascanner coefficient of variation for SUV measurements was 3.5%. Over the entire length of the study, single-scanner SUV values varied over a range of 11%. Dose calibrator bias was not correlated with scanner bias. Calibration factors from the image metadata were nearly as variable as scanner signal, and were correlated with signal for many scanners. SUVs often showed low intrascanner variability between successive measurements but were also prone to shifts in apparent bias, possibly in part due to scanner recalibrations that are part of regular scanner quality control. Biases of key factors in the computation of SUVs were not correlated and their temporal variations did not cancel out of the computation. Long-lived sources and image metadata may provide a check on the recalibration process.
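For context, the body-weight-normalised SUV that the study's bias analysis refers to is conventionally defined as follows (generic notation; individual sites may normalise by lean body mass or body surface area instead):

```latex
% Body-weight-normalized SUV (generic definition).
\[
  \mathrm{SUV} \;=\; \frac{C_{\mathrm{img}}(t)}
       {\,A_{\mathrm{inj}}\, e^{-\lambda t} \,/\, m_{\mathrm{body}}\,}
\]
% C_img(t): activity concentration measured in the image at scan time (Bq/mL),
% A_inj: injected activity measured in the dose calibrator (Bq),
% lambda: decay constant of the radionuclide, t: time from injection to scan,
% m_body: patient body mass (g). A scanner calibration bias enters through
% C_img and a dose-calibrator bias through A_inj, so the two do not cancel
% unless they are correlated -- which the study found they were not.
```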
NASA Astrophysics Data System (ADS)
Min, Jae-Ho; Lee, Gyeo-Re; Lee, Jin-Kwan; Moon, Sang Heup; Kim, Chang-Koo
2004-05-01
The dependences of etch rates on the angle of ions incident on the substrate surface in four plasma/substrate systems that constitute the advanced Bosch process were investigated using a Faraday cage designed for the accurate control of the ion-incident angle. The four systems, established by combining discharge gases and substrates, were a SF6/poly-Si, a SF6/fluorocarbon polymer, an O2/fluorocarbon polymer, and a C4F8/Si. In the case of SF6/poly-Si, the normalized etch rates (NERs), defined as the etch rates normalized by the rate on the horizontal surface, were higher at all angles than values predicted from the cosine of the ion-incident angle. This characteristic curve shape was independent of changes in process variables including the source power and bias voltage. In contrast to the former case, the NERs for the O2/polymer decreased and eventually reached much lower values than the cosine values at angles between 30° and 70° when the source power was increased and the bias voltage was decreased. On the other hand, the NERs for the SF6/polymer showed a weak dependence on the process variables. In the case of C4F8/Si, which is used in the Bosch process for depositing a fluorocarbon layer on the substrate surface, the deposition rate varied with the ion-incident angle, showing an S-shaped curve. These characteristic deposition rate curves, which were highly dependent on the process conditions, could be divided into four distinct regions: a Si sputtering region, an ion-suppressed polymer deposition region, an ion-enhanced polymer deposition region, and an ion-free polymer deposition region. Based on these characteristic angular dependences of the etch (or deposition) rates in the individual systems, ideal process conditions for obtaining an anisotropic etch profile in the advanced Bosch process are proposed.
Linear solvation energy relationships: "rule of thumb" for estimation of variable values
Hickey, James P.; Passino-Reader, Dora R.
1991-01-01
For the linear solvation energy relationship (LSER), values are listed for each of the variables (Vi/100, π*, βm, αm) for fundamental organic structures and functional groups. We give guidelines to estimate LSER variable values quickly for a vast array of possible organic compounds such as those found in the environment. The difficulty in generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER for hazard evaluation of environmental contaminants.
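For context, the general Kamlet-Taft-style LSER into which these variables enter is conventionally written as below; the fitted coefficients are property-specific and are not part of this paper's "rule of thumb":

```latex
% General Kamlet--Taft style LSER; the coefficients (c, m, s, b, a) are fitted
% to each property of interest and are not given in this paper.
\[
  \log SP \;=\; c \;+\; m\,\frac{V_i}{100} \;+\; s\,\pi^{*} \;+\; b\,\beta_m \;+\; a\,\alpha_m
\]
% SP: the solubility-related property being correlated (e.g. aqueous solubility,
% partition coefficient, toxicity endpoint); V_i/100: intrinsic (van der Waals)
% molar volume term; pi*: dipolarity/polarizability; beta_m, alpha_m: hydrogen-bond
% acceptor and donor strengths. The estimation rules in the paper supply values
% of these four variables from molecular fragments.
```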
Newgard, Craig D.; Zive, Dana; Jui, Jonathan; Weathers, Cody; Daya, Mohamud
2011-01-01
Objectives To compare case ascertainment, agreement, validity, and missing values for clinical research data obtained, processed, and linked electronically from electronic health records (EHR), compared to “manual” data processing and record abstraction in a cohort of out-of-hospital trauma patients. Methods This was a secondary analysis of two sets of data collected for a prospective, population-based, out-of-hospital trauma cohort evaluated by 10 emergency medical services (EMS) agencies transporting to 16 hospitals, from January 1, 2006 through October 2, 2007. Eighteen clinical, operational, procedural, and outcome variables were collected and processed separately and independently using two parallel data processing strategies, by personnel blinded to patients in the other group. The electronic approach included electronic health record data exports from EMS agencies, reformatting and probabilistic linkage to outcomes from local trauma registries and state discharge databases. The manual data processing approach included chart matching, data abstraction, and data entry by a trained abstractor. Descriptive statistics, measures of agreement, and validity were used to compare the two approaches to data processing. Results During the 21-month period, 418 patients underwent both data processing methods and formed the primary cohort. Agreement was good to excellent (kappa 0.76 to 0.97; intraclass correlation coefficient 0.49 to 0.97), with exact agreement in 67% to 99% of cases, and a median difference of zero for all continuous and ordinal variables. The proportions of missing out-of-hospital values were similar between the two approaches, although electronic processing generated more missing outcomes (87 out of 418, 21%, 95% CI = 17% to 25%) than the manual approach (11 out of 418, 3%, 95% CI = 1% to 5%). Case ascertainment of eligible injured patients was greater using electronic methods (n = 3,008) compared to manual methods (n = 629). Conclusions In this sample of out-of-hospital trauma patients, an all-electronic data processing strategy identified more patients and generated values with good agreement and validity compared to traditional data collection and processing methods. PMID:22320373
Perturbing engine performance measurements to determine optimal engine control settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan
Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable, causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
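The abstract describes an iterative perturb-and-adjust loop. A minimal sketch of that idea follows, assuming a simple proportional adjustment and a toy engine-response function; the gains, perturbation size, and plant model are hypothetical stand-ins, not the patented implementation.

```python
# Minimal sketch of the perturb-and-adjust loop described above: artificially perturb a
# measured engine performance variable and nudge a control parameter until the
# performance variable approaches its target. Everything here is a made-up stand-in.

def measure_performance(control_param):
    # Hypothetical engine response: performance peaks at a control setting of 0.6.
    return 1.0 - (control_param - 0.6) ** 2

def optimize(control_param, target, gain=0.5, perturbation=0.02, iterations=50):
    for _ in range(iterations):
        perf = measure_performance(control_param)
        perturbed = perf + perturbation            # artificially perturb the measurement
        error = target - perturbed                 # distance from the target performance
        control_param += gain * error              # adjust the control parameter
        if abs(error) < 1e-3:
            break
    return control_param, measure_performance(control_param)

param, perf = optimize(control_param=0.2, target=0.95)
print(f"control parameter = {param:.3f}, performance = {perf:.3f}")
```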
Natural language processing to ascertain two key variables from operative reports in ophthalmology.
Liu, Liyan; Shorstein, Neal H; Amsden, Laura B; Herrinton, Lisa J
2017-04-01
Antibiotic prophylaxis is critical to ophthalmology and other surgical specialties. We performed natural language processing (NLP) of 743 838 operative notes recorded for 315 246 surgeries to ascertain two variables needed to study the comparative effectiveness of antibiotic prophylaxis in cataract surgery. The first key variable was an exposure variable, intracameral antibiotic injection. The second was an intraoperative complication, posterior capsular rupture (PCR), which functioned as a potential confounder. To help other researchers use NLP in their settings, we describe our NLP protocol and lessons learned. For each of the two variables, we used SAS Text Miner and other SAS text-processing modules with a training set of 10 000 (1.3%) operative notes to develop a lexicon. The lexica identified misspellings, abbreviations, and negations, and linked words into concepts (e.g. "antibiotic" linked with "injection"). We confirmed the NLP tools by iteratively obtaining random samples of 2000 (0.3%) notes, with replacement. The NLP tools identified approximately 60 000 intracameral antibiotic injections and 3500 cases of PCR. The positive and negative predictive values for intracameral antibiotic injection exceeded 99%. For the intraoperative complication, they exceeded 94%. NLP was a valid and feasible method for obtaining critical variables needed for a research study of surgical safety. These NLP tools were intended for use in the study sample. Use with external datasets or future datasets in our own setting would require further testing. Copyright © 2017 John Wiley & Sons, Ltd.
Natural Language Processing to Ascertain Two Key Variables from Operative Reports in Ophthalmology
Liu, Liyan; Shorstein, Neal H.; Amsden, Laura B; Herrinton, Lisa J.
2016-01-01
Purpose Antibiotic prophylaxis is critical to ophthalmology and other surgical specialties. We performed natural language processing (NLP) of 743,838 operative notes recorded for 315,246 surgeries to ascertain two variables needed to study the comparative effectiveness of antibiotic prophylaxis in cataract surgery. The first key variable was an exposure variable, intracameral antibiotic injection. The second was an intraoperative complication, posterior capsular rupture (PCR), that functioned as a potential confounder. To help other researchers use NLP in their settings, we describe our NLP protocol and lessons learned. Methods For each of the two variables, we used SAS Text Miner and other SAS text-processing modules with a training set of 10,000 (1.3%) operative notes to develop a lexicon. The lexica identified misspellings, abbreviations, and negations, and linked words into concepts (e.g., “antibiotic” linked with “injection”). We confirmed the NLP tools by iteratively obtaining random samples of 2,000 (0.3%) notes, with replacement. Results The NLP tools identified approximately 60,000 intracameral antibiotic injections and 3,500 cases of PCR. The positive and negative predictive values for intracameral antibiotic injection exceeded 99%. For the intraoperative complication, they exceeded 94%. Conclusion NLP was a valid and feasible method for obtaining critical variables needed for a research study of surgical safety. These NLP tools were intended for use in the study sample. Use with external datasets or future datasets in our own setting would require further testing. PMID:28052483
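As a loose illustration of the lexicon-plus-negation approach described in the two records above, the following sketch re-implements the idea in Python with regular expressions; the concept terms, negation cues, and example notes are invented, and this is not the authors' SAS Text Miner workflow.

```python
import re

# Minimal sketch of lexicon-based concept detection with simple negation handling,
# in the spirit of the workflow described above. All terms and notes are illustrative.

INJECTION_TERMS = r"(intracameral|intra-cameral)\s+(moxifloxacin|cefuroxime|antibiotic)s?\s+(injection|injected)"
NEGATION_TERMS = r"\b(no|not|without|denies|declined)\b"

def has_intracameral_injection(note: str) -> bool:
    text = note.lower()
    match = re.search(INJECTION_TERMS, text)
    if not match:
        return False
    # Crude negation window: look a few words before the matched concept.
    window = text[max(0, match.start() - 40):match.start()]
    return re.search(NEGATION_TERMS, window) is None

notes = [
    "Intracameral antibiotic injection given at end of case.",
    "Patient declined intracameral antibiotic injection.",
]
for n in notes:
    print(has_intracameral_injection(n), "-", n)
```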
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations used in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control charts were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher stated that such charts are not suitable if the same control limits as in the case of independent variables are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the process residuals; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared with the standard deviation of the autocorrelated process. In this paper we examine the EWMA mean for an autocorrelated process, derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
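A minimal sketch of the EWMA statistic and its conventional (independence-based) control limits is shown below; the smoothing constant, the AR(1) data, and the multiplier L = 3 are illustrative choices, not values from the paper. The limits computed here are the classical ones whose unsuitability for autocorrelated data motivates the modification discussed above.

```python
import numpy as np

# Minimal sketch of an EWMA control chart applied to simulated autocorrelated data.
rng = np.random.default_rng(0)

# Simulate an AR(1) process (autocorrelated observations).
phi, n = 0.6, 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

lam, target, sigma = 0.2, 0.0, x.std(ddof=1)

# EWMA statistic: z_t = lam * x_t + (1 - lam) * z_{t-1}
z = np.zeros(n)
z[0] = target
for t in range(1, n):
    z[t] = lam * x[t] + (1 - lam) * z[t - 1]

# Asymptotic control limits of the classical EWMA chart (assumes independent data, L = 3).
L = 3
limit = L * sigma * np.sqrt(lam / (2 - lam))
out_of_control = np.where(np.abs(z - target) > limit)[0]
print(f"control limits = ±{limit:.2f}, signals at samples: {out_of_control[:10]}")
```

With autocorrelated input, this classical chart tends to signal excessively, which is exactly the behaviour that motivates the modified limits examined in the paper.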
Jiménez, L; Angulo, V; Caparrós, S; Ariza, J
2007-12-01
The influence of operational variables in the pulping of vine shoots with ethanolamine [viz. temperature (155-185 degrees C), cooking time (30-90 min) and ethanolamine concentration (50-70% v/v)] on the properties of the resulting pulp (viz. yield, kappa index, viscosity and drainability) was studied. A central composite factorial design was used in conjunction with the software BMDP and ANFIS Edit Matlab 6.5 to develop polynomial and fuzzy neural models that reproduced the experimental results of the dependent variables with errors less than 10%. Both types of models are therefore effective for simulating the ethanolamine pulping process. Based on the proposed equations, the best choice is to use values of the operational variables resulting in near-optimal pulp properties while saving energy and immobilized capital on industrial facilities by using lower temperatures and shorter processing times. One combination leading to near-optimal properties with reduced costs is a temperature of 180 degrees C and an ethanolamine concentration of 60% for 60 min, which yields pulp with a viscosity 6.13% lower than the maximum value (932.8 ml/g) and a drainability 5.49% lower than the maximum value (71 °SR).
Reichman, Rivka; Shirazi, Elham; Colliver, Donald G; Pennell, Kelly G
2017-02-22
Vapor intrusion (VI) is well-known to be difficult to characterize because indoor air (IA) concentrations exhibit considerable temporal and spatial variability in homes throughout impacted communities. To overcome this and other limitations, most VI science has focused on subsurface processes; however, there is a need to understand the role of aboveground processes, especially building operation, in the context of VI exposure risks. This tutorial review focuses on building air exchange rates (AERs) and provides a review of the literature on building AERs to inform decision making at VI sites. Commonly referenced AER values used by VI regulators and practitioners do not account for the variability in AER values that have been published in indoor air quality studies. The information presented herein highlights that seasonal differences, short-term weather conditions, home age and air conditioning status, which are well known to influence AERs, are also likely to influence IA concentrations at VI sites. Results of a 3D VI model in combination with relevant AER values reveal that IA concentrations can vary more than one order of magnitude due to air conditioning status and one order of magnitude due to house age. Collectively, the data presented strongly support the need to consider AERs when making decisions at VI sites.
Process optimization and analysis of microwave assisted extraction of pectin from dragon fruit peel.
Thirugnanasambandham, K; Sivakumar, V; Prakash Maran, J
2014-11-04
Microwave assisted extraction (MAE) technique was employed for the extraction of pectin from dragon fruit peel. The extracting parameters were optimized by using four-variable-three-level Box-Behnken design (BBD) coupled with response surface methodology (RSM). RSM analysis indicated good correspondence between experimental and predicted values. 3D response surface plots were used to study the interactive effects of process variables on extraction of pectin. The optimum extraction conditions for the maximum yield of pectin were power of 400 W, temperature of 45 °C, extracting time of 20 min and solid-liquid ratio of 24 g/mL. Under these conditions, 7.5% of pectin was extracted. Copyright © 2014 Elsevier Ltd. All rights reserved.
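The response-surface step in studies like this one amounts to fitting a second-order polynomial to designed-experiment data. A minimal sketch follows, using two coded factors and synthetic yields rather than the actual Box-Behnken data from the study.

```python
import numpy as np

# Minimal sketch of fitting a second-order (quadratic) response-surface model to
# designed-experiment data by least squares. The factor settings and yields below
# are synthetic placeholders, not the Box-Behnken data from the study.
rng = np.random.default_rng(1)

# Two coded factors (e.g., power and time) at levels -1, 0, +1.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = (7.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 0] ** 2
     - 0.4 * X[:, 1] ** 2 + 0.2 * X[:, 0] * X[:, 1]
     + rng.normal(0, 0.05, len(X)))

# Design matrix for the quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2.
D = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
```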
Bannon, William
2015-04-01
Missing data typically refer to the absence of one or more values within a study variable(s) contained in a dataset. They often result from a study participant choosing not to provide a response to a survey item. In general, a greater number of missing values within a dataset reflects a greater challenge to the data analyst. However, if researchers are armed with just a few basic tools, they can quite effectively diagnose how serious the issue of missing data is within a dataset, as well as prescribe the most appropriate solution. Specifically, the keys to effectively assessing and treating missing data values within a dataset involve specifying how missing data will be defined in a study, assessing the amount of missing data, identifying the pattern of the missing data, and selecting the best way to treat the missing data values. I will touch on each of these processes and provide a brief illustration of how the validity of study findings is at great risk if missing data values are not treated effectively. ©2015 American Association of Nurse Practitioners.
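A minimal sketch of the first two steps described above (quantifying the amount and the pattern of missingness) is shown below, using a small fabricated survey dataset.

```python
import numpy as np
import pandas as pd

# Minimal sketch: define missing values, assess how much is missing per variable,
# and inspect which combinations of variables are missing together.
# The survey data are fabricated for illustration.
df = pd.DataFrame({
    "age": [34, 51, np.nan, 29, 47],
    "income": [52000, np.nan, np.nan, 41000, 60000],
    "satisfaction": [4, 5, 3, np.nan, 4],
})

# Amount of missing data per variable.
missing_share = df.isna().mean().sort_values(ascending=False)
print(missing_share)

# Simple look at the missingness pattern: which combinations of missing variables co-occur.
pattern_counts = df.isna().apply(tuple, axis=1).value_counts()
print(pattern_counts)
```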
Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María
2018-04-01
Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error, two conformance errors, and two completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement therapy, data were missing for one patient because the CRRT device was not connected to the CIS. Automatic generation of the minimum dataset and ICU quality indicators using ICU-DaMa is feasible. The discrepancies were identified and can be corrected by improving CIS ergonomics, training healthcare professionals in the culture of the quality of information, and using tools for detecting and correcting data errors. Copyright © 2018 Elsevier B.V. All rights reserved.
Ito, Vanessa Mayumi; Batistella, César Benedito; Maciel, Maria Regina Wolf; Maciel Filho, Rubens
2007-04-01
Soybean oil deodorized distillate is a product derived from the refining process and is rich in high value-added products. The recovery of these unsaponifiable fractions is of great commercial interest because, in many cases, the "valuable products" have vitamin activities, such as tocopherols (vitamin E), as well as anticarcinogenic properties, such as sterols. Molecular distillation has large potential for concentrating tocopherols, as it uses very low temperatures owing to the high vacuum and short operating times for separation, and it does not use solvents. It can therefore be used to separate and purify thermosensitive material such as vitamins. In this work, the molecular distillation process was applied for tocopherol concentration, and response surface methodology was used to optimize free fatty acid (FFA) elimination and tocopherol concentration in the residue and distillate streams, both of which are products of the molecular distiller. The independent variables studied were feed flow rate (F) and evaporator temperature (T), because they are very important process variables according to previous experience. The experimental range was 4-12 mL/min for F and 130-200 degrees C for T. It can be noted that feed flow rate and evaporator temperature are important operating variables in FFA elimination. To reduce the loss of FFA in the residue stream, the operating range should be shifted toward higher evaporator temperatures and lower feed flow rates; the distillate-to-feed (D/F) ratio likewise increases with increasing evaporator temperature and decreasing feed flow rate. A high concentration of tocopherols was obtained in the residue stream at low values of feed flow rate and high evaporator temperature. These results were obtained from experiments based on experimental design.
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2016-12-01
Irrigation has considerably interfered with hydrological processes in arid and semi-arid areas with heavy irrigated agriculture. With the increasing demand for food production and evaporative demand due to climate change, irrigation water consumption is expected to increase, which would aggravate the interference with hydrologic processes. Current studies focus on the impact of irrigation on the mean value of evapotranspiration (ET) at either the local or regional scale; however, how irrigation changes the variability of ET has not been well understood. This study analyzes the impact of extensive irrigation on ET variability in the Northern High Plains. We apply an ET variance decomposition framework developed in our previous work to quantify the effects of both climate and irrigation on ET variance in the Northern High Plains watersheds. Based on climate and water table observations, we assess the monthly ET variance and its components for two periods: the 1930s-1960s, with less irrigation development, and the 1970s-2010s, with more development. It is found that irrigation not only caused the well-recognized groundwater drawdown and stream depletion problems in the region, but also buffered ET variance from climatic fluctuations. In addition to increasing food productivity, irrigation also stabilizes crop yield by mitigating the impact of hydroclimatic variability. With complementary water supply from irrigation, ET often approaches the potential ET, and thus the observed ET variance is more attributed to climatic variables, especially temperature; meanwhile, irrigation causes significant seasonal fluctuations in groundwater storage. For sustainable water resources management in the Northern High Plains, we argue that both the mean value and the variance of ET should be considered together for the regulation of irrigation in this region.
Effect of multiplicative noise on stationary stochastic process
NASA Astrophysics Data System (ADS)
Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.
2018-03-01
An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.
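As a generic illustration of estimating statistical moments for a Langevin system with multiplicative noise, the sketch below integrates a simple equation with constant pumping using an Euler-Maruyama scheme; it uses Gaussian white noise and made-up parameters rather than the periodically controlled pumping analyzed in the paper.

```python
import numpy as np

# Minimal sketch of estimating statistical moments of a stationary Langevin system
# with multiplicative noise, integrated with an Euler-Maruyama scheme:
#   dx = (a - gamma * x) dt + sqrt(2 D) * x dW
# Constant pumping a and Gaussian white noise are illustrative simplifications;
# all parameter values are made up.
rng = np.random.default_rng(7)
a, gamma, D, dt, n_steps = 1.0, 1.0, 0.2, 1e-3, 100_000

x = 1.0
trace = np.empty(n_steps)
for i in range(n_steps):
    x += (a - gamma * x) * dt + np.sqrt(2 * D * dt) * x * rng.normal()
    trace[i] = x

stationary = trace[n_steps // 2:]            # discard the initial transient
print(f"mean = {stationary.mean():.3f}, second moment = {(stationary**2).mean():.3f}")
```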
Beyer, Thomas; Lassen, Martin L; Boellaard, Ronald; Delso, Gaspar; Yaqub, Maqsood; Sattler, Bernhard; Quick, Harald H
2016-02-01
We assess inter- and intra-subject variability of magnetic resonance (MR)-based attenuation maps (MRμMaps) of human subjects for state-of-the-art positron emission tomography (PET)/MR imaging systems. Four healthy male subjects underwent repeated MR imaging with a Siemens Biograph mMR, Philips Ingenuity TF and GE SIGNA PET/MR system using product-specific MR sequences and image processing algorithms for generating MRμMaps. Total lung volumes and mean attenuation values in nine thoracic reference regions were calculated. Linear regression was used for comparing lung volumes on MRμMaps. Intra- and inter-system variability was investigated using a mixed effects model. Intra-system variability was seen for the lung volume of some subjects (p = 0.29). Mean attenuation values across subjects were significantly different (p < 0.001) due to different segmentations of the trachea. Differences in the attenuation values caused noticeable intra-individual and inter-system differences that translated into a subsequent bias of the corrected PET activity values, as verified by independent simulations. Significant differences of MRμMaps generated for the same subjects but different PET/MR systems resulted in differences in attenuation correction factors, particularly in the thorax. These differences currently limit the quantitative use of PET/MR in multi-center imaging studies.
Notes from the field: the economic value chain in disease management organizations.
Fetterolf, Donald
2006-12-01
The disease management (DM) "value chain" is composed of a linear series of steps that include operational milestones in the development of knowledge, each stage evolving from the preceding one. As an adaptation of Michael Porter's "value chain" model, the process flow in DM moves along the following path: (1) data/information technology, (2) information generation, (3) analysis, (4) assessment/recommendations, (5) actionable customer plan, and (6) program assessment/reassessment. Each of these stages is managed as a major line of product operations within a DM company or health plan. Metrics around each of the key production variables create benchmark milestones, ongoing management insight into program effectiveness, and potential drivers for activity-based cost accounting pricing models. The value chain process must remain robust from early entry of data and information into the system, through the final presentation and recommendations for our clients if the program is to be effective. For individuals involved in the evaluation or review of DM programs, this framework is an excellent method to visualize the key components and sequence in the process. The value chain model is an excellent way to establish the value of a formal DM program and to create a consultancy relationship with a client involved in purchasing these complex services.
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process of predicting the values of maintenance time dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to the study in this report, such as MTBF, do not follow a linear path and normally fall within the distribution curves discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, shall translate into tremendous cost savings and improved availability all around.
A meta-analysis of research on science teacher education practices associated with inquiry strategy
NASA Astrophysics Data System (ADS)
Sweitzer, Gary L.; Anderson, Ronald D.
A meta-analysis was conducted of studies of teacher education having as measured outcomes one or more variables associated with inquiry teaching. Inquiry addresses those teacher behaviors that facilitate student acquisition of concepts and processes through strategies such as problem solving, uses of evidence, logical and analytical reasoning, clarification of values, and decision making. Studies which contained sufficient data for the calculation of an effect size were coded for 114 variables. These variables were divided into the following six major categories: study information and design characteristics, teacher and teacher trainee characteristics, student characteristics, treatment description, outcome description, and effect size calculation. A total of 68 studies resulting in 177 effect size calculations were coded. Mean effect sizes broken across selected variables were calculated.
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study of a launch-vehicle fuel tank and the structural design of a wing. The strengths and limitations of GA for launch vehicle design optimization are studied.
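A minimal sketch of a genetic algorithm over discrete design variables is shown below; the two variables (number of engines and a material index) and the fitness function are hypothetical stand-ins, not an actual launch-vehicle model.

```python
import random

# Minimal sketch of a genetic algorithm over discrete design variables.
# The objective is a made-up stand-in for a vehicle performance metric.
random.seed(0)
ENGINE_CHOICES = [1, 2, 3, 4, 5]
MATERIAL_CHOICES = [0, 1, 2]          # e.g., three candidate materials (hypothetical)

def fitness(design):
    engines, material = design
    # Hypothetical trade-off between thrust benefit and mass/cost penalty.
    return 10 * engines - 1.5 * engines ** 2 + [0.0, 2.0, 1.0][material]

def crossover(a, b):
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def mutate(design, rate=0.2):
    engines, material = design
    if random.random() < rate:
        engines = random.choice(ENGINE_CHOICES)
    if random.random() < rate:
        material = random.choice(MATERIAL_CHOICES)
    return engines, material

population = [(random.choice(ENGINE_CHOICES), random.choice(MATERIAL_CHOICES))
              for _ in range(20)]
for _ in range(30):                                    # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                          # fitness-based selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best design (engines, material):", max(population, key=fitness))
```

Note that only objective function values are used; no gradients are computed, which is why discrete choices pose no difficulty.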
Lumber Scanning System for Surface Defect Detection
D. Earl Kline; Y. Jason Hou; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1992-01-01
This paper describes research aimed at developing a machine vision technology to drive automated processes in the hardwood forest products manufacturing industry. An industrial-scale machine vision system has been designed to scan variable-size hardwood lumber for detecting important features that influence the grade and value of lumber such as knots, holes, wane,...
Post-Modeling Histogram Matching of Maps Produced Using Regression Trees
Andrew J. Lister; Tonya W. Lister
2006-01-01
Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
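A minimal sketch of post-modeling histogram matching follows: predicted values are mapped onto the quantiles of a reference distribution so that the variance compression typical of regression predictions is relieved. The arrays are synthetic and the function illustrates the general technique, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of histogram matching predicted map values to a reference distribution.
rng = np.random.default_rng(3)
reference = rng.gamma(shape=2.0, scale=10.0, size=5000)      # "observed" distribution
predicted = rng.normal(loc=20.0, scale=3.0, size=5000)       # variance-compressed predictions

def histogram_match(values, reference):
    ranks = values.argsort().argsort()                       # rank of each predicted value
    quantiles = (ranks + 0.5) / len(values)
    return np.quantile(reference, quantiles)                 # reference value at the same quantile

matched = histogram_match(predicted, reference)
print(f"predicted std = {predicted.std():.1f}, matched std = {matched.std():.1f}, "
      f"reference std = {reference.std():.1f}")
```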
Intercultural Competence in the Education Process
ERIC Educational Resources Information Center
Elosúa, Maria Rosa
2015-01-01
This paper offers a synthesis about the wide range of psychological variables which influence how a person adapts when they are in a new or unfamiliar cultural context. This study takes as its starting points the educational value of the intercultural experience and the need to study this experience from interdisciplinary and plural focuses that…
Analytic Networks in Music Task Definition.
ERIC Educational Resources Information Center
Piper, Richard M.
For a student to acquire the conceptual systems of a discipline, the designer must reflect that structure or analytic network in his curriculum. The four networks identified for music and used in the development of the Southwest Regional Laboratory (SWRL) Music Program are the variable-value, the whole-part, the process-stage, and the class-member…
Self-similarity in incompressible Navier-Stokes equations.
Ercan, Ali; Kavvas, M Levent
2015-12-01
The self-similarity conditions of the 3-dimensional (3D) incompressible Navier-Stokes equations are obtained by utilizing one-parameter Lie group of point scaling transformations. It is found that the scaling exponents of length dimensions in i = 1, 2, 3 coordinates in 3-dimensions are not arbitrary but equal for the self-similarity of 3D incompressible Navier-Stokes equations. It is also shown that the self-similarity in this particular flow process can be achieved in different time and space scales when the viscosity of the fluid is also scaled in addition to other flow variables. In other words, the self-similarity of Navier-Stokes equations is achievable under different fluid environments in the same or different gravity conditions. Self-similarity criteria due to initial and boundary conditions are also presented. Utilizing the proposed self-similarity conditions of the 3D hydrodynamic flow process, the value of a flow variable at a specified time and space can be scaled to a corresponding value in a self-similar domain at the corresponding time and space.
Approximate simulation model for analysis and optimization in engineering system design
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
Computational support of the engineering design process routinely requires mathematical models of behavior to inform designers of the system response to external stimuli. However, designers also need to know the effect of the changes in design variable values on the system behavior. For large engineering systems, the conventional way of evaluating these effects by repetitive simulation of behavior for perturbed variables is impractical because of excessive cost and inadequate accuracy. An alternative is described based on recently developed system sensitivity analysis that is combined with extrapolation to form a model of design. This design model is complementary to the model of behavior and capable of direct simulation of the effects of design variable changes.
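The idea of a sensitivity-based design model can be sketched as a first-order extrapolation from computed sensitivity derivatives, as below; the "behavior model" is a toy function standing in for an expensive simulation.

```python
import numpy as np

# Minimal sketch: approximate the system response to a design-variable change by
# extrapolating from sensitivity derivatives instead of re-running the full simulation.
def behavior(x):                      # expensive behavior simulation (toy stand-in)
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + 3 * x[1]])

def sensitivities(x, h=1e-6):         # finite-difference system sensitivity analysis
    base = behavior(x)
    J = np.empty((len(base), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (behavior(xp) - base) / h
    return base, J

x0 = np.array([1.0, 2.0])
y0, J = sensitivities(x0)

dx = np.array([0.1, -0.05])           # proposed design change
y_extrapolated = y0 + J @ dx          # design model: first-order extrapolation
print("extrapolated:", np.round(y_extrapolated, 4), "exact:", np.round(behavior(x0 + dx), 4))
```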
Boosting quantum annealer performance via sample persistence
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili
2017-07-01
We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, due to their being smaller and consisting of disconnected components. This approach significantly increases the success rate and number of observations of the best known energy value in samples obtained from the quantum annealer, when compared with calling the quantum annealer without using it, even when using fewer annealing cycles. Use of the method results in a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.
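A minimal sketch of the sample-persistence idea follows: variables whose values persist across samples are fixed, leaving a smaller residual problem. A random stand-in sampler and synthetic bias values are used here in place of a quantum annealer, and the persistence threshold is an arbitrary illustrative choice.

```python
import numpy as np

# Minimal sketch: fix binary variables whose value persists across many samples.
rng = np.random.default_rng(11)
n_vars, n_samples = 12, 50

# Stand-in "sampler": random spins biased so some variables are nearly always +1 or -1.
bias = rng.choice([-0.9, 0.0, 0.9], size=n_vars)
samples = np.where(rng.random((n_samples, n_vars)) < (0.5 + bias / 2), 1, -1)

persistence = samples.mean(axis=0)                 # average spin value per variable
threshold = 0.85
fixed = {i: int(np.sign(persistence[i]))
         for i in range(n_vars) if abs(persistence[i]) >= threshold}
free = [i for i in range(n_vars) if i not in fixed]

print("fixed variables:", fixed)
print("remaining free variables:", free)
```

The residual problem over the free variables is then passed back to the sampler (or solved classically), and the procedure can be applied iteratively as described in the abstract.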
Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
A stochastic-geometric model of soil variation in Pleistocene patterned ground
NASA Astrophysics Data System (ADS)
Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc
2013-04-01
In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better-modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
NASA Astrophysics Data System (ADS)
Korres, W.; Reichenau, T. G.; Schneider, K.
2013-08-01
Soil moisture is a key variable in hydrology, meteorology and agriculture. Soil moisture, and surface soil moisture in particular, is highly variable in space and time. Its spatial and temporal patterns in agricultural landscapes are affected by multiple natural (precipitation, soil, topography, etc.) and agro-economic (soil management, fertilization, etc.) factors, making it difficult to identify unequivocal cause and effect relationships between soil moisture and its driving variables. The goal of this study is to characterize and analyze the spatial and temporal patterns of surface soil moisture (top 20 cm) in an intensively used agricultural landscape (1100 km2 northern part of the Rur catchment, Western Germany) and to determine the dominant factors and underlying processes controlling these patterns. A second goal is to analyze the scaling behavior of surface soil moisture patterns in order to investigate how spatial scale affects spatial patterns. To achieve these goals, a dynamically coupled, process-based and spatially distributed ecohydrological model was used to analyze the key processes as well as their interactions and feedbacks. The model was validated for two growing seasons for the three main crops in the investigation area: Winter wheat, sugar beet, and maize. This yielded RMSE values for surface soil moisture between 1.8 and 7.8 vol.% and average RMSE values for all three crops of 0.27 kg m-2 for total aboveground biomass and 0.93 for green LAI. Large deviations of measured and modeled soil moisture can be explained by a change of the infiltration properties towards the end of the growing season, especially in maize fields. The validated model was used to generate daily surface soil moisture maps, serving as a basis for an autocorrelation analysis of spatial patterns and scale. Outside of the growing season, surface soil moisture patterns at all spatial scales depend mainly upon soil properties. Within the main growing season, larger scale patterns that are induced by soil properties are superimposed by the small scale land use pattern and the resulting small scale variability of evapotranspiration. However, this influence decreases at larger spatial scales. Most precipitation events cause temporarily higher surface soil moisture autocorrelation lengths at all spatial scales for a short time even beyond the autocorrelation lengths induced by soil properties. The relation of daily spatial variance to the spatial scale of the analysis fits a power law scaling function, with negative values of the scaling exponent, indicating a decrease in spatial variability with increasing spatial resolution. High evapotranspiration rates cause an increase in the small scale soil moisture variability, thus leading to large negative values of the scaling exponent. Utilizing a multiple regression analysis, we found that 53% of the variance of the scaling exponent can be explained by a combination of an independent LAI parameter and the antecedent precipitation.
Arabi, Simin; Sohrabi, Mahmoud Reza
2013-01-01
In this study, NZVI particles were prepared and studied for the removal of vat green 1 dye from aqueous solution. A four-factor central composite design (CCD) combined with response surface modeling (RSM) was employed to evaluate the combined effects of the variables, as well as to optimize the process, for maximizing the dye removal by the prepared NZVI, based on 30 different experimental data points obtained in a batch study. Four independent variables, viz. NZVI dose (0.1-0.9 g/L), pH (1.5-9.5), contact time (20-100 s), and initial dye concentration (10-50 mg/L), were transformed into coded values and a quadratic model was built to predict the responses. The significance of the independent variables and their interactions was tested by the analysis of variance (ANOVA). Adequacy of the model was tested by the correlation between experimental and predicted values of the response and enumeration of prediction errors. The ANOVA results indicated that the proposed model can be used to navigate the design space. Optimization of the variables for maximum adsorption of dye by NZVI particles was performed using the quadratic model. The predicted maximum adsorption efficiency (96.97%) under the optimum conditions of the process variables (NZVI dose 0.5 g/L, pH 4, contact time 60 s, and initial dye concentration 30 mg/L) was very close to the experimental value (96.16%) determined in a batch experiment. In the optimization, the R2 and R2adj correlation coefficients for the model were evaluated as 0.95 and 0.90, respectively.
High-Throughput RNA Interference Screening: Tricks of the Trade
Nebane, N. Miranda; Coric, Tatjana; Whig, Kanupriya; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Sheppard, Russell; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E. Lucile
2016-01-01
The process of validating an assay for high-throughput screening (HTS) involves identifying sources of variability and developing procedures that minimize the variability at each step in the protocol. The goal is to produce a robust and reproducible assay with good metrics. In all good cell-based assays, this means coefficient of variation (CV) values of less than 10% and a signal window of fivefold or greater. HTS assays are usually evaluated using Z′ factor, which incorporates both standard deviation and signal window. A Z′ factor value of 0.5 or higher is acceptable for HTS. We used a standard HTS validation procedure in developing small interfering RNA (siRNA) screening technology at the HTS center at Southern Research. Initially, our assay performance was similar to published screens, with CV values greater than 10% and Z′ factor values of 0.51 ± 0.16 (average ± standard deviation). After optimizing the siRNA assay, we got CV values averaging 7.2% and a robust Z′ factor value of 0.78 ± 0.06 (average ± standard deviation). We present an overview of the problems encountered in developing this whole-genome siRNA screening program at Southern Research and how equipment optimization led to improved data quality. PMID:23616418
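For reference, the assay-quality metrics mentioned above can be computed as in the sketch below; the control-well values are simulated, not screening data, and the Z' formula is the standard one based on the means and standard deviations of the positive and negative controls.

```python
import numpy as np

# Minimal sketch of coefficient of variation (CV), signal window, and Z' factor
# computed from simulated positive- and negative-control wells.
rng = np.random.default_rng(5)
pos = rng.normal(1000, 40, 48)     # positive-control wells (simulated)
neg = rng.normal(150, 15, 48)      # negative-control wells (simulated)

cv_pos = pos.std(ddof=1) / pos.mean() * 100
signal_window = pos.mean() / neg.mean()
z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

print(f"CV = {cv_pos:.1f}%, signal window = {signal_window:.1f}x, Z' = {z_prime:.2f}")
```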
NASA Astrophysics Data System (ADS)
Zhang, X.; Roman, M.; Kimmel, D.; McGilliard, C.; Boicourt, W.
2006-05-01
High-resolution, axial sampling surveys were conducted in Chesapeake Bay during April, July, and October from 1996 to 2000 using a towed sampling device equipped with sensors for depth, temperature, conductivity, oxygen, fluorescence, and an optical plankton counter (OPC). The results suggest that the axial distribution and variability of hydrographic and biological parameters in Chesapeake Bay were primarily influenced by the source and magnitude of freshwater input. Bay-wide spatial trends in the water column-averaged values of salinity were linear functions of distance from the main source of freshwater, the Susquehanna River, at the head of the bay. However, spatial trends in the water column-averaged values of temperature, dissolved oxygen, chlorophyll-a and zooplankton biomass were nonlinear along the axis of the bay. Autocorrelation analysis and the residuals of linear and quadratic regressions between each variable and latitude were used to quantify the patch sizes for each axial transect. The patch sizes of each variable depended on whether the data were detrended, and the detrending techniques applied. However, the patch size of each variable was generally larger using the original data compared to the detrended data. The patch sizes of salinity were larger than those for dissolved oxygen, chlorophyll-a and zooplankton biomass, suggesting that more localized processes influence the production and consumption of plankton. This high-resolution quantification of the zooplankton spatial variability and patch size can be used for more realistic assessments of the zooplankton forage base for larval fish species.
Delay correlation analysis and representation for vital complaint VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
Proxemics in Couple Interactions: Rekindling an Old Optic.
Sluzki, Carlos E
2016-03-01
Utilizing as a lens the interpersonal implications of physical distances between people in social contexts (a set of variables prominent in professional discourse during the 1960s and 1970s that subsequently faded away), this article explores the interactive processes displayed by the protagonist couple in Bela Bartok's opera "Bluebeard's Castle," an exercise aimed at underlining the value of maintaining proxemics as an explicit level of observation in clinical practice and interpersonal research. © 2015 Family Process Institute.
Reward speeds up and increases consistency of visual selective attention: a lifespan comparison.
Störmer, Viola; Eppinger, Ben; Li, Shu-Chen
2014-06-01
Children and older adults often show less favorable reward-based learning and decision making, relative to younger adults. It is unknown, however, whether reward-based processes that influence relatively early perceptual and attentional processes show similar lifespan differences. In this study, we investigated whether stimulus-reward associations affect selective visual attention differently across the human lifespan. Children, adolescents, younger adults, and older adults performed a visual search task in which the target colors were associated with either high or low monetary rewards. We discovered that high reward value speeded up response times across all four age groups, indicating that reward modulates attentional selection across the lifespan. This speed-up in response time was largest in younger adults, relative to the other three age groups. Furthermore, only younger adults benefited from high reward value in increasing response consistency (i.e., reduction of trial-by-trial reaction time variability). Our findings suggest that reward-based modulations of relatively early and implicit perceptual and attentional processes are operative across the lifespan, and the effects appear to be greater in adulthood. The age-specific effect of reward on reducing intraindividual response variability in younger adults likely reflects mechanisms underlying the development and aging of reward processing, such as lifespan age differences in the efficacy of dopaminergic modulation. Overall, the present results indicate that reward shapes visual perception across different age groups by biasing attention to motivationally salient events.
A comparative analysis of errors in long-term econometric forecasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tepel, R.
1986-04-01
The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
Multi-objective optimization for model predictive control.
Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry
2007-06-01
This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics have to outweigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed
2014-03-19
In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X₁), the flow rate (X₂), the catalyst weight (X₃), the catalyst loading (X₄) and the glycerol-water molar ratio (X₅), on the H₂ yield (Y₁) and the conversion of glycerol to gaseous products (Y₂) were explored. Using multiple regression analysis, the experimental results of the H₂ yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits being examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20% and a glycerol-water molar ratio of approximately 12, where the H₂ yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables that were studied.
Kim, Christopher S.; Hayman, James A.; Billi, John E.; Lash, Kathy; Lawrence, Theodore S.
2007-01-01
Purpose Patients with bone and brain metastases are among the most symptomatic nonemergency patients treated by radiation oncologists. Treatment should begin as soon as possible after the request is generated. We tested the hypothesis that the operational improvement method based on lean thinking could help streamline the treatment of our patients referred for bone and brain metastases. Methods University of Michigan Health System has adopted lean thinking as a consistent approach to quality and process improvement. We applied the principles and tools of lean thinking, especially value as defined by the customer, value stream mapping processes, and one piece flow, to improve the process of delivering care to patients referred for bone or brain metastases. Results and Conclusion The initial evaluation of the process revealed that it was rather chaotic and highly variable. Implementation of the lean thinking principles permitted us to improve the process by cutting the number of individual steps to begin treatment from 27 to 16 and minimize variability by applying standardization. After an initial learning period, the percentage of new patients with brain or bone metastases receiving consultation, simulation, and treatment within the same day rose from 43% to nearly 95%. By implementing the ideas of lean thinking, we improved the delivery of clinical care for our patients with bone or brain metastases. We believe these principles can be applied to much of the care administered throughout our and other health care delivery areas. PMID:20859409
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We tested this novel recursive combinational method, with median-method replacement, in estimating the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over the single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN value of 22 ms and an SD2 value of 36 ms to be clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index, as the Lyapunov exponents calculated after the proposed pre-processing change in a way that follows the notion of less complex behaviour in diseased states.
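A minimal sketch of computing SDNN and the Poincaré descriptors from a (pre-processed) series of normal-to-normal intervals is shown below; the interval series is simulated, and the combination filtering described above is not reproduced.

```python
import numpy as np

# Minimal sketch of SDNN and Poincaré descriptors (SD1, SD2, SD1/SD2) from NN intervals.
rng = np.random.default_rng(2)
nn = 800 + 50 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 20, 500)   # NN intervals, ms

sdnn = nn.std(ddof=1)

# The Poincaré plot uses successive interval pairs (NN_i, NN_{i+1}).
diff = np.diff(nn)
sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)                              # short-term variability
sd2 = np.sqrt(2 * np.var(nn, ddof=1) - np.var(diff, ddof=1) / 2.0)     # long-term variability

print(f"SDNN = {sdnn:.1f} ms, SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1/sd2:.2f}")
```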
Kim, Christopher S; Hayman, James A; Billi, John E; Lash, Kathy; Lawrence, Theodore S
2007-07-01
Patients with bone and brain metastases are among the most symptomatic nonemergency patients treated by radiation oncologists. Treatment should begin as soon as possible after the request is generated. We tested the hypothesis that the operational improvement method based on lean thinking could help streamline the treatment of our patients referred for bone and brain metastases. University of Michigan Health System has adopted lean thinking as a consistent approach to quality and process improvement. We applied the principles and tools of lean thinking, especially value as defined by the customer, value stream mapping processes, and one piece flow, to improve the process of delivering care to patients referred for bone or brain metastases. The initial evaluation of the process revealed that it was rather chaotic and highly variable. Implementation of the lean thinking principles permitted us to improve the process by cutting the number of individual steps to begin treatment from 27 to 16 and minimize variability by applying standardization. After an initial learning period, the percentage of new patients with brain or bone metastases receiving consultation, simulation, and treatment within the same day rose from 43% to nearly 95%. By implementing the ideas of lean thinking, we improved the delivery of clinical care for our patients with bone or brain metastases. We believe these principles can be applied to much of the care administered throughout our and other health care delivery areas.
Suspect/foil identification in actual crimes and in the laboratory: a reality monitoring analysis.
Behrman, Bruce W; Richards, Regina E
2005-06-01
Four reality monitoring variables were used to discriminate suspect from foil identifications in 183 actual criminal cases. Four hundred sixty-one identification attempts based on five- and six-person lineups were analyzed. These identification attempts resulted in 238 suspect identifications and 68 foil identifications. Confidence, automatic processing, eliminative processing and feature use comprised the set of reality monitoring variables. Thirty-five verbal confidence phrases taken from police reports were assigned numerical values on a 10-point confidence scale. Automatic processing identifications were those that occurred "immediately" or "without hesitation." Eliminative processing identifications occurred when witnesses compared or eliminated persons in the lineups. Confidence, automatic processing and eliminative processing were significant predictors, but feature use was not. Confidence was the most effective discriminator. In cases that involved substantial evidence extrinsic to the identification, 43% of the suspect identifications were made with high confidence, whereas only 10% of the foil identifications were made with high confidence. The results of a laboratory study using the same predictors generally paralleled the archival results. Forensic implications are discussed.
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
Risk management for moisture related effects in dry manufacturing processes: a statistical approach.
Quiroz, Jorge; Strong, John; Zhang, Lanju
2016-03-01
A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, the variability in moisture of the incoming raw materials can impact both processability and drug product quality attributes. A statistical approach is developed using individual raw material historical lots as a basis for the calculation of tolerance intervals for drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is based on a model-independent approach that uses available data to estimate parameters of interest that describe the population of blend moisture content values and that does not require knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require the use of tabulated tolerance factors, which facilitates implementation in any spreadsheet program such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
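For readers who want a concrete starting point, the sketch below computes a generic two-sided normal tolerance interval using Howe's approximation. This is not the authors' model-independent method (which avoids tolerance factors altogether); the coverage, confidence and moisture values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def tolerance_interval(x, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's approximation."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    z = stats.norm.ppf((1.0 + coverage) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, n - 1)
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)   # tolerance factor
    return mean - k * sd, mean + k * sd

# Hypothetical moisture content (% w/w) of historical raw-material lots
moisture = [2.1, 2.4, 1.9, 2.2, 2.6, 2.0, 2.3, 2.5, 2.2, 2.1]
print(tolerance_interval(moisture))
```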
Mendez, Carlos E; Ata, Ashar; Rourke, Joanne M; Stain, Steven C; Umpierrez, Guillermo
2015-08-01
Hyperglycemia, hypoglycemia, and glycemic variability have been associated with increased morbidity, mortality, and overall costs of care in hospitalized patients. At the Stratton VA Medical Center in Albany, New York, a process aimed at improving inpatient glycemic control by remotely assisting primary care teams in the management of hyperglycemia and diabetes was designed. An electronic query comprising hospitalized patients with glucose values <70 mg/dL or >350 mg/dL is generated daily. Electronic medical records (EMRs) are individually reviewed by diabetes specialist providers, and management recommendations are sent to primary care teams when applicable. Glucose data were retrospectively examined before and after the establishment of the daily inpatient glycemic survey (DINGS) process, and rates of hyperglycemia and hypoglycemia were compared. Patient-day mean glucose slightly but significantly decreased from 177.6 ± 64.4 to 173.2 ± 59.4 mg/dL (P<.001). The percentage of patient-days with any value >350 mg/dL also decreased from 9.69 to 7.36% (P<.001), while the percentage of patient-days with mean glucose values in the range of 90 to 180 mg/dL increased from 58.1 to 61.4% (P<.001). Glycemic variability, assessed by the SD of glucose, significantly decreased from 53.9 to 49.8 mg/dL (P<.001). Moreover, rates of hypoglycemia (<70 mg/dL) decreased significantly, by 41% (P<.001). Quality metrics of inpatient glycemic control improved significantly after the establishment of the DINGS process within our facility. Prospective controlled studies are needed to confirm a causal association.
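The quality metrics reported in this abstract (patient-day mean glucose, percentages of patient-days with excursions, and the SD of glucose as a variability measure) can be computed directly from point-of-care values grouped by patient-day. The sketch below shows one such computation; the grouping convention and the data are hypothetical, not the facility's actual query logic.

```python
import numpy as np

def patient_day_metrics(glucose_by_day):
    """Summarize inpatient glucose values (mg/dL) grouped into patient-days."""
    day_means = np.array([np.mean(day) for day in glucose_by_day])
    pct_over_350 = 100 * np.mean([max(day) > 350 for day in glucose_by_day])
    pct_under_70 = 100 * np.mean([min(day) < 70 for day in glucose_by_day])
    pct_in_target = 100 * np.mean((day_means >= 90) & (day_means <= 180))
    all_values = np.concatenate([np.asarray(d, dtype=float) for d in glucose_by_day])
    return {"patient-day mean": day_means.mean(),
            "% days with any value >350": pct_over_350,
            "% days with any value <70": pct_under_70,
            "% days with mean 90-180": pct_in_target,
            "SD of glucose": np.std(all_values, ddof=1)}

# Three hypothetical patient-days of point-of-care readings
print(patient_day_metrics([[145, 210, 180], [65, 140, 155], [360, 250, 190]]))
```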
Gutiérrez, M C; Siles, J A; Diz, J; Chica, A F; Martín, M A
2017-01-01
The composting of six different compostable substrates, and of one of these with the addition of a bacterial inoculum, was evaluated in a dynamic respirometer. Despite the heterogeneity of the compostable substrates, cumulative oxygen demand (OD, mg O2/kg VS) was adequately fitted in all cases by an exponential regression growing until reaching a maximum. According to the kinetic constant (K) values obtained, the wastes that degraded more slowly were those containing lignocellulosic material (green wastes) or less biodegradable wastes (sewage sludge). The odor emissions generated during the composting processes were also fitted in all cases to a Gaussian regression, with R2 values within the range 0.8-0.9. The model was validated by plotting the real odor concentration near the maximum against the predicted odor concentration of each substrate (R2 = 0.9314; 95% prediction interval). The variables of maximum odor concentration (ouE/m3) and the time (h) at which the maximum was reached were also evaluated statistically using ANOVA and a post-hoc Tukey test with the substrate as a factor, which allowed homogeneous groups to be obtained according to one or both of these variables. The maximum oxygen consumption rate, or organic matter degradation, during composting was directly related to the maximum odor emission generation rate (R2 = 0.9024, 95% confidence interval) when only the organic wastes with a low content of lignocellulosic materials and no inoculated waste (HRIO) were considered. Finally, the composting of OFMSW would produce a higher odor impact than the other substrates if the process were carried out without odor control or in open systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
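A Gaussian regression of odor concentration against composting time, of the kind described above, can be fitted with standard nonlinear least squares. The sketch below uses scipy's curve_fit on hypothetical data; it only illustrates the curve form, not the authors' actual measurements or software.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma):
    """Gaussian emission curve: peak height a, peak time t0, spread sigma."""
    return a * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

# Hypothetical odor concentrations (ouE/m3) over composting time (h)
t = np.array([0, 12, 24, 36, 48, 60, 72, 96], dtype=float)
odor = np.array([50, 300, 900, 1500, 1300, 700, 250, 60], dtype=float)

params, _ = curve_fit(gaussian, t, odor, p0=[odor.max(), t[np.argmax(odor)], 20.0])
pred = gaussian(t, *params)
r2 = 1.0 - np.sum((odor - pred) ** 2) / np.sum((odor - odor.mean()) ** 2)
print(f"max odor {params[0]:.0f} ouE/m3 at {params[1]:.0f} h, R2 = {r2:.3f}")
```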
Mazerolle, Erin L; Wojtowicz, Magdalena A; Omisade, Antonina; Fisk, John D
2013-01-01
Slowed information processing speed is commonly reported in persons with multiple sclerosis (MS), and is typically investigated using clinical neuropsychological tests, which provide sensitive indices of mean-level information processing speed. However, recent studies have demonstrated that within-person variability or intra-individual variability (IIV) in information processing speed may be a more sensitive indicator of neurologic status than mean-level performance on clinical tests. We evaluated the neural basis of increased IIV in mildly affected relapsing-remitting MS patients by characterizing the relation between IIV (controlling for mean-level performance) and white matter integrity using diffusion tensor imaging (DTI). Twenty women with relapsing-remitting MS and 20 matched control participants completed the Computerized Test of Information Processing (CTIP), from which both mean response time and IIV were calculated. Other clinical measures of information processing speed were also collected. Relations between IIV on the CTIP and DTI metrics of white matter microstructure were evaluated using tract-based spatial statistics. We observed slower and more variable responses on the CTIP in MS patients relative to controls. Significant relations between white matter microstructure and IIV were observed for MS patients. Increased IIV was associated with reduced integrity in more white matter tracts than was slowed information processing speed as measured by either mean CTIP response time or other neuropsychological test scores. Thus, despite the common use of mean-level performance as an index of cognitive dysfunction in MS, IIV may be more sensitive to the overall burden of white matter disease at the microstructural level. Furthermore, our study highlights the potential value of considering within-person fluctuations, in addition to mean-level performance, for uncovering brain-behavior relationships in neurologic disorders with widespread white matter pathology.
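Intra-individual variability is often operationalized as the within-person standard deviation of response times, sometimes scaled by the mean (a coefficient of variation) to control for mean-level performance. The sketch below shows that simple operationalization with hypothetical response times; it may differ in detail from the metric used in this study.

```python
import numpy as np

def iiv_metrics(reaction_times_ms):
    """Mean RT, intra-individual SD, and coefficient of variation for one subject."""
    rt = np.asarray(reaction_times_ms, dtype=float)
    mean_rt = rt.mean()
    isd = rt.std(ddof=1)
    return mean_rt, isd, isd / mean_rt   # CoV scales variability by mean level

# Hypothetical single-subject response times (ms) on a CTIP-like task
print(iiv_metrics([512, 487, 650, 530, 478, 705, 515, 560]))
```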
The Approach to Study the Kama Reservoir Basin Deformation in the Zone of a Variable Backwater
NASA Astrophysics Data System (ADS)
Dvinskikh, S. A.; Kitaev, A. B.; Shaydulina, A. A.
2018-01-01
A reservoir floor starts to change once the reservoir has been filled to its normal headwater level (NHL), under the impact of hydrosphere-lithosphere interactions as well as of the chemical and biological processes that occur in its water masses. In the process, complicated and often contradictory “relations” between the features of geo- and hydrodynamic processes are created. The consequences of these relations are alterations in the morphometric indices of the reservoir: water surface area, depth and volume. We observe two oppositely directed processes, accumulation and erosion, which are most complex in the upper area of the reservoir - the zone of variable backwater. The basin deformation observed there is lop-sided and relatively quiet, but over time these deformations create difficulties for water users. To support navigation and to reduce the harmful effects of water on other water users, it is necessary to continuously study and forecast the basin transformation processes that occur in this zone.
NASA Technical Reports Server (NTRS)
Basili, Victor R.
1992-01-01
The concepts of quality improvement have permeated many businesses. It is clear that the nineties will be the quality era for software, and there is a growing need to develop or adapt quality improvement approaches to the software business. Thus we must understand software as an artifact and software as a business. Since the business we are dealing with is software, we must understand the nature of software and software development. The software discipline is evolutionary and experimental; it is a laboratory science. Software is development, not production. The technologies of the discipline are human-based. There is a lack of models that allow us to reason about the process and the product. All software is not the same; process is a variable, goals are variable, etc. Packaged, reusable experiences require additional resources in the form of organization, processes, people, etc. There have been a variety of organizational frameworks proposed to improve quality for various businesses. The ones discussed in this presentation include: Plan-Do-Check-Act, a quality improvement process based upon a feedback cycle for optimizing a single process model/production line; the Experience Factory/Quality Improvement Paradigm, continuous improvements through the experimentation, packaging, and reuse of experiences based upon a business's needs; Total Quality Management, a management approach to long-term success through customer satisfaction based on the participation of all members of an organization; the SEI capability maturity model, a staged process improvement based upon assessment with regard to a set of key process areas until level 5 is reached, which represents continuous process improvement; and Lean (software) Development, a principle supporting the concentration of production on 'value-added' activities and the elimination or reduction of 'non-value-added' activities.
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offer a continuous readout. Yet computer architecture is cast in textbooks in terms of binary valued, Boolean variables. To enable reactive chemical systems to compute we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with a feedback we endow the logic gates with a built in memory because their output then depends on the input and also on the present state of the system. Technically such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
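One common continuous-valued reading of such gates treats normalized concentrations like probabilities: AND then becomes a product (as in a bimolecular rate term k[A][B]) and XOR the chance that exactly one of two concurrent events occurs. The sketch below illustrates that reading; it is an assumption-laden simplification, not necessarily the exact formulation used in the paper.

```python
def and_gate(a, b):
    """Continuous AND: product form, analogous to a bimolecular rate term k*[A]*[B]."""
    return a * b

def xor_gate(a, b):
    """Continuous XOR: probability that exactly one of two concurrent events occurs."""
    return a + b - 2.0 * a * b

# Inputs normalized to [0, 1], e.g. concentrations scaled by their maxima
a, b = 0.8, 0.3
print(and_gate(a, b), xor_gate(a, b))
```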
Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael
2007-01-01
Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141
High δ56Fe values in Samoan basalts
NASA Astrophysics Data System (ADS)
Konter, J. G.; Pietruszka, A. J.; Hanan, B. B.; Finlayson, V.
2014-12-01
Fe isotope fractionation spans ~0-0.4 permil in igneous systems, which cannot all be attributed to variable source compositions since peridotites barely overlap these compositions. Other processes may fractionate Fe isotopes such as variations in the degree of partial melting, magmatic differentiation, fluid addition related to the final stages of melt evolution, and kinetic fractionation related to diffusion. An important observation in igneous systems is the trend of increasing Fe isotope values against an index of magmatic fractionation (e.g. SiO2; [1]). The data strongly curve from δ56Fe >0.3 permil for SiO2 >70 wt% down to values around 0.09 permil from ~65 wt% down to 40 wt% SiO2 of basalts. However, ocean island basalts (OIBs) have a slightly larger δ56Fe variability than mid ocean ridge basalts (MORBs; [e.g. 2]). We present Fe isotope data on samples from the Samoan Islands (OIB) that have unusually high δ56Fe values for their SiO2 content. We rule out alteration by using fresh samples, and further test for the effects of magmatic processes on the δ56Fe values. In order to model the largest possible fractionation, unusually small degrees of melting with extreme fractionation factors are modeled with fractional crystallization of olivine alone, but such processing fails to fractionate the Fe isotopes to the observed values. Moreover, Samoan lavas likely also fractionated clinopyroxene, and its lower fractionation factor would limit the final δ56Fe value of the melt. We therefore suggest the mantle source of Samoan lavas must have had unusually high δ56Fe. However, there is no clear correlation with the highly radiogenic isotope signatures that reflect the unique source compositions of Samoa. Instead, increasing melt extraction correlates with lower δ56Fe values in peridotites assumed to be driven by the preference for the melt phase by heavy Fe3+, while high values may be related to metasomatism [3]. The latter would be in line with metasomatized xenoliths from Samoa [4]. [1] Heimann et al., 2008, doi:10.1016/j.gca.2008.06.009 [2] Teng et al., 2013, doi:10.1016/j.gca.2012.12.027 [3] Williams et al., 2004, doi: 10.1126/science.1095679 [4] Hauri et al., 1993, doi: 10.1038/365221a0
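A back-of-the-envelope check of how far fractional crystallization alone can shift melt δ56Fe can be made with a Rayleigh model, δ_melt ≈ δ_0 + 1000(α − 1)·ln(F), where F is the melt fraction remaining and α the crystal-melt fractionation factor. The sketch below uses an illustrative olivine-melt fractionation of −0.05‰; the numbers are assumptions, not the values modeled in the abstract.

```python
import numpy as np

def rayleigh_melt_delta(delta0, alpha_crystal_melt, f_remaining):
    """Residual-melt delta value (permil) after Rayleigh fractional crystallization.
    alpha_crystal_melt < 1 means the crystallizing phase prefers the light isotope,
    so the remaining melt becomes isotopically heavier as f_remaining decreases."""
    return delta0 + 1000.0 * (alpha_crystal_melt - 1.0) * np.log(f_remaining)

# Illustrative values: melt at +0.10 permil, olivine-melt fractionation of
# -0.05 permil (alpha = 0.99995), 50% of the melt crystallized
print(rayleigh_melt_delta(0.10, 0.99995, 0.5))   # ~ +0.13 permil
```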
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
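The core idea, applying a nonlinear runoff function to the within-day distribution of rainfall intensity rather than to the daily mean, can be illustrated with a simple infiltration-excess example. The gamma-distributed intensities and constant infiltration capacity below are assumptions chosen for illustration, not the distributions or process descriptions used in the paper.

```python
import numpy as np

def runoff_from_mean(mean_intensity, infil_capacity):
    """Daily-lumped estimate: apply the threshold to the mean intensity."""
    return max(mean_intensity - infil_capacity, 0.0)

def runoff_from_cdf(intensity_samples, infil_capacity):
    """Distribution-based estimate: apply the threshold to the within-day
    intensity distribution, then average (captures sub-daily peaks)."""
    excess = np.maximum(intensity_samples - infil_capacity, 0.0)
    return excess.mean()

rng = np.random.default_rng(0)
# Hypothetical sub-daily intensities (mm/h) with a daily mean of 2 mm/h
intensities = rng.gamma(shape=0.5, scale=4.0, size=10_000)
fc = 3.0                                               # infiltration capacity (mm/h)

print(runoff_from_mean(intensities.mean(), fc))        # ~0: daily mean never exceeds fc
print(runoff_from_cdf(intensities, fc))                # >0: peaks exceed fc part of the day
```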
NASA Astrophysics Data System (ADS)
Danczyk, Jennifer; Wollocko, Arthur; Farry, Michael; Voshell, Martin
2016-05-01
Data collection processes supporting Intelligence, Surveillance, and Reconnaissance (ISR) missions have recently undergone a technological transition accomplished by investment in sensor platforms. Various agencies have made these investments to increase the resolution, duration, and quality of data collection, to provide more relevant and recent data to warfighters. However, while sensor improvements have increased the volume of high-resolution data, they often fail to improve situational awareness and actionable intelligence for the warfighter because the analysis enterprise lacks efficient Processing, Exploitation, and Dissemination and filtering methods for mission-relevant information needs. The volume of collected ISR data often overwhelms manual and automated processes in modern analysis enterprises, resulting in underexploited data and insufficient or absent answers to information requests. The outcome is a significant breakdown in the analytical workflow. To cope with this data overload, many intelligence organizations have sought to re-organize their general staffing requirements and workflows to enhance team communication and coordination, with hopes of exploiting as much high-value data as possible and understanding the value of actionable intelligence well before its relevance has passed. Through this effort we have taken a scholarly approach to this problem by studying the evolution of Processing, Exploitation, and Dissemination, with a specific focus on the Army's most recent evolutions, using the Functional Resonance Analysis Method. This method investigates socio-technical processes by analyzing their intended functions and aspects to determine performance variabilities. Gaps are identified, and recommendations about force structure and future R&D priorities to increase the throughput of the intelligence enterprise are discussed.
Shared neural coding for social hierarchy and reward value in primate amygdala.
Munuera, Jérôme; Rigotti, Mattia; Salzman, C Daniel
2018-03-01
The social brain hypothesis posits that dedicated neural systems process social information. In support of this, neurophysiological data have shown that some brain regions are specialized for representing faces. It remains unknown, however, whether distinct anatomical substrates also represent more complex social variables, such as the hierarchical rank of individuals within a social group. Here we show that the primate amygdala encodes the hierarchical rank of individuals in the same neuronal ensembles that encode the rewards associated with nonsocial stimuli. By contrast, orbitofrontal and anterior cingulate cortices lack strong representations of hierarchical rank while still representing reward values. These results challenge the conventional view that dedicated neural systems process social information. Instead, information about hierarchical rank-which contributes to the assessment of the social value of individuals within a group-is linked in the amygdala to representations of rewards associated with nonsocial stimuli.
The role of utility value in achievement behavior: the importance of culture.
Shechter, Olga G; Durik, Amanda M; Miyamoto, Yuri; Harackiewicz, Judith M
2011-03-01
Two studies tested how participants' responses to utility value interventions and subsequent interest in a math technique vary by culture (Westerners vs. East Asians) and levels of initial math interest. Participants in Study 1 were provided with information about the utility value of the technique or not. The manipulation was particularly effective for East Asian learners with initially lower math interest, who showed more interest in the technique relative to low-interest Westerners. Study 2 compared the effects of two types of utility value (proximal or distal) and examined the effects on interest, effort, performance, and process variables. Whereas East Asian participants reaped the most motivational benefits from a distal value manipulation, Westerners benefited the most from a proximal value manipulation. These findings have implications for how to promote motivation for learners with different cultural backgrounds and interests.
Validity of a portable glucose, total cholesterol, and triglycerides multi-analyzer in adults.
Coqueiro, Raildo da Silva; Santos, Mateus Carmo; Neto, João de Souza Leal; Queiroz, Bruno Morbeck de; Brügger, Nelson Augusto Jardim; Barbosa, Aline Rodrigues
2014-07-01
This study investigated the accuracy and precision of the Accutrend Plus system to determine blood glucose, total cholesterol, and plasma triglycerides in adults and evaluated its efficiency in measuring these blood variables. The sample consisted of 53 subjects (≥ 18 years). For blood variable laboratory determination, venous blood samples were collected and processed in a Labmax 240 analyzer. To measure blood variables with the Accutrend Plus system, samples of capillary blood were collected. In the analysis, the following tests were included: Wilcoxon and Student's t-tests for paired samples, Lin's concordance coefficient, Bland-Altman method, receiver operating characteristic curve, McNemar test, and k statistics. The results show that the Accutrend Plus system provided significantly higher values (p ≤ .05) of glucose and triglycerides but not of total cholesterol (p > .05) as compared to the values determined in the laboratory. However, the system showed good reproducibility (Lin's coefficient: glucose = .958, triglycerides = .992, total cholesterol = .940) and high concordance with the laboratory method (Lin's coefficient: glucose = .952, triglycerides = .990, total cholesterol = .944) and high sensitivity (glucose = 80.0%, triglycerides = 90.5%, total cholesterol = 84.4%) and specificity (glucose = 100.0%, triglycerides = 96.9%, total cholesterol = 95.2%) in the discrimination of high values of the three blood variables analyzed. It could be concluded that despite the tendency to overestimate glucose and triglyceride levels, a portable multi-analyzer is a valid alternative for the monitoring of metabolic disorders and cardiovascular risk factors. © The Author(s) 2013.
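Lin's concordance correlation coefficient, used above to quantify agreement between the portable analyzer and the laboratory method, has a simple closed form. The sketch below implements it with hypothetical paired glucose measurements.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population variances, as in Lin (1989)
    cov = np.mean((x - mx) * (y - my))
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired glucose values: laboratory vs. portable analyzer (mg/dL)
lab      = [92, 110, 135, 88, 150, 101, 170]
portable = [95, 116, 140, 90, 158, 104, 176]
print(lins_ccc(lab, portable))
```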
NASA Astrophysics Data System (ADS)
Valeriano, Márcio de Morisson; Rossetti, Dilce de Fátima
2017-03-01
This paper reports procedures to prepare locally derived geomorphometric data for geological mapping at regional scale in central Amazônia. The size of the study area, approximately 1.5 million km2, and the prevailing flat topography of the targeted environment were the constraints motivating the aim of spatial and numerical synthesis of the detailed geomorphometric information derived from the SRTM DEM. The developed approach consisted of assigning single (average) values to terrain patches, to represent the regional distribution of pixel-based geomorphometric information (slope, profile curvature and relative relief). In analogy to the nature of sedimentary packs, patches were established as contiguous elevation strata, constructed through a procedure combining segmentation, filtering and range compression. For slope only, pre-processing of the locally derived data with median filtering effectively avoided the typical flattening of the regionalized results due to input distribution characteristics. Profile curvature was transformed into absolute values, and thus a meaning different from that of the original (pixel) variable was considered in the interpretation; this also avoided the compensation of original (positive and negative) values tending toward zero when averaged over a regionally flat extension. Examinations near major river valleys showed patched elevation to depict alluvial terraces. In the interfluves and floodplains, contrasting patterns in the averaged variables among patches of similar elevations allowed the recognition of important relief features. In addition to the reduction of the distribution ranges, the correlation between regionalized geomorphometric variables was higher than that observed in the originally local data, due to the thematic synthesis following regionalization. Depth of dissection, claimed to be related to the relative age of sedimentary units, was the main factor explaining the overall variation of the geomorphometric results. The developed regionalization process improved the potential of local geomorphometric data for updating and revising geological maps and for guiding future surveys in the sedimentary domain of Amazônia.
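The regionalization step described above amounts to a zonal average of a pixel-based variable over elevation-stratum patches, preceded for slope by a median prefilter. The sketch below shows that operation on a synthetic grid; the 3x3 window, the 50 m elevation bands and the data are illustrative assumptions, not the segmentation procedure used in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
slope = rng.gamma(shape=2.0, scale=1.5, size=(100, 100))   # synthetic slope grid (degrees)
elevation = rng.uniform(0, 300, size=(100, 100))           # synthetic elevation grid (m)

# 1. Median prefilter of the local slope data (3x3 window, an assumed size)
slope_med = median_filter(slope, size=3)

# 2. Patches as contiguous elevation strata (here: simple 50 m bands)
patch_id = (elevation // 50).astype(int)

# 3. Assign a single (average) value of the filtered variable to each patch
for pid in np.unique(patch_id):
    mask = patch_id == pid
    print(f"stratum {pid}: mean slope = {slope_med[mask].mean():.2f} deg")
```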
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya
Aims: The present case study is on maximizing aqua feed properties using response surface methodology and a genetic algorithm. Study Design: The effects of extrusion process variables such as screw speed, L/D ratio, barrel temperature, and feed moisture content were analyzed to maximize aqua feed properties such as water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable-length single-screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties, namely water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design. The experimental data were further analyzed using response surface methodology (RSM) and a genetic algorithm (GA) to maximize the feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated that WS, ER, and TD were maximized at an L/D ratio of 12-13, a screw speed of 60-80 rpm, a feed moisture content of 30-40%, and a barrel temperature of ≤80 degrees C for ER and TD and >90 degrees C for WS. Based on GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, an L/D ratio of 13.67, a barrel temperature of 96.26 degrees C, and a feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%. Conclusion: The present data analysis indicated that WS is mainly governed by barrel temperature and feed moisture content, which might have resulted in the formation of starch-protein complexes due to denaturation of protein and gelatinization of starch. Screw speed coupled with temperature and feed moisture content controlled the ER and TD values. Higher screw speeds might have reduced the viscosity of the feed dough, resulting in higher TD and lower ER values. Based on the RSM and GA analysis, screw speed, barrel temperature and feed moisture content were the interacting process variables influencing maximum WS, followed by ER and TD.
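Response surface analysis of this kind typically fits a second-order polynomial in the coded process variables by least squares. The sketch below fits such a model for two of the four variables with hypothetical coded data; it is only a schematic of the RSM step (the genetic algorithm search is not shown), and the coefficients have no relation to the study's results.

```python
import numpy as np

# Hypothetical coded settings (screw speed, barrel temperature) and water stability (%)
x1 = np.array([-1, -1,  1,  1, 0,    0,   0, -1.4, 1.4])
x2 = np.array([-1,  1, -1,  1, 0, -1.4, 1.4,    0,   0])
ws = np.array([88, 93, 85, 95, 96,  90,  97,   89,  92], dtype=float)

# Second-order response surface: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, ws, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((ws - pred) ** 2) / np.sum((ws - ws.mean()) ** 2)
print("coefficients:", np.round(coef, 2), " R2 =", round(r2, 3))
```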
Multi-Stage Mental Process for Economic Choice in Capuchins
ERIC Educational Resources Information Center
Padoa-Schioppa, Camillo; Jandolo, Lucia; Visalberghi, Elisabetta
2006-01-01
We studied economic choice behavior in capuchin monkeys by offering them a choice between two different foods available in variable amounts. When monkeys selected between familiar foods, their choice patterns were well described in terms of the relative value of the two foods. A leading view in economics and biology is that such behavior results from…
1975-10-01
With DC anodizing, all adhesion values were lower but almost equal. Table X: summary of the effect of current density, time and sealing. "Continuum Interpretation for Fracture and Adhesion", J. Appl. Polymer Science, 29 (1969). Williams, M. L., "Stress Singularities, Adhesion, and
Tessaro, Irene; Modina, Silvia C; Crotti, Gabriella; Franciosi, Federica; Colleoni, Silvia; Lodde, Valentina; Galli, Cesare; Lazzari, Giovanna; Luciano, Alberto M
2015-01-01
The dramatic increase in the number of animals required for reproductive toxicity testing calls for the validation of alternative methods to reduce the use of laboratory animals. As we previously demonstrated for the in vitro maturation test of bovine oocytes, the present study describes the transferability assessment and the inter-laboratory variability of an in vitro test able to identify chemical effects during the process of bovine oocyte fertilization. Eight chemicals with well-known toxic properties (benzo[a]pyrene, busulfan, cadmium chloride, cycloheximide, diethylstilbestrol, ketoconazole, methylacetoacetate, mifepristone/RU-486) were tested in two well-trained laboratories. The statistical analysis demonstrated no differences in the EC50 values for each chemical in the within-laboratory (inter-run) and between-laboratory variability of the proposed test. We therefore conclude that the bovine in vitro fertilization test could advance toward the validation process as an alternative in vitro method and become part of an integrated testing strategy in order to predict chemical hazards to mammalian fertility. Copyright © 2015 Elsevier Inc. All rights reserved.
Outsourcing decision factors in publicly owned electric utilities
NASA Astrophysics Data System (ADS)
Gonzales, James Edward
Purpose. The outsourcing of services in publicly owned electric utilities has generated some controversy. The purpose of this study was to explore this controversy by investigating the relationships between eight key independent variables and a dependent variable, "manager perceptions of overall value of outsourced services." The intent was to provide data so that utilities could make better decisions regarding outsourcing efforts. Theoretical framework. Decision theory was used as the framework for analyzing variables and alternatives used to support the outsourcing decision-making process. By reviewing these eight variables and the projected outputs and outcomes, a more predictive and potentially successful outsourcing effort can be realized. Methodology. A survey was distributed to a sample of 323 publicly owned electric utilities randomly selected from a population of 2,020 in the United States. Analysis of the data was made using statistical techniques including chi-square, lambda, and Spearman's rank correlation coefficient, together with hypothesis tests on the rank correlations, to test for relationships among the variables. Findings. Relationships among the eight key variables and perceptions of the overall value of outsourced services were generally weak. The notable exception was the driving force (reason) for outsourcing decisions, where the relationship was strongly positive. Conclusions and recommendations. The data in support of the research questions suggest that seven of the eight key variables may be weakly predictive of perceptions of the overall value of outsourced services. However, the primary driving force for outsourcing was strongly predictive. The data also suggest that many of the sampled utilities did not formally address these variables and alternatives, and therefore may not be achieving maximal results. Further studies utilizing customer perceptions rather than those of outsourcing service managers are recommended. In addition, it is recommended that a smaller sample population be analyzed after identifying one or more champions to ensure cooperation and legitimacy of data. Finally, this study supports the position that a manager's ability to identify and understand the relationships between these eight key variables and desired outcomes and outputs may contribute to more successful outsourcing operations.
Mathematical Modeling of Resonant Processes in Confined Geometry of Atomic and Atom-Ion Traps
NASA Astrophysics Data System (ADS)
Melezhik, Vladimir S.
2018-02-01
We discuss computational aspects of the developed mathematical models for resonant processes in the confined geometry of atomic and atom-ion traps. The main attention is paid to the formulation, in the nondirect product discrete-variable representation (npDVR), of the multichannel scattering problem with a nonseparable angular part in confining traps as a boundary-value problem. The computational efficiency of this approach is demonstrated in application to atomic and atom-ion confinement-induced resonances we predicted recently.
Modeling the filament winding process
NASA Technical Reports Server (NTRS)
Calius, E. P.; Springer, G. S.
1985-01-01
A model is presented which can be used to determine the appropriate values of the process variables for filament winding a cylinder. The model provides the cylinder temperature, viscosity, degree of cure, fiber position and fiber tension as functions of position and time during the filament winding and subsequent cure, and the residual stresses and strains within the cylinder during and after the cure. A computer code was developed to obtain quantitative results. Sample results are given which illustrate the information that can be generated with this code.
Research on ionospheric tomography based on variable pixel height
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui
2016-05-01
A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the results of the statistical analysis of the average root mean square errors, the proposed model offers an improvement by 15% compared with conventional CIT models.
NASA Astrophysics Data System (ADS)
Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal
2017-03-01
The resistance of polymeric materials to time-dependent plastic deformation is an important requirement of the fused deposition modeling (FDM) design process, its processed products, and their application for long-term loading, durability, and reliability. The creep performance of the material and part processed by FDM is the fundamental criterion for many applications with strict dimensional stability requirements, including medical implants, electrical and electronic products, and various automotive applications. Herein, the effect of FDM fabrication conditions on the flexural creep stiffness behavior of polycarbonate-acrylonitrile-butadiene-styrene processed parts was investigated. A relatively new class of experimental design called "definitive screening design" was adopted for this investigation. The effects of process variables on flexural creep stiffness behavior were monitored, and the best suited quadratic polynomial model with high coefficient of determination ( R 2) value was developed. This study highlights the value of response surface definitive screening design in optimizing properties for the products and materials, and it demonstrates its role and potential application in material processing and additive manufacturing.
Risk assessment of metal vapor arcing
NASA Technical Reports Server (NTRS)
Hill, Monika C. (Inventor); Leidecker, Henning W. (Inventor)
2009-01-01
A method for assessing metal vapor arcing risk for a component is provided. The method comprises acquiring a current variable value associated with an operation of the component; comparing the current variable value with a threshold value for the variable; evaluating compared variable data to determine the metal vapor arcing risk in the component; and generating a risk assessment status for the component.
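As summarized, the method is essentially a threshold comparison followed by a risk classification. A minimal sketch of that flow is given below; the variable, threshold, ratio cutoff and risk labels are illustrative assumptions, not values from the patent.

```python
def assess_arcing_risk(current_value: float, threshold: float) -> str:
    """Compare an acquired operating variable against its threshold and
    return a coarse risk status for the component (illustrative labels)."""
    if current_value <= threshold:
        return "LOW RISK"
    # How far the variable exceeds the threshold drives the severity label
    excess_ratio = current_value / threshold
    return "ELEVATED RISK" if excess_ratio < 1.5 else "HIGH RISK"

# Example: measured operating current vs. an assumed arcing threshold (amps)
print(assess_arcing_risk(current_value=2.4, threshold=2.0))
```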
Reference-dependent risk sensitivity as rational inference.
Denrell, Jerker C
2015-07-01
Existing explanations of reference-dependent risk sensitivity attribute it to cognitive imperfections and heuristic choice processes. This article shows that behavior consistent with an S-shaped value function could be an implication of rational inferences about the expected values of alternatives. Theoretically, I demonstrate that even a risk-neutral Bayesian decision maker, who is uncertain about the reliability of observations, should use variability in observed outcomes as a predictor of low expected value for outcomes above a reference level, and as a predictor of high expected value for outcomes below a reference level. Empirically, I show that combining past outcomes using an S-shaped value function leads to accurate predictions about future values. The theory also offers a rationale for why risk sensitivity consistent with an inverse S-shaped value function should occur in experiments on decisions from experience with binary payoff distributions. (c) 2015 APA, all rights reserved.
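The inference argument can be illustrated with normal-normal shrinkage: a noisy alternative's posterior mean is pulled toward the prior (reference) value in proportion to its observed variability, so variability lowers the estimate for above-reference outcomes and raises it for below-reference ones. The sketch below uses standard conjugate updating with assumed prior parameters, not the article's exact model.

```python
import numpy as np

def posterior_mean(outcomes, prior_mean=0.0, prior_var=1.0):
    """Normal-normal shrinkage estimate of an alternative's expected value,
    using the sample variance of its observed outcomes as the noise variance."""
    x = np.asarray(outcomes, dtype=float)
    noise_var = x.var(ddof=1)
    w = prior_var / (prior_var + noise_var / x.size)   # weight given to the data
    return w * x.mean() + (1.0 - w) * prior_mean

# Two alternatives with the same mean (+1, above the reference of 0) but
# different variability: the noisier one receives the lower estimate
print(posterior_mean([0.9, 1.1, 1.0, 1.0]))     # low variability  -> close to 1
print(posterior_mean([-2.0, 4.0, 3.0, -1.0]))   # high variability -> shrunk toward 0
```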
NASA Astrophysics Data System (ADS)
Mincks, Sarah L.; Smith, Craig R.; Jeffreys, Rachel M.; Sumida, Paulo Y. G.
2008-11-01
Summer bloom-derived phytodetritus settles rapidly to the seafloor on the West Antarctic Peninsula (WAP) continental shelf, where it appears to degrade relatively slowly, forming a sediment "food bank" for benthic detritivores. We used stable carbon and nitrogen isotopes to examine sources and sinks of particulate organic material (POM) reaching the WAP shelf benthos (550-625 m depths), and to explore trophic linkages among the most abundant benthic megafauna. We measured δ 13C and δ 15N values in major megafaunal taxa ( n=26) and potential food sources, including suspended and sinking POM, ice algae, sediment organic carbon, phytodetritus, and macrofaunal polychaetes. The range in δ 13C values (>14‰) of suspended POM was considerably broader than in sedimentary POC, where little temporal variability in stable isotope signatures was observed. While benthic megafauna also exhibited a broad range of δ 13C values, organic carbon entering the benthic food web appeared to be derived primarily from phytoplankton production, with little input from ice algae. One group of organisms, primarily deposit-feeders, appeared to rely on fresh phytodetritus recovered from the sediments, and sediment organic material that had been reworked by sediment microbes. A second group of animals, including many mobile invertebrate and fish predators, appeared to utilize epibenthic or pelagic food resources such as zooplankton. One surface-deposit-feeding holothurian ( Protelpidia murrayi) exhibited seasonal variability in stable isotope values of body tissue, while other surface- and subsurface-deposit-feeders showed no evidence of seasonal variability in food source or trophic position. Detritus from phytoplankton blooms appears to be the primary source of organic material for the detritivorous benthos; however, seasonal variability in the supply of this material is not mirrored in the sediments, and only to a minor degree in the benthic fauna. This pattern suggests substantial inertia in benthic-pelagic coupling, whereby the sediment ecosystem integrates long-term variability in production processes in the water column above.
NASA Astrophysics Data System (ADS)
Domonik, A.; Słaby, E.; Śmigielski, M.
2012-04-01
A self-similarity parameter, the Hurst exponent (H) (also called the roughness exponent), has been used to show the long-range dependence of element behaviour during the processes. The H value ranges between 0 and 1; a value of 0.5 indicates a random distribution indistinguishable from noise. For values greater than or less than 0.5, the system shows non-linear dynamics. H < 0.5 represents anti-persistent (more chaotic) behaviour, whereas H > 0.5 corresponds to increasing persistence (less chaotic). Such persistence is characterized as an effect of a long-term memory, and thus by a large degree of positive correlation. In theory, the preceding data constantly affect the next in the whole temporal series. Applied to chaotic dynamics, the system shows a subtle sensitivity to initial conditions. The process can show some degree of chaos, due to local variations, but generally the trend preserves its persistent character through time. If the exponent value is low, the process shows frequent and sudden reversals, i.e. the succeeding values in the data series are mutually negatively correlated. Thus, the system can be described as having a high degree of deterministic chaos. Alkali feldspar megacrysts grown from mixed magmas and recrystallized due to interaction with fluids have been selected for the study (Słaby et al., 2011). Hurst exponent variability has been calculated within some primary-magmatic and secondary-recrystallized crystal domains for some elements redistributed by crystal-fluid interaction. Based on the Hurst exponent value, two different processes can easily be recognized. In the core of the megacrysts the element distribution can be ascribed to magmatic growth. By contrast, the marginal zones can relate to inferred late crystal-fluid interactions. Both processes are deterministic, not random. The spatial distribution of elements in the crystal margins is irregular, with high-H values identifying the process as persistent. The trace element distributions in feldspar cores are almost homogeneous, and only relatively small and irregular variations in trace element contents make their growth morphology slightly patchy. Despite homogenization, the fractal statistics reveal that trace elements were incorporated chaotically into the growing crystal. The anti-persistent chaotic behaviour of elements during magmatic growth of the feldspars progressively changes into persistent behaviour within domains where the recrystallization reaction took place. Elements demonstrate variable dynamics of this exchange, corresponding to increasing persistency. These dynamics differ for individual elements compared with the analogous dynamics observed for crystallization from mixed magmas. Consequently, it appears that fractal statistics clearly discriminate between two different processes, with contrasting element behaviour during these processes. One process is magma crystallization, and it is recorded in the core of the megacrysts; the second is recorded in the crystal rims and along cleavages and cracks, such that it can be related to a post-crystallization process linked to fluid percolation. Słaby, E., Martin, H., Hamada, M., Śmigielski, M., Domonik, A., Götze, J., Hoefs, J., Hałas, S., Simon, K., Devidal, J-L., Moyen, J-F., Jayananda, M. (2011) Evidence in Archaean alkali-feldspar megacrysts for high-temperature interaction with mantle fluids. Journal of Petrology (online). doi:10.1093/petrology/egr056
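The Hurst exponent can be estimated in several ways; a simple rescaled-range (R/S) estimate is sketched below on a synthetic series. It illustrates how H is obtained from a data series, and is not a reproduction of the authors' procedure or window choices.

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, x.size - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())          # cumulative deviations
            r = dev.max() - dev.min()              # range
            s = w.std(ddof=1)                      # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)        # H is the log-log slope
    return slope

rng = np.random.default_rng(2)
print(hurst_rs(rng.normal(size=1024)))             # white noise: H near 0.5
```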
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on a thorough analysis of the cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared by means of eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates searching for the optimal process conditions necessary to achieve ideally spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
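A regression tree of the kind used above can be fitted with a standard library once the formulation and process variables are tabulated. The sketch below uses scikit-learn with hypothetical data for three of the fourteen inputs, so the printed rules are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical records: [spheronization speed (rpm), spheronization time (min), water content (%)]
X = np.array([[500, 2, 25], [900, 5, 35], [700, 3, 30], [1100, 6, 38],
              [600, 2, 27], [1000, 5, 36], [800, 4, 33], [1200, 7, 40]])
aspect_ratio = np.array([1.35, 1.12, 1.25, 1.08, 1.30, 1.10, 1.18, 1.05])

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, aspect_ratio)
print(export_text(tree, feature_names=["speed", "time", "water"]))
print(tree.predict([[950, 5, 35]]))   # predicted aspect ratio for a new formulation
```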
NASA Astrophysics Data System (ADS)
Abd Rashid, Amirul; Hayati Saad, Nor; Bien Chia Sheng, Daniel; Yee, Lee Wai
2014-06-01
pH value is one of the important variables for the tungsten trioxide (WO3) nanostructure hydrothermal synthesis process. The morphology of the synthesized nanostructure can be properly controlled by measuring and controlling the pH value of the solution used in this facile synthesis route. Therefore, it is crucial to ensure that the gauge used for pH measurement is reliable in order to achieve the expected result. In this study, the gauge repeatability and reproducibility (GR&R) method was used to assess the repeatability and reproducibility of the pH tester. Based on the ANOVA method, the design-of-experiments matrix as well as the results of the experiment were analyzed using Minitab software. It was found that the initial GR&R value for the tester was 17.55%, which is considered acceptable. To further improve the GR&R level, a new pH measuring procedure was introduced. With the new procedure, the GR&R value was reduced to 2.05%, which means the tester is statistically well suited to measuring the pH of the solution prepared for the WO3 hydrothermal synthesis process.
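As a rough illustration of what a GR&R figure summarizes, the sketch below performs a naive variance decomposition of repeated pH measurements into repeatability, reproducibility and part-to-part components. It is not the ANOVA procedure used in the study, and the operators, parts and readings are hypothetical.

```python
import numpy as np

# data[operator, part, trial] = measured pH (hypothetical values)
data = np.array([
    [[7.91, 7.93], [8.20, 8.22], [8.45, 8.44]],   # operator A
    [[7.94, 7.92], [8.23, 8.21], [8.43, 8.46]],   # operator B
])

repeatability_var = np.mean(np.var(data, axis=2, ddof=1))    # within operator-part cells
operator_means = data.mean(axis=(1, 2))
reproducibility_var = np.var(operator_means, ddof=1)         # naive between-operator term
part_var = np.var(data.mean(axis=(0, 2)), ddof=1)            # part-to-part variation

grr_var = repeatability_var + reproducibility_var
total_var = grr_var + part_var
print("%GR&R =", 100 * np.sqrt(grr_var / total_var))
```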
Intersubject Variability in Fearful Face Processing: The Link Between Behavior and Neural Activation
Doty, Tracy J.; Japee, Shruti; Ingvar, Martin; Ungerleider, Leslie G.
2014-01-01
Stimuli that signal threat show considerable variability in the extent to which they enhance behavior, even among healthy individuals. However, the neural underpinning of this behavioral variability is not well understood. By manipulating expectation of threat in an fMRI study of fearful vs. neutral face categorization, we uncovered a network of areas underlying variability in threat processing in healthy adults. We explicitly altered expectation by presenting face images at three different expectation levels: 80%, 50%, and 20%. Subjects were instructed to report as fast and as accurately as possible whether the face was fearful (signaled threat) or not. An uninformative cue preceded each face by 4 seconds (s). By taking the difference between response times (RT) to fearful compared to neutral faces, we quantified an overall fear RT bias (i.e. faster to fearful than neutral faces) for each subject. This bias correlated positively with late trial fMRI activation (8 s after the face) during unexpected fearful face trials in bilateral ventromedial prefrontal cortex, the left subgenual cingulate cortex, and the right caudate nucleus and correlated negatively with early trial fMRI activation (4 s after the cue) during expected neutral face trials in bilateral dorsal striatum and the right ventral striatum. These results demonstrate that the variability in threat processing among healthy adults is reflected not only in behavior but also in the magnitude of activation in medial prefrontal and striatal regions that appear to encode affective value. PMID:24841078
NASA Astrophysics Data System (ADS)
Rollion-Bard, C.; Erez, J.
2010-03-01
The boron isotope composition of marine carbonates is considered to be a seawater pH proxy. Nevertheless, the use of δ11B has some limitations, such as the knowledge of the fractionation factor (α4-3) between boric acid and the borate ion and the amplitude of "vital effects" on this proxy, which are not well constrained. Using secondary ion mass spectrometry (SIMS) we have examined the internal variability of the boron isotope ratio in the shallow-water, symbiont-bearing foraminifer Amphistegina lobifera. Specimens were cultured at constant temperature (24 ± 0.1 °C) in seawater with pH ranging between 7.90 and 8.45. Intra-shell boron isotopes showed large variability, with an upper limit value of ≈30‰. Our results suggest that the fractionation factor α4-3 of 0.97352 (Klochko et al., 2006) is in better agreement with our experiments and with direct pH measurements in seawater vacuoles associated with the biomineralization process in these foraminifera. Despite the large variability of the skeletal pH values in each cultured specimen, it is possible to link the lowest calculated pH values to the experimental culture pH values, while the upper pH limit is slightly below 9. This variability can be interpreted as follows: foraminifera variably increase the pH at the biomineralization site to about 9. This increase above ambient seawater pH leads to a range in δ11B (Δ11B) for each seawater pH. This Δ11B is linearly correlated with the culture seawater pH with a slope of -13.1 per pH unit, and is independent of the fractionation factor α4-3 or the δ11Bsw through time. It may also be independent of the pKB (the dissociation constant of boric acid) value. Therefore, Δ11B in foraminifera can potentially be used to reconstruct the paleo-pH of seawater.
STATISTICAL ANALYSIS OF SNAP 10A THERMOELECTRIC CONVERTER ELEMENT PROCESS DEVELOPMENT VARIABLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitch, S.H.; Morris, J.W.
1962-12-15
Statistical analysis, primarily analysis of variance, was applied to evaluate several factors involved in the development of suitable fabrication and processing techniques for the production of lead telluride thermoelectric elements for the SNAP 10A energy conversion system. The analysis methods are described as to their application for determining the effects of various processing steps, establishing the value of individual operations, and evaluating the significance of test results. The elimination of unnecessary or detrimental processing steps was accomplished and the number of required tests was substantially reduced by application of these statistical methods to the SNAP 10A production development effort. (auth)
Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation]
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1979-01-01
An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.
Intraseasonal and interannual oscillations in coupled ocean-atmosphere models
NASA Technical Reports Server (NTRS)
Hirst, Anthony C.; Lau, K.-M.
1990-01-01
An investigation is presented of the behavior of coupled ocean-atmosphere models in an environment where atmospheric wave speeds are substantially reduced from dry atmospheric values by such processes as condensation-moisture convergence. Modes are calculated for zonally periodic, unbounded ocean-atmosphere systems, emphasizing the importance of including prognostic atmosphere equations in simple coupled ocean-atmosphere models with a view to simulations of intraseasonal variability and its possible interaction with interannual variability. The dynamics of low- and high-frequency modes are compared; both classes are sensitive to the degree to which surface wind anomalies are able to affect the evaporation rate.
Chrestenson transform FPGA embedded factorizations.
Corinthios, Michael J
2016-01-01
Chrestenson generalized Walsh transform factorizations for parallel processing embedded implementations on field programmable gate arrays are presented. This general base transform, sometimes referred to as the Discrete Chrestenson transform, has received special attention in recent years. In fact, the Discrete Fourier transform and Walsh-Hadamard transform are but special cases of the Chrestenson generalized Walsh transform. Rotations of a base-p hypercube, where p is an arbitrary integer, are shown to produce dynamic contention-free memory allocation in the processor architecture. The approach is illustrated by factorizations involving the processing of transform matrices that are functions of four variables. Parallel operations are implemented as matrix multiplications. Each matrix, of dimension N × N, where N = p^n with n an integer, has a structure that depends on a variable parameter k denoting the iteration number in the factorization process. The level of parallelism, in the form of M = p^m processors, can be chosen arbitrarily by varying m from zero to its maximum value of n - 1. The result is an equation describing the generalized parallelism factorization as a function of the four variables n, p, k and m. Applications of the approach are shown in relation to configuring field programmable gate arrays for digital signal processing applications.
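As background for the factorization discussion, the sketch below (which is not the paper's FPGA factorization) constructs the base-p Chrestenson / generalized Walsh matrix of order N = p^n as an n-fold Kronecker product of the p-point DFT matrix and applies it by direct matrix multiplication; for p = 2 this reduces to the Walsh-Hadamard matrix. The function names and example sizes are assumptions for illustration only; the paper's factorizations replace this dense product with n sparse parallel stages.

```python
# Minimal sketch (not the paper's FPGA factorization): the base-p Chrestenson /
# generalized Walsh matrix of order N = p**n built as an n-fold Kronecker product
# of the p-point DFT matrix. For p = 2 this reduces to the Walsh-Hadamard matrix.
import numpy as np
from scipy.linalg import hadamard

def dft_matrix(p):
    """p x p DFT matrix with entries w**(j*k), w = exp(-2*pi*i/p)."""
    j, k = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    return np.exp(-2j * np.pi * j * k / p)

def chrestenson_matrix(p, n):
    """Chrestenson (generalized Walsh) matrix of size p**n, natural ordering."""
    m = np.array([[1.0 + 0j]])
    for _ in range(n):
        m = np.kron(m, dft_matrix(p))
    return m

def chrestenson_transform(x, p):
    """Apply the transform to a length p**n vector by direct matrix multiplication."""
    n = int(round(np.log(len(x)) / np.log(p)))
    assert p ** n == len(x), "input length must be a power of p"
    return chrestenson_matrix(p, n) @ x

# Check: for p = 2 the matrix is the (unnormalized, Sylvester-ordered) Hadamard matrix.
assert np.allclose(chrestenson_matrix(2, 3).real, hadamard(8))

x = np.arange(27, dtype=float)
y = chrestenson_transform(x, p=3)   # base-3 example with N = 3**3
```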
Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...
2016-10-18
In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.
A method for developing outcome measures in the clinical laboratory.
Jones, J
1996-01-01
Measuring and reporting outcomes in health care is becoming more important for quality assessment, utilization assessment, accreditation standards, and negotiating contracts in managed care. How does one develop an outcome measure for the laboratory to assess the value of its services? A method is described which outlines seven steps in developing outcome measures for a laboratory service or process. These steps include the following: 1. Identify the process or service to be monitored for performance and outcome assessment. 2. If necessary, form a multidisciplinary team of laboratory staff, other department staff, physicians, and pathologists. 3. State the purpose of the test or service, including a review of published data for the clinical-pathological correlation. 4. Prepare a process cause-and-effect diagram including steps critical to the outcome. 5. Identify key process variables that contribute to positive or negative outcomes. 6. Identify outcome measures that are not process measures. 7. Develop an operational definition, identify data sources, and collect data. Examples, including a process cause-and-effect diagram, process variables, and outcome measures, are given using the Therapeutic Drug Monitoring (TDM) service. A summary of conclusions and precautions for outcome measurement is then provided.
NASA Astrophysics Data System (ADS)
Zheng, Zhongchao; Seto, Tatsuru; Kim, Sanghong; Kano, Manabu; Fujiwara, Toshiyuki; Mizuta, Masahiko; Hasebe, Shinji
2018-06-01
The Czochralski (CZ) process is the dominant method for manufacturing large cylindrical single-crystal ingots for the electronics industry. Although many models and control methods for the CZ process have been proposed, they were only tested on small equipment and only a few industrial applications were reported. In this research, we constructed a first-principles model for controlling industrial CZ processes that produce 300 mm single-crystal silicon ingots. The developed model, which consists of energy balance, mass balance, hydrodynamic, and geometrical equations, calculates the crystal radius and the crystal growth rate as output variables by using the heater input, the crystal pulling rate, and the crucible rise rate as input variables. To improve accuracy, we modeled the CZ process by considering factors such as changes in the positions of the crucible and the melt level. The model was validated with operation data from an industrial 300 mm CZ process. We compared the calculated and actual values of the crystal radius and the crystal growth rate, and the results demonstrated that the developed model simulates the industrial process with high accuracy.
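The authors' full first-principles model is not reproduced here; the fragment below is only a toy kinematic/mass-balance relation illustrating how, for a fixed crystal radius, the pulling rate and crucible rise rate combine with the falling melt level to set the growth rate. The densities, radii and rates in the example are placeholder values.

```python
# Toy kinematic/mass-balance fragment (not the authors' full first-principles model):
# for a fixed crystal radius, melt conservation couples the crystal growth rate to
# the pulling rate, the crucible rise rate and the falling melt level. All numbers
# below are illustrative placeholders.
RHO_SOLID = 2330.0   # kg/m^3, solid silicon (approximate)
RHO_MELT = 2570.0    # kg/m^3, molten silicon (approximate)

def growth_rate(v_pull, v_crucible, r_crystal, r_crucible):
    """Growth rate relative to the melt surface, assuming a steady meniscus.

    Melt conservation: rho_s * pi * r^2 * v_g = -rho_l * pi * R^2 * dh/dt,
    and v_g = v_pull - (v_crucible + dh/dt), where h is the melt level in the
    crucible. Eliminating dh/dt gives the closed-form expression below.
    """
    k = (RHO_SOLID * r_crystal ** 2) / (RHO_MELT * r_crucible ** 2)
    return (v_pull - v_crucible) / (1.0 - k)

# Example: 300 mm ingot (150 mm radius) grown in a 400 mm-radius crucible.
v_g = growth_rate(v_pull=1.0e-3 / 60, v_crucible=0.1e-3 / 60,
                  r_crystal=0.150, r_crucible=0.400)
print(f"growth rate = {v_g * 60 * 1e3:.3f} mm/min")
```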
Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.
2005-01-01
GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients. For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
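To illustrate the response-matrix idea in the linear case, the sketch below (not GWM's input format or solver) maximizes total withdrawal from three hypothetical wells subject to drawdown limits at two constraint locations, using a precomputed response matrix and an off-the-shelf LP solver; all coefficients and bounds are invented for illustration.

```python
# Minimal sketch of the response-matrix idea behind a linear GWM-style formulation
# (this is not GWM's input format or solver): maximize total withdrawal from three
# candidate wells subject to drawdown limits at two constraint locations. The
# response coefficients and bounds below are hypothetical.
import numpy as np
from scipy.optimize import linprog

# response[i, j]: drawdown (m) at constraint location i per unit withdrawal (m^3/d)
# at well j, obtained in a GWM-like workflow by perturbing each flow-rate variable.
response = np.array([[0.004, 0.001, 0.0005],
                     [0.001, 0.003, 0.0020]])
max_drawdown = np.array([2.0, 1.5])        # upper bounds on drawdown (m)
well_capacity = [(0.0, 1500.0)] * 3        # bounds on each withdrawal rate (m^3/d)

# linprog minimizes, so negate the objective to maximize total withdrawal.
result = linprog(c=[-1.0, -1.0, -1.0],
                 A_ub=response, b_ub=max_drawdown,
                 bounds=well_capacity, method="highs")

print("optimal withdrawals (m^3/d):", result.x)
print("total withdrawal (m^3/d):", -result.fun)
```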
Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J
1991-10-20
An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a stable variable, at a given low value by steering the dilution rate. It was thereby expected to ensure the stability of the process during startup and during steady-state operation with acceptable performance. The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled for 161 days. The results provided real-life validation of the adaptive control algorithm and documented the expected stability and acceptable performance.
Intelligent Tutoring for Programming Tasks: Using Plan Analysis to Generate Better Hints.
1982-03-01
construction and execution of a BASIC program that assigns an integer value to a variable and then prints the value of that integer. - ARTICHOKE: assign...the string "ARTICHOKE" to a string variable, assign the value of that variable to a second variable, and print the second variable. -SINOP: get two...the first five tasks: GREENFLAG, ARTICHOKE, SINOP, NINOP, and TWOS. Because the protocols are very long, it was necessary to condense them into a
Modelling the co-evolution of indirect genetic effects and inherited variability.
Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter
2018-03-28
When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGE and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. In contrast to the existing quantitative genetic models, our model allows for co-evolution of IGEs and variability, as the regression coefficient can respond to selection. Our simulations show that the model results in increased variability of body weight with increasing competition. When competition decreases, i.e., cooperation evolves, variability becomes significantly smaller. Hence, our model facilitates quantitative genetic studies on the relationship between IGEs and inherited variability. Moreover, our findings suggest that we may have been overlooking an entire level of genetic variation in variability, the one due to IGEs.
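One way to write the verbal model above in symbols (the notation is ours, not the authors') is:

```latex
% Focal phenotype P_i combines direct genetic (A_i) and environmental (E_i) effects
% with a heritable regression psi_i on the partner-focal trait difference.
P_i = \mu + A_i + E_i + \psi_i\,(P_j - P_i),
\qquad
\psi_i = \mu_\psi + A_{\psi,i} + E_{\psi,i}
```

Here a negative ψ indicates competition and a positive ψ cooperation; because ψ carries its own genetic component A_ψ, it can respond to selection, which is what allows IGEs and inherited variability to co-evolve in the model.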
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
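For the unconstrained continuous case mentioned above, the closed form can be stated as follows (our notation, natural logarithms); this is the standard generalized-Gaussian maximizer and makes the straight-line dependence on the logarithm of the Lp norm explicit:

```latex
% Maximizer and maximum differential entropy for a fixed L_p norm (p finite):
f^*(x) = \frac{1}{2\,a\,\Gamma(1+1/p)}\,
         \exp\!\left(-\left|\tfrac{x}{a}\right|^{p}\right),
\qquad a = p^{1/p}\,\lVert X \rVert_p,
\qquad
h_{\max} = \frac{1}{p} + \ln\!\left(2\,\Gamma(1+1/p)\,p^{1/p}\right) + \ln \lVert X \rVert_p .
```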
Towards simplification of hydrologic modeling: Identification of dominant processes
Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.
2016-01-01
The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
Culture-gene coevolution of individualism-collectivism and the serotonin transporter gene.
Chiao, Joan Y; Blizinsky, Katherine D
2010-02-22
Culture-gene coevolutionary theory posits that cultural values have evolved, are adaptive and influence the social and physical environments under which genetic selection operates. Here, we examined the association between cultural values of individualism-collectivism and allelic frequency of the serotonin transporter functional polymorphism (5-HTTLPR) as well as the role this culture-gene association may play in explaining global variability in prevalence of pathogens and affective disorders. We found evidence that collectivistic cultures were significantly more likely to comprise individuals carrying the short (S) allele of the 5-HTTLPR across 29 nations. Results further show that historical pathogen prevalence predicts cultural variability in individualism-collectivism owing to genetic selection of the S allele. Additionally, cultural values and frequency of S allele carriers negatively predict global prevalence of anxiety and mood disorder. Finally, mediation analyses further indicate that increased frequency of S allele carriers predicted decreased anxiety and mood disorder prevalence owing to increased collectivistic cultural values. Taken together, our findings suggest culture-gene coevolution between allelic frequency of 5-HTTLPR and cultural values of individualism-collectivism and support the notion that cultural values buffer genetically susceptible populations from increased prevalence of affective disorders. Implications of the current findings for understanding culture-gene coevolution of human brain and behaviour as well as how this coevolutionary process may contribute to global variation in pathogen prevalence and epidemiology of affective disorders, such as anxiety and depression, are discussed.
Spike-Timing of Orbitofrontal Neurons Is Synchronized With Breathing.
Kőszeghy, Áron; Lasztóczi, Bálint; Forro, Thomas; Klausberger, Thomas
2018-01-01
The orbitofrontal cortex (OFC) has been implicated in a multiplicity of complex brain functions, including representations of expected outcome properties, post-decision confidence, momentary food-reward values, complex flavors and odors. As breathing rhythm has an influence on odor processing at primary olfactory areas, we tested the hypothesis that it may also influence neuronal activity in the OFC, a prefrontal area involved also in higher order processing of odors. We recorded spike timing of orbitofrontal neurons as well as local field potentials (LFPs) in awake, head-fixed mice, together with the breathing rhythm. We observed that a large majority of orbitofrontal neurons showed robust phase-coupling to breathing during immobility and running. The phase coupling of action potentials to breathing was significantly stronger in orbitofrontal neurons compared to cells in the medial prefrontal cortex. The characteristic synchronization of orbitofrontal neurons with breathing might provide a temporal framework for multi-variable processing of olfactory, gustatory and reward-value relationships.
[French norms of imagery for pictures, for concrete and abstract words].
Robin, Frédérique
2006-09-01
This paper deals with French norms for mental image versus picture agreement for 138 pictures and the imagery value of 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass and Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from the verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of the objects referred to in the pictures and the concrete words, and 3) subjective verbal frequency of words. The originality of this work is to provide French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, these studies focus on variations of figurative and verbal stimuli in visual imagery processes.
NASA Technical Reports Server (NTRS)
Mckay, C. P.
1985-01-01
To investigate the occurrence of low temperatures and the formation of noctilucent clouds in the summer mesosphere, a one-dimensional time-dependent photochemical-thermal numerical model of the atmosphere between 50 and 120 km has been constructed. The model self-consistently solves the coupled photochemical and thermal equations as perturbation equations from a reference state assumed to be in equilibrium and is used to consider the effect of variability in water vapor in the lower mesosphere on the temperature in the region of noctilucent cloud formation. It is found that a change in water vapor from an equilibrium value of 5 ppm at 50 km to a value of 10 ppm, a variation consistent with observations, can produce a roughly 15 K drop in temperature at 82 km. It is suggested that this process may produce weeks of cold temperatures and influence noctilucent cloud formation.
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process used to predict the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as the life cycle cost spares calculation. A minor deviation in the values of the maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability, and maintenance support costs. It is the objective of this report to identify the magnitude of the expected enhancement in the accuracy of the results for the International Space Station reliability and maintainability data packages by providing examples. These examples partially portray the necessary information by evaluating the impact of the said enhancements on the life cycle cost and the availability of the International Space Station.
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kudinov, V. A.
2014-09-01
The differential equation of damped string vibrations was obtained with the finite speed of extension and strain propagation in the Hooke's law formula taken into account. In contrast to the well-known equations, the obtained equation contains the first and third time derivatives of the displacement and the mixed derivative with respect to the space and time variables. Separation of variables was used to obtain its exact closed-form solution, whose analysis showed that, for large values of the relaxation coefficient, the string's return to the initial state after its displacement from equilibrium is accompanied by high-frequency low-amplitude damped vibrations, which occur on the initial time interval only in the region of positive displacements. In the limit, for some large values of the relaxation coefficient, the string's return to the initial state occurs practically without any oscillatory process.
NASA Astrophysics Data System (ADS)
Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.
2017-02-01
Calibration of instrumentation equipment in the pharmaceutical industry is an important activity for determining the true value of a measurement. Preliminary studies indicated that calibration lead times resulted in disruption of production and laboratory activities. This study aimed to analyze the causes of the calibration lead time. Several methods were used in this study: Six Sigma was used to determine the capability of the instrument calibration process, and brainstorming, Pareto diagrams, and fishbone diagrams were used to identify and analyze the problems. The Analytic Hierarchy Process (AHP) method was then used to create a hierarchical structure and prioritize the problems. The results showed a DPMO value of about 40,769.23, which is equivalent to a sigma level of approximately 3.24σ for the calibration process. This indicated the need for improvements in the calibration process. Problem-solving strategies for the calibration lead time were then determined, such as shortening the preventive maintenance schedule, increasing the number of instrument calibrators, and training personnel. Consistency tests on the complete matrices of pairwise comparisons showed consistency ratio (CR) values below 0.1.
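The DPMO-to-sigma conversion quoted above follows the conventional normal-quantile relation with a 1.5σ long-term shift; a minimal sketch (the function names are ours) that reproduces the quoted figures:

```python
# Minimal sketch of the conventional DPMO-to-sigma-level conversion (with the
# customary 1.5-sigma long-term shift); reproduces the figures quoted above.
from scipy.stats import norm

def dpmo_from_defects(defects, opportunities):
    """Defects per million opportunities."""
    return 1_000_000.0 * defects / opportunities

def sigma_level(dpmo, shift=1.5):
    """Short-term sigma level corresponding to a long-term defect rate."""
    yield_fraction = 1.0 - dpmo / 1_000_000.0
    return norm.ppf(yield_fraction) + shift

print(f"{sigma_level(40769.23):.2f} sigma")   # ~3.24, matching the study's value
```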
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms demonstrates the value of the GBLR approach. This presentation also illustrates how the variances are optimized for the training partition in GBLR and discusses the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
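The GBLR implementation itself is not shown here; the sketch below is a minimal conjugate Bayesian linear regression for a single station, mapping a raw ensemble predictor to an observed value and issuing a corrected forecast with an uncertainty estimate. The prior and noise precisions and the synthetic training pairs are placeholder assumptions.

```python
# Minimal conjugate Bayesian linear regression for a single station (not the GBLR
# implementation itself): posterior over regression coefficients that map raw
# ensemble predictors to the observed value, then a corrected forecast for a new
# storm. Prior precision alpha and noise precision beta are placeholder values.
import numpy as np

def blr_posterior(X, y, alpha=1.0, beta=25.0):
    """Return posterior mean and covariance of weights for y ~ N(Xw, 1/beta)."""
    X = np.column_stack([np.ones(len(X)), X])          # add intercept column
    S_inv = alpha * np.eye(X.shape[1]) + beta * X.T @ X
    S = np.linalg.inv(S_inv)                            # posterior covariance
    m = beta * S @ X.T @ y                              # posterior mean
    return m, S

def blr_predict(x_new, m, S, beta=25.0):
    """Predictive mean and variance for a new predictor vector."""
    x = np.concatenate([[1.0], np.atleast_1d(x_new)])
    return x @ m, 1.0 / beta + x @ S @ x

# Hypothetical training pairs: ensemble-mean wind speed (m/s) vs. observed wind speed.
rng = np.random.default_rng(0)
raw = rng.uniform(5, 25, size=40)
obs = 0.8 * raw + 1.5 + rng.normal(0, 1.0, size=40)    # synthetic observation model

m, S = blr_posterior(raw.reshape(-1, 1), obs)
mean, var = blr_predict(10.0, m, S)
print(f"corrected forecast: {mean:.2f} +/- {var ** 0.5:.2f} m/s")
```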
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
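A minimal simulation of the δ-exceedance process just defined, assuming unit-mean exponential entries and a constant handicap (the names and sample sizes are ours), illustrates the two phases: the fraction of entries that become records collapses toward zero below the transition and stays finite above it.

```python
# Minimal simulation of the delta-exceedance record process described above:
# an entry becomes the next record if it exceeds the current record value minus
# the handicap delta. For exponential variables a constant delta separates a
# normal phase (record fraction -> 0) from a stationary phase (finite fraction).
import numpy as np

def record_fraction(delta, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.exponential(scale=1.0, size=n)   # i.i.d. Exp(1) entries
    record_value = -np.inf
    records = 0
    for xi in x:
        if xi > record_value - delta:        # delta-exceedance condition
            record_value = xi
            records += 1
    return records / n

for delta in (0.5, 1.0, 2.0):                # for Exp(1) the transition is at delta = 1
    print(delta, record_fraction(delta))
```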
White, Rebecca M B; Roosa, Mark W; Zeiders, Katharine H
2012-10-01
We present an integrated model for understanding Mexican American youth mental health within family, neighborhood, and cultural contexts. We combined two common perspectives on neighborhood effects to hypothesize that (a) parents' perceptions of neighborhood risk would negatively impact their children's mental health by disrupting key parenting and family processes, and (b) objective neighborhood risk would alter the effect parent and family processes had on youth mental health. We further incorporated a cultural perspective to hypothesize that an ethnic minority group's culture-specific values may support parents to successfully confront neighborhood risk. We provided a conservative test of the integrated model by simultaneously examining three parenting and family process variables: maternal warmth, maternal harsh parenting, and family cohesion. The hypothesized model was estimated prospectively in a diverse, community-based sample of Mexican American adolescents and their mothers (N = 749) living in the southwestern United States. Support for specific elements of the hypothesized model varied depending on the parenting or family process variable examined. For family cohesion results were consistent with the combined neighborhood perspectives. The effects of maternal warmth on youth mental health were altered by objective neighborhood risk. For harsh parenting, results were somewhat consistent with the cultural perspective. The value of the integrated model for research on the impacts of family, neighborhood, and cultural contexts on youth mental health are discussed, as are implications for preventive interventions for Mexican American families and youth. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Process envelopes for stabilisation/solidification of contaminated soil using lime-slag blend.
Kogbara, Reginald B; Yi, Yaolin; Al-Tabbaa, Abir
2011-09-01
Stabilisation/solidification (S/S) has emerged as an efficient and cost-effective technology for the treatment of contaminated soils. However, the performance of S/S-treated soils is governed by several intercorrelated variables, which complicates the optimisation of the treatment process design. Therefore, it is desirable to develop process envelopes, which define the range of operating variables that result in acceptable performance. In this work, process envelopes were developed for S/S treatment of contaminated soil with a blend of hydrated lime (hlime) and ground granulated blast furnace slag (GGBS) as the binder (hlime/GGBS = 1:4). A sand contaminated with a mixture of heavy metals and petroleum hydrocarbons was treated with 5%, 10% and 20% binder dosages, at different water contents. The effectiveness of the treatment was assessed using unconfined compressive strength (UCS), permeability, acid neutralisation capacity and contaminant leachability with pH, at set periods. The UCS values obtained after 28 days of treatment were up to ∼800 kPa, which is quite low, and permeability was ∼10^-8 m/s, which is higher than might be required. However, these values might be acceptable in some scenarios. The binder significantly reduced the leachability of cadmium and nickel. With the 20% dosage, both metals met the waste acceptance criteria for inert waste landfill and relevant environmental quality standards. The results show that greater than 20% dosage would be required to achieve a balance of acceptable mechanical and leaching properties. Overall, the process envelopes for different performance criteria depend on the end-use of the treated material.
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
NASA Astrophysics Data System (ADS)
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. Pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data were selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm^-1). This model produced an RMSECV of 400 mg kg^-1 S and an RMSEP of 420 mg kg^-1 S, with a correlation coefficient of 0.990.
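The authors' siPLS implementation is not reproduced here; the sketch below only illustrates the synergy-interval idea with scikit-learn tools: split the spectral axis into 20 intervals, score every combination of 3 intervals by cross-validated RMSE with a PLS model, and keep the best combination. The arrays X and y are random placeholders standing in for the FT-IR/ATR spectra and sulfur reference values.

```python
# Compact sketch of the synergy-interval PLS (siPLS) idea, not the authors' exact
# implementation: split the spectrum into intervals, evaluate a PLS model on every
# combination of 3 intervals by cross-validated RMSE, and keep the best combination.
from itertools import combinations
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def sipls_select(X, y, n_intervals=20, n_combine=3, n_components=5, cv=10):
    intervals = np.array_split(np.arange(X.shape[1]), n_intervals)
    best = (np.inf, None)
    for combo in combinations(range(n_intervals), n_combine):
        cols = np.concatenate([intervals[i] for i in combo])
        pls = PLSRegression(n_components=min(n_components, len(cols)))
        rmsecv = -cross_val_score(pls, X[:, cols], y, cv=cv,
                                  scoring="neg_root_mean_squared_error").mean()
        if rmsecv < best[0]:
            best = (rmsecv, combo)
    return best   # (lowest RMSECV, indices of the selected intervals)

# Placeholder data: 40 calibration spectra with 700 wavenumber points.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 700))
y = X[:, 100:110].sum(axis=1) + rng.normal(scale=0.1, size=40)
rmsecv, combo = sipls_select(X, y)
print(f"best interval combination: {combo}, RMSECV = {rmsecv:.3f}")
```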
ERIC Educational Resources Information Center
St-Onge, Christina; Chamberland, Martine; Lévesque, Annie; Varpio, Lara
2016-01-01
Performance-based assessment (PBA) is a valued assessment approach in medical education, be it in a clerkship, residency, or practice context. Raters are intrinsic to PBA and the increased use of PBA has led to an increased interest in rater cognition. Although several researchers have tackled factors that may influence the variability in rater…
Zhiyong Cai; Qinglin Wu; Jong N. Lee; Salim Hiziroglu
2004-01-01
The purpose of this study was to investigate mechanical and physical performances of particleboard made from low-value eastern redcedar trees. The properties evaluated included bending strength and stiffness, swelling, surface hardness, and screw holding capacity as a function of processing variables (i.e., density, chip type, and board construction). Two types of...
[Gaussian process regression and its application in near-infrared spectroscopy analysis].
Feng, Ai-Ming; Fang, Li-Min; Lin, Min
2011-06-01
Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near-infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross-validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was introduced as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability with r values above 0.99, and the prediction ability is also satisfactory with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
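A minimal scikit-learn sketch in the spirit of the paper (not its exact kernel, preprocessing or UVE step) shows GP regression on selected spectral variables with RMSEP and r as figures of merit; the data arrays are synthetic placeholders rather than the corn dataset.

```python
# Minimal Gaussian process regression sketch (not the paper's exact setup):
# RBF + white-noise kernel on NIR-like data, with RMSEP and correlation
# coefficient r as figures of merit. X and y are placeholders standing in for
# preprocessed spectra and a reference ingredient value.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))                       # 80 spectra, 50 selected variables
y = X[:, :5].mean(axis=1) + rng.normal(scale=0.05, size=80)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_cal, y_cal)

y_pred = gpr.predict(X_val)
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
r = np.corrcoef(y_val, y_pred)[0, 1]
print(f"RMSEP = {rmsep:.4f}, r = {r:.3f}")
```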
Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed
2014-01-01
In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X1), the flow rate (X2), the catalyst weight (X3), the catalyst loading (X4) and the glycerol-water molar ratio (X5), on the H2 yield (Y1) and the conversion of glycerol to gaseous products (Y2) were explored. Using multiple regression analysis, the experimental results of the H2 yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models have correlated the dependent factors well within the limits that were being examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20% and a glycerol-water molar ratio of approximately 12, where the H2 yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables that were studied. PMID:28788567
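A compact sketch of the kind of second-order (quadratic) response-surface fit described above, using generic least-squares tools rather than the authors' software; the design points, response values and candidate operating point are placeholder assumptions.

```python
# Minimal sketch of fitting a full second-order (quadratic) response-surface model
# by multiple regression, as in RSM (not the authors' fitted coefficients): the
# design matrix contains linear, interaction and squared terms of the five factors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(32, 5))                  # coded levels of X1..X5
y = 50 + 8 * X[:, 0] - 5 * X[:, 0] ** 2 + 3 * X[:, 1] * X[:, 4] \
    + rng.normal(scale=1.0, size=32)                  # synthetic H2 yield (%)

quad = PolynomialFeatures(degree=2, include_bias=False)
Xq = quad.fit_transform(X)                            # linear, interaction, squared terms
model = LinearRegression().fit(Xq, y)

x_candidate = np.array([[0.8, 0.2, -0.5, 0.9, 0.6]])  # a candidate operating point
print("predicted yield (%):", model.predict(quad.transform(x_candidate))[0])
print("R^2 on the design points:", model.score(Xq, y))
```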
NASA Astrophysics Data System (ADS)
Hayashi, Masaki; Farrow, Christopher R.
2014-12-01
Groundwater recharge sets a constraint on aquifer water balance in the context of water management. Historical data on groundwater and other relevant hydrological processes can be used to understand the effects of climatic variability on recharge, but such data sets are rare. The climate of the Canadian prairies is characterized by large inter-annual and inter-decadal variability in precipitation, which provides opportunities to examine the response of groundwater recharge to changes in meteorological conditions. A decadal study was conducted in a small (250 km^2) prairie watershed in Alberta, Canada. Relative magnitude of annual recharge, indicated by water-level rise, was significantly correlated with a combination of growing-season precipitation and snowmelt runoff, which drives depression-focussed infiltration of meltwater. Annual precipitation was greater than vapour flux at an experimental site in some years and smaller in other years. On average precipitation minus vapour flux was 10 mm y^-1, which was comparable to the magnitude of watershed-scale groundwater recharge estimated from creek baseflow. Average baseflow showed a distinct shift from a low value (4 mm y^-1) in 1982-1995 to a high value (15 mm y^-1) in 2003-2013, indicating the sensitivity of groundwater recharge to a decadal-scale variability of meteorological conditions.
Climate simulations and projections with a super-parameterized climate model
Stan, Cristiana; Xu, Li
2014-07-01
The mean climate and its variability are analyzed in a suite of numerical experiments with a fully coupled general circulation model in which subgrid-scale moist convection is explicitly represented through embedded 2D cloud-system resolving models. Control simulations forced by the present day, fixed atmospheric carbon dioxide concentration are conducted using two horizontal resolutions and validated against observations and reanalyses. The mean state simulated by the higher resolution configuration has smaller biases. Climate variability also shows some sensitivity to resolution but not as uniform as in the case of the mean state. The interannual and seasonal variability are better represented in the simulation at lower resolution whereas the subseasonal variability is more accurate in the higher resolution simulation. The equilibrium climate sensitivity of the model is estimated from a simulation forced by an abrupt quadrupling of the atmospheric carbon dioxide concentration. The equilibrium climate sensitivity temperature of the model is 2.77 °C, and this value is slightly smaller than the mean value (3.37 °C) of contemporary models using conventional representation of cloud processes. As a result, the climate change simulation forced by the representative concentration pathway 8.5 scenario projects an increase in the frequency of severe droughts over most of North America.
NASA Astrophysics Data System (ADS)
Baroroh, D. K.; Alfiah, D.
2018-05-01
Electric vehicles are one innovation for reducing vehicle pollution. Nevertheless, a problem remains, especially at the disposal stage. To support a product design and development strategy based on sustainable design and to address the disposal stage, the modularity of the electric vehicle architecture in the recovery process needs to be assessed. This research used the Design Structure Matrix (DSM) approach to determine component interactions and to assess architectural modularity by calculating the values of three variables, namely Module Independence (MI), Module Similarity (MS), and Modularity for the End-of-Life Stage (MEOL). The results show that the existing electric vehicle design has an architecture with a high modularity value for the recovery process at the disposal stage. Accordingly, components or modules can be reused and recycled without a full disassembly process, supporting an environmentally friendly (sustainable) design and reducing disassembly cost.
NASA Astrophysics Data System (ADS)
Sonam, Sonam; Jain, Vikrant
2017-04-01
River long profile is one of the fundamental geomorphic parameters and provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes at the 10^5-10^6 year time scale, and it controls modern-day (10^0-10^1 year time scale) fluvial processes by controlling the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and the stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for the hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second-order exponential function provides the best representation of the long profiles. The second-order exponential equation is Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the β1 (fast) and β2 (slow) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. The channel slope of the long profile is estimated by taking the derivative of the exponential function. The stream power distribution pattern along the long profile is estimated by superimposing the discharge on the long profile slope. Sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second-order exponential equation is evaluated for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak value depends on K1, the proportion of elevation change coming under the fast decay exponent, and the location of the stream power peak depends on the long profile decay coefficient (β1). Different long profile shapes owing to litho-tectonic variability across the Himalayas are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and lower stream power in the alluvial plains. This is responsible for (1) higher erosion rates and sediment supply in the hinterland of the eastern rivers, (2) the incised and stable nature of channels in the western alluvial plains and (3) aggrading channels with a dynamic nature in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn controls the position and magnitude of the stream power maxima and hence the geomorphic variability in a fluvial system.
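A minimal sketch of the profile-fitting and stream-power steps described above (placeholder data, not the Ganga profiles or the authors' code): fit the second-order exponential with nonlinear least squares, differentiate it analytically for channel slope, and combine slope with discharge into total stream power Ω = ρgQS.

```python
# Minimal sketch of the workflow described above (placeholder data): fit the
# second-order exponential Z(L) = K1*exp(-b1*L) + K2*exp(-b2*L), take its
# analytical derivative for channel slope, and combine slope with discharge
# into total stream power (Omega = rho * g * Q * S).
import numpy as np
from scipy.optimize import curve_fit

def profile(L, K1, b1, K2, b2):
    """Second-order exponential long profile; L in km, Z in m."""
    return K1 * np.exp(-b1 * L) + K2 * np.exp(-b2 * L)

def channel_slope(L, K1, b1, K2, b2):
    """S = -dZ/dL, converted from m/km to m/m."""
    return (K1 * b1 * np.exp(-b1 * L) + K2 * b2 * np.exp(-b2 * L)) / 1000.0

# Placeholder profile standing in for DEM-derived elevations along a river.
L = np.linspace(0, 800, 200)                       # downstream distance (km)
Z = profile(L, 3500, 0.02, 1500, 0.002) + np.random.default_rng(0).normal(0, 5, 200)

(K1, b1, K2, b2), _ = curve_fit(profile, L, Z, p0=(3000, 0.01, 1000, 0.001))

Q = 500.0 + 5.0 * L                                # placeholder discharge (m^3/s)
S = channel_slope(L, K1, b1, K2, b2)
omega = 1000.0 * 9.81 * Q * S                      # total stream power (W/m)
print(f"stream power peaks at ~{L[np.argmax(omega)]:.0f} km downstream")
```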
NASA Astrophysics Data System (ADS)
Cantarino, I.; Torrijo, F. J.; Palencia, S.; Gielen, E.
2014-05-01
This paper proposes a method of valuing the stock of residential buildings in Spain as the first step in assessing possible damage caused to them by natural hazards. For the purposes of the study we had access to the SIOSE (the Spanish Land Use and Cover Information System), a high-resolution land-use model, as well as to a report on the financial valuations of this type of buildings throughout Spain. Using dasymetric disaggregation processes and GIS techniques we developed a geolocalized method of obtaining this information, which was the exposure variable in the general risk assessment formula. If hazard maps and risk assessment methods - the other variables - are available, the risk value can easily be obtained. An example of its application is given in a case study that assesses the risk of a landslide in the entire 23 200 km2 of the Valencia Autonomous Community (NUT2), the results of which are analyzed by municipal areas (LAU2) for the years 2005 and 2009.
Linking social change and developmental change: shifting pathways of human development.
Greenfield, Patricia M
2009-03-01
P. M. Greenfield's new theory of social change and human development aims to show how changing sociodemographic ecologies alter cultural values and learning environments and thereby shift developmental pathways. Worldwide sociodemographic trends include movement from rural residence, informal education at home, subsistence economy, and low-technology environments to urban residence, formal schooling, commerce, and high-technology environments. The former ecology is summarized by the German term Gemeinschaft ("community") and the latter by the German term Gesellschaft ("society"; Tönnies, 1887/1957). A review of empirical research demonstrates that, through adaptive processes, movement of any ecological variable in a Gesellschaft direction shifts cultural values in an individualistic direction and developmental pathways toward more independent social behavior and more abstract cognition--to give a few examples of the myriad behaviors that respond to these sociodemographic changes. In contrast, the (much less frequent) movement of any ecological variable in a Gemeinschaft direction is predicted to move cultural values and developmental pathways in the opposite direction. In conclusion, sociocultural environments are not static either in the developed or the developing world and therefore must be treated dynamically in developmental research.
Modelling and Optimising the Value of a Hybrid Solar-Wind System
NASA Astrophysics Data System (ADS)
Nair, Arjun; Murali, Kartik; Anbuudayasankar, S. P.; Arjunan, C. V.
2017-05-01
In this paper, a net present value (NPV) approach for a solar-wind hybrid system is presented. The system in question aims at supporting an investor by assessing an investment in a solar-wind hybrid system in a given area. The approach follows a combined process of modelling the system and optimizing the major investment-related variables to maximize the financial yield of the investment. The consideration of a hybrid solar-wind supply presents significant potential for cost reduction. The investment variables concern the location of the solar-wind plant and its sizing. The system is demand driven, meaning that its primary aim is to fully satisfy the energy demand of the customers. Therefore, the model is a practical tool in the hands of an investor to assess and optimize, in financial terms, an investment aiming at covering real energy demand. Optimization is performed subject to various technical and logical constraints. The relation between the maximum power obtained from the individual systems and from the hybrid system as a whole, together with the net present value of the system, is highlighted.
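The financial core of such an approach is the net present value of the discounted cash flows. A minimal sketch with hypothetical cash flows and discount rate follows; the siting and sizing optimization of the paper is not reproduced.

```python
# Minimal NPV sketch for a hybrid solar-wind investment, assuming a fixed
# discount rate and hypothetical yearly net cash flows.
def npv(rate, initial_cost, cash_flows):
    """Net present value: discounted yearly cash flows minus the up-front cost."""
    return -initial_cost + sum(cf / (1.0 + rate) ** (t + 1)
                               for t, cf in enumerate(cash_flows))

# Hypothetical numbers: 1 MEUR up-front, 20 years of 120 kEUR net revenue, 6% rate.
print(f"NPV = {npv(0.06, 1_000_000, [120_000] * 20):,.0f} EUR")
```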
Development and application of a hillslope hydrologic model
Blain, C.A.; Milly, P.C.D.
1991-01-01
A vertically integrated two-dimensional lateral flow model of soil moisture has been developed. Derivation of the governing equation is based on a physical interpretation of hillslope processes. The lateral subsurface-flow model permits variability of precipitation and evapotranspiration, and allows arbitrary specification of soil-moisture retention properties. Variable slope, soil thickness, and saturation are all accommodated. The numerical solution method, a Crank-Nicolson, finite-difference, upstream-weighted scheme, is simple and robust. A small catchment in northeastern Kansas is the subject of an application of the lateral subsurface-flow model. Calibration of the model using observed discharge provides estimates of the active porosity (0.1 cm3/cm3) and of the saturated horizontal hydraulic conductivity (40 cm/hr). The latter figure is at least an order of magnitude greater than the vertical hydraulic conductivity associated with the silty clay loam soil matrix. The large value of hydraulic conductivity derived from the calibration is suggestive of macropore-dominated hillslope drainage. The corresponding value of active porosity agrees well with a published average value of the difference between total porosity and field capacity for a silty clay loam. © 1991.
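As a rough sketch of the numerical scheme mentioned above, the snippet below applies Crank-Nicolson steps to a simplified 1-D linear diffusion analogue of the vertically integrated lateral-flow equation. The grid, diffusivity, recharge and boundary conditions are placeholder choices, not the Kansas calibration, and the upstream weighting of the original scheme is omitted.

```python
import numpy as np

nx, dx, dt, D = 50, 10.0, 3600.0, 0.5          # cells, cell size (m), step (s), diffusivity (m^2/s)
h = np.zeros(nx)                                # soil-moisture "head" (m)
recharge = 1e-6 * np.ones(nx)                   # source term (m/s)

r = D * dt / (2.0 * dx ** 2)
# Crank-Nicolson: (I + r*L) h_new = (I - r*L) h_old + dt*recharge, L = discrete Laplacian
A = np.eye(nx) * (1 + 2 * r) + np.eye(nx, k=1) * (-r) + np.eye(nx, k=-1) * (-r)
B = np.eye(nx) * (1 - 2 * r) + np.eye(nx, k=1) * r + np.eye(nx, k=-1) * r
A[0, :], A[-1, :] = 0, 0; A[0, 0], A[-1, -1] = 1, 1   # fixed-head boundary rows
B[0, :], B[-1, :] = 0, 0; B[0, 0], B[-1, -1] = 1, 1

for _ in range(24):                             # one day of hourly steps
    rhs = B @ h + dt * recharge
    rhs[0], rhs[-1] = 0.0, 0.0                  # boundary heads held at zero
    h = np.linalg.solve(A, rhs)

print(f"max head after one day: {h.max():.4f} m")
```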
NASA Astrophysics Data System (ADS)
López-Vicente, M.; Navas, A.
2012-04-01
One important issue in agricultural management and hydrological research is the assessment of water stored during a rainfall event. In this study, the new Distributed Rainfall-Runoff (DR2) model (López-Vicente and Navas, 2012) is used to estimate the volume of actual available water (Waa) and the soil moisture status (SMS) in a set of rain-fed cereal fields (65 ha) located in the Central Spanish Pre-Pyrenees. This model makes the most of GIS techniques (ArcMapTM 10.0) and distinguishes five configurations of the upslope contributing area, infiltration processes and climatic parameters. Results are presented on a monthly basis. The study site has a relatively long history (since the 10th century) of human occupation, agricultural practices and water management. The landscape is representative of the typical former rain-fed Mediterranean agro-ecosystem where small patches of natural and anthropogenic areas are heterogeneously distributed. Climate is continental Mediterranean with a dry summer with rainfall events of high intensity (I30max, higher than 30 mm / h between May and October). Average annual precipitation was 520 mm for the reference period (1961-1990), whereas the average precipitation during the last ten years (2001-2010) was 16% lower (439 mm). Measured antecedent topsoil moisture presented the highest values in autumn (18.3 vol.%) and the lowest in summer (11.2 vol.%). Values of potential overland flow per raster cell (Q0) during maximum rainfall intensity varied notably in terms of time and space. When rainfall intensity is high (May, August, September and October), potential runoff was predicted along the surface of the crops and variability of Q0 was very low, whereas areas with no runoff production appeared when rainfall intensity was low and variability of Q0 values was high. A variance components analysis shows that values of Q0 are mainly explained by variations in the values of saturated hydraulic conductivity (76% of the variability of Q0) and, to a lesser extent, by the values of the antecedent topsoil moisture (23%) and the volumetric content of water of the soil at saturation (1%). Maps of monthly actual available water after maximum rainfall intensity presented a significant spatial variability, though values varied as a function of total rainfall depth and infiltration, and the five different scenarios of cumulative processes considered on the DR2 model. The minimum value of Waa for each month was well correlated with the average values of precipitation (Pearson's r = 0.86), whereas the mean values of Waa showed a close correlation with the values of maximum rainfall intensity (Pearson's r = 0.92). Maps of SMS and their values were reclassified in seven wetness-dryness categories. Predominant wet conditions occurred in May, September, October, November and December, whereas dry conditions appeared in February, March and July. Drying-up conditions were identified in January and June and wetting-up conditions occurred in April and August. The new DR2 model seems to be of interest to monitor humidity variations and trends in time and space in Mediterranean agricultural systems and can provide valuable information for sustainable soil and water resource management in agro-climatic analysis.
Color visualization for fluid flow prediction
NASA Technical Reports Server (NTRS)
Smith, R. E.; Speray, D. E.
1982-01-01
High-resolution raster scan color graphics allow variables to be presented as a continuum, in a color-coded picture that is referenced to a geometry such as a flow field grid or a boundary surface. Software is used to map a scalar variable such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.
[Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].
Vanegas, Jairo; Vásquez, Fabián
Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting by means of a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result could identify relevant cut-off points in data series. It is rarely used in health, so it is proposed as a tool for the evaluation of relevant public health indicators. For demonstrative purposes, data series regarding the mortality of children under 5 years of age in Costa Rica were used, comprising the period 1978-2008. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
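A toy sketch of the hinge ("spline") basis functions on which MARS builds, fitted here by ordinary least squares at a knot that is assumed known. A real MARS implementation additionally searches knot locations adaptively and prunes terms with cross-validation, which is omitted in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
# Piecewise-linear "truth" with a change of slope at x = 4, plus noise.
y = np.where(x < 4, 2.0 * x, 8.0 + 0.5 * (x - 4)) + rng.normal(0, 0.3, x.size)

knot = 4.0                                        # assumed knot location
X = np.column_stack([np.ones_like(x),
                     np.maximum(0, x - knot),     # right hinge: max(0, x - t)
                     np.maximum(0, knot - x)])    # left hinge:  max(0, t - x)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients (intercept, right hinge, left hinge):", np.round(coef, 2))
```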
Continuous-variable quantum key distribution in uniform fast-fading channels
NASA Astrophysics Data System (ADS)
Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano
2018-03-01
We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.
Applications of the generalized information processing system (GIPSY)
Moody, D.W.; Kays, Olaf
1972-01-01
The Generalized Information Processing System (GIPSY) stores and retrieves variable-field, variable-length records consisting of numeric data, textual data, or codes. A particularly noteworthy feature of GIPSY is its ability to search records for words, word stems, prefixes, and suffixes as well as for numeric values. Moreover, retrieved records may be printed in pre-defined formats or formatted as fixed-field, fixed-length records for direct input to other programs, which facilitates the exchange of data with other systems. At present there are some 22 applications of GIPSY falling in the general areas of bibliography, natural resources information, and management science. This report presents a description of each application including a sample input form, dictionary, and a typical formatted record. It is hoped that these examples will stimulate others to experiment with innovative uses of computer technology.
NASA Astrophysics Data System (ADS)
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro
2016-07-01
The communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and the distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and the position of the field samples are then determined by optimizing the singular-value behaviour of the relevant operator.
Study on creep of fiber reinforced ultra-high strength concrete based on strength
NASA Astrophysics Data System (ADS)
Peng, Wenjun; Wang, Tao
2018-04-01
To complement knowledge of the creep performance of ultra-high strength concrete, the long-term creep behaviour of fiber reinforced concrete was studied in this paper. The long-term creep process and its regularity in ultra-high strength concrete with 0.5% PVA fiber under the same axial compression were analyzed using concrete strength (C80/C100/C120) as a variable. The results show that the creep coefficient of ultra-high strength concrete decreases with increasing concrete strength. Compared with the ACI209R(92) and GL2000 models, the values predicted by ACI209R(92) are close to the experimental values, and a creep prediction model suitable for this experiment is proposed based on ACI209R(92).
NASA Astrophysics Data System (ADS)
Rodriguez-Galiano, Victor; Aragones, David; Caparros-Santiago, Jose A.; Navarro-Cerrillo, Rafael M.
2017-10-01
Land surface phenology (LSP) can improve the characterisation of forest areas and their change processes. The aim of this work was: i) to characterise the temporal dynamics in Mediterranean Pinus forests, and ii) to evaluate the potential of LSP for species discrimination. The different experiments were based on 679 mono-specific plots for the 5 native species on the Iberian Peninsula: P. sylvestris, P. pinea, P. halepensis, P. nigra and P. pinaster. The entire MODIS NDVI time series (2000-2016) of the MOD13Q1 product was used to characterise phenology. The following phenological parameters were extracted: the start, end and median days of the season, and the length of the season in days, as well as the base value, maximum value, amplitude and integrated value. Multi-temporal metrics were calculated to synthesise the inter-annual variability of the phenological parameters. The species were discriminated by the application of Random Forest (RF) classifiers from different subsets of variables: model 1) NDVI-smoothed time series, model 2) multi-temporal metrics of the phenological parameters, and model 3) multi-temporal metrics and the auxiliary physical variables (altitude, slope, aspect and distance to the coastline). Model 3 was the best, with an overall accuracy of 82% and a kappa coefficient of 0.77; its most important variables were elevation, coast distance, and the end and start days of the growing season. The species that presented the largest errors was P. nigra (kappa = 0.45), having locations with a behaviour similar to P. sylvestris or P. pinaster.
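A minimal sketch of the model-3 style classification, with a Random Forest fed phenological plus physical predictors. The data below are random placeholders standing in for the MODIS-derived multi-temporal metrics and the 679 plots, so the reported accuracy is meaningless; only the workflow is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(0, 2000, n),      # elevation (m)
    rng.uniform(0, 300, n),       # distance to coast (km)
    rng.uniform(50, 150, n),      # start of season (day of year)
    rng.uniform(250, 350, n),     # end of season (day of year)
])
y = rng.integers(0, 5, n)         # 5 pine species, random labels for illustration

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(rf, X, y, cv=5).mean():.2f}")
rf.fit(X, y)
print("feature importances:", np.round(rf.feature_importances_, 2))
```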
Thorne, James; Boynton, Ryan; Flint, Lorraine; Flint, Alan; N'goc Le, Thuy
2012-01-01
This paper outlines the production of 270-meter grid-scale maps for 14 climate and derivative hydrologic variables for a region that encompasses the State of California and all the streams that flow into it. The paper describes the Basin Characterization Model (BCM), a map-based, mechanistic model used to process the hydrological variables. Three historic and three future time periods of 30 years (1911–1940, 1941–1970, 1971–2000, 2010–2039, 2040–2069, and 2070–2099) were developed that summarize 180 years of monthly historic and future climate values. These comprise a standardized set of fine-scale climate data that were shared with 14 research groups, including the U.S. National Park Service and several University of California groups as part of this project. We present three analyses done with the outputs from the Basin Characterization Model: trends in hydrologic variables over baseline, the most recent 30-year period; a calibration and validation effort that uses measured discharge values from 139 streamgages and compares those to Basin Characterization Model-derived projections of discharge for the same basins; and an assessment of the trends of specific hydrological variables that links historical trend to projected future change under four future climate projections. Overall, increases in potential evapotranspiration dominate other influences in future hydrologic cycles. Increased potential evapotranspiration drives decreasing runoff even under forecasts with increased precipitation, and drives increased climatic water deficit, which may lead to conversion of dominant vegetation types across large parts of the study region as well as have implications for rain-fed agriculture. The potential evapotranspiration is driven by air temperatures, and the Basin Characterization Model permits it to be integrated with a water balance model that can be derived for landscapes and summarized by watershed. These results show the utility of using a process-based model with modules representing different hydrological pathways that can be inter-linked.
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
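Once the scatter in strength and load has been fitted to probability distributions, the reliability computation described above reduces to evaluating P(strength > load). A minimal sketch with hypothetical normal distributions, using the closed form for two normals and a Monte Carlo cross-check in place of the paper's numerical integration:

```python
import numpy as np
from scipy import stats

mu_R, sd_R = 500.0, 40.0     # hypothetical strength statistics (MPa)
mu_S, sd_S = 350.0, 60.0     # hypothetical load-induced stress statistics (MPa)

beta = (mu_R - mu_S) / np.sqrt(sd_R**2 + sd_S**2)   # reliability index for two normals
reliability = stats.norm.cdf(beta)
print(f"reliability index beta = {beta:.2f}, P(R > S) = {reliability:.4f}")

# Monte Carlo cross-check (what numerical integration effectively computes):
rng = np.random.default_rng(0)
R = rng.normal(mu_R, sd_R, 1_000_000)
S = rng.normal(mu_S, sd_S, 1_000_000)
print(f"Monte Carlo estimate     P(R > S) = {(R > S).mean():.4f}")
```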
NASA Astrophysics Data System (ADS)
Rodrigues, D. A.; Oliveira, A. S. M.; Specht, R. F.; Santana, R. M. C.
2014-05-01
The search for alternative materials that reduce costs in manufacturing processes, and the need to recycle materials that would normally be discarded, have aroused great interest and much research with regard to reducing material consumption, given its high cost and scarcity. Within this focus, this work aims to characterize a thermoplastic composite whose polymer matrix is polypropylene (PP) and whose disperse phase is "grinding sludge" (GS) from various grinding machining processes. After drying and sieving, the GS was mixed with the thermoplastic resin to prepare PP/GS composites formulated at 80/20, 70/30 and 60/40 w/w. The composite was injected in an injection mold in the form of test specimens. The specimens followed ASTM D638 and ASTM D256 for tensile and impact tests, respectively. Three processing parameters were varied: the GS content, the temperature and the injection rate. Each of these variables had three levels: L (low), M (medium) and H (high), and all possible combinations were made, totaling 27 processing conditions. The experimental conditions followed a statistical design obtained with the Statgraphics Centurion software, in which the effects of the variables are studied according to their statistical significance. SEM and EDS analyses were performed to obtain the characteristics of the grinding sludge (geometry and composition). Despite having been sieved, the geometry of the GS was still very rough, with varied shapes and sizes, and it even contained a small percentage of abrasive grains. The variable that most influenced the mechanical properties was the GS content. The values obtained for the maximum tensile strength did not decrease in the expected order, which may be an effect of the small number of samples tested. The results of the mechanical properties showed that the elasticity modulus increased with increasing GS content, whereas the elongation and impact strength decreased with increasing GS content.
The Bilinear Product Model of Hysteresis Phenomena
NASA Astrophysics Data System (ADS)
Kádár, György
1989-01-01
In ferromagnetic materials, non-reversible magnetization processes are represented by rather complex hysteresis curves. The phenomenological description of such curves needs the use of multi-valued, yet unambiguous, deterministic functions. The history-dependent calculation of consecutive Everett integrals of the two-variable Preisach function can account for the main features of hysteresis curves in uniaxial magnetic materials. The traditional Preisach model has recently been modified on the basis of population dynamics considerations, removing the non-real congruency property of the model. The Preisach function was proposed to be a product of two factors of distinct physical significance: a magnetization dependent function taking into account the overall magnetization state of the body and a bilinear form of a single-variable, magnetic field dependent, switching probability function. The most important statement of the bilinear product model is that the switching process of individual particles is to be separated from the book-keeping procedure of their states. This empirical model of hysteresis can easily be extended to other irreversible physical processes, such as first order phase transitions.
Modeling rainfall-runoff process using soft computing techniques
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa
2013-02-01
The rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measured rainfall and runoff values as the independent variables. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit of the models was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and a traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
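A minimal sketch of the ANN part of such a comparison: predict runoff from lagged rainfall and runoff, then report RMSE, MAE and R2 as in the study. The series is synthetic, and the ANFIS, GEP and MLR models are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 3.0, 1500)
runoff = np.convolve(rain, [0.3, 0.4, 0.2, 0.1], mode="same") + rng.normal(0, 1, 1500)

X = np.column_stack([rain[2:-1], rain[1:-2], runoff[1:-2]])   # R(t), R(t-1), Q(t-1)
y = runoff[2:-1]                                              # target Q(t)
split = 1000
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = mean_squared_error(y[split:], pred) ** 0.5
print(f"RMSE={rmse:.2f}  MAE={mean_absolute_error(y[split:], pred):.2f}  "
      f"R2={r2_score(y[split:], pred):.3f}")
```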
Eum, Hyung-Il; Gachon, Philippe; Laprise, René
2016-01-01
This study examined the impact of model biases on climate change signals for daily precipitation and for minimum and maximum temperatures. Through the use of multiple climate scenarios from 12 regional climate model simulations, the ensemble mean, and three synthetic simulations generated by a weighting procedure, we investigated intermodel seasonal climate change signals between current and future periods, for both median and extreme precipitation/temperature values. A significant dependence of seasonal climate change signals on the model biases over southern Québec in Canada was detected for temperatures, but not for precipitation. This suggests that the regional temperature change signal is affected by local processes. Seasonally, model bias affects future mean and extreme values in winter and summer. In addition, potentially large increases in future extremes of temperature and precipitation values were projected. For three synthetic scenarios, systematically less bias and a narrow range of mean change for all variables were projected compared to those of climate model simulations. In addition, synthetic scenarios were found to better capture the spatial variability of extreme cold temperatures than the ensemble mean scenario. Finally, these results indicate that the synthetic scenarios have greater potential to reduce the uncertainty of future climate projections and capture the spatial variability of extreme climate events.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models considering the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. In order to verify the rationality and validity of the models reported in the reference, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage state, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between the mean square error and the two boundary values from which the minimum mean square error was obtained. Compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
Ostrich specific semen diluent and sperm motility characteristics during in vitro storage.
Smith, A M J; Bonato, M; Dzama, K; Malecki, I A; Cloete, S W P
2018-06-01
The dilution of semen is a very important initial process for semen processing and evaluation, storage and preservation in vitro and efficient artificial insemination. The aim of the study was to evaluate the effect of two synthetic diluents (OS1 and OS2) on ostrich sperm motility parameters during in vitro storage. Formulation of OS1 was based on macro minerals (Na, K, P, Ca, Mg) and OS2 on the further addition of micro minerals (Se and Zn), based on mineral concentration determined in the ostrich seminal plasma (SP). Sperm motility was evaluated at different processing stages (neat, after dilution, during storage and after storage) by measuring several sperm motility variables using the Sperm Class Analyzer® (SCA). Processing (dilution, cooling and storage) of semen for in vitro storage purposes decreased the values for all sperm motility variables measured. The percentage motile (MOT) and progressive motile (PMOT) sperm decreased 20% to 30% during 24 h of storage, independent of diluent type. Quality of sperm swim (LIN, STR and WOB), however, was sustained during the longer storage periods (48 h) with the OS2 diluent modified with Se and Zn additions. Quality of sperm swim with use of OS1 was 6% to 8% less for the LIN, STR, and WOB variables. Male fitted as a fixed effect accounted for >60% of the variation for certain sperm motility variables (PMOT, MOT, VCL, VSL, VAP and ALH) evaluated at different processing stages. Semen from specific males had sustained sperm motility characteristics to a greater extent than that of other males during the 24-h storage period. Copyright © 2018 Elsevier B.V. All rights reserved.
The closure problem for turbulence in meteorology and oceanography
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1985-01-01
The dependent variables used for computer-based meteorological predictions and in plans for oceanographic predictions are wave number and frequency filtered values that retain only scales resolvable by the model. Scales unresolvable by the grid in use become 'turbulence'. Whether or not properly processed data are used for initial values is important, especially for sparse data. Fickian diffusion with a constant eddy diffusion is used as a closure for many of the present models. A physically realistic closure based on more modern turbulence concepts, especially one with a reverse cascade at the right times and places, could help improve predictions.
An Extension to the Kalman Filter for an Improved Detection of Unknown Behavior
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Narasimhan, Sriram
2005-01-01
The use of a Kalman filter (KF) interferes with fault detection algorithms based on the residual between estimated and measured variables, since the measured values are used to update the estimates. This feedback results in the estimates being pulled closer to the measured values, influencing the residuals in the process. Here we present a fault detection scheme for systems that are being tracked by a KF. Our approach combines an open-loop prediction over an adaptive window with an information-based measure of the deviation of the Kalman estimate from the prediction to improve fault detection.
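A minimal sketch of the idea on a scalar random walk: the Kalman filter keeps tracking through the fault, while the residual against a windowed open-loop prediction exposes it. The window length, noise levels and threshold are placeholder choices, not the paper's adaptive window or information-based measure.

```python
import numpy as np

rng = np.random.default_rng(0)
T, q, r = 200, 0.01, 0.5                  # steps, process variance, measurement variance
truth = np.cumsum(rng.normal(0, q**0.5, T))
truth[120:] += 3.0                        # injected fault: sudden offset
z = truth + rng.normal(0, r**0.5, T)      # measurements

x, P = 0.0, 1.0                           # KF state estimate and variance
x_open, window = 0.0, 20
for t in range(T):
    # Kalman filter: predict (random-walk model) then update with z[t]
    P += q
    K = P / (P + r)
    x = x + K * (z[t] - x)
    P = (1 - K) * P
    # open-loop prediction: freeze the estimate at the start of each window
    if t % window == 0:
        x_open = x
    residual = z[t] - x_open              # deviation from the open-loop prediction
    if abs(residual) > 4 * (r ** 0.5):    # crude fixed threshold for illustration
        print(f"possible fault detected at step {t}, residual {residual:.2f}")
        break
```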
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amoroso, J.; Peeler, D.; Edwards, T.
2012-05-11
A recommendation to eliminate all characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification sample was made by a Six-Sigma team chartered to eliminate non-value-added activities for the Defense Waste Processing Facility (DWPF) sludge batch qualification program and is documented in the report SS-PIP-2006-00030. That recommendation was supported through a technical data review by the Savannah River National Laboratory (SRNL) and is documented in the memorandums SRNL-PSE-2007-00079 and SRNL-PSE-2007-00080. At the time of writing those memorandums, the DWPF was processing sludge-only waste but has since transitioned to a coupled operation (sludge and salt). The SRNL was recently tasked to perform a similar data review relevant to coupled operations and re-evaluate the previous recommendations. This report evaluates the validity of eliminating the characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification samples based on sludge-only and coupled operations. The pour stream sample has confirmed the DWPF's ability to produce an acceptable waste form from Slurry Mix Evaporator (SME) blending and product composition/durability predictions for the previous sixteen years, but ultimately the pour stream analysis has added minimal value to the DWPF's waste qualification strategy. Similarly, the information gained from the glass fabrication and PCT of the sludge batch qualification sample was determined to add minimal value to the waste qualification strategy since that sample is routinely not representative of the waste composition ultimately processed at the DWPF due to blending and salt processing considerations. Moreover, the qualification process has repeatedly confirmed minimal differences in glass behavior from actual radioactive waste to glasses fabricated from simulants or batch chemicals. In contrast, the variability study has significantly added value to the DWPF's qualification strategy. The variability study has evolved to become the primary aspect of the DWPF's compliance strategy as it has been shown to be versatile and capable of adapting to the DWPF's various and diverse waste streams and blending strategies. The variability study, which aims to ensure durability requirements and the PCT and chemical composition correlations are valid for the compositional region to be processed at the DWPF, must continue to be performed. Due to the importance of the variability study and its place in the DWPF's qualification strategy, it will also be discussed in this report. An analysis of historical data and Production Records indicated that the recommendation of the Six Sigma team to eliminate all characterization of pour stream glass samples and the glass fabrication and PCT performed with the qualification glass does not compromise the DWPF's current compliance plan. Furthermore, the DWPF should continue to produce an acceptable waste form following the remaining elements of the Glass Product Control Program, regardless of a sludge-only or coupled operations strategy. If the DWPF does decide to eliminate the characterization of pour stream samples, pour stream samples should continue to be collected for archival reasons, which would allow testing to be performed should any issues arise or new repository test methods be developed.
A Sensory Material Approach for Reducing Variability in Additively Manufactured Metal Parts.
Franco, B E; Ma, J; Loveall, B; Tapia, G A; Karayagiz, K; Liu, J; Elwany, A; Arroyave, R; Karaman, I
2017-06-15
Despite the recent growth in interest for metal additive manufacturing (AM) in the biomedical and aerospace industries, variability in the performance, composition, and microstructure of AM parts remains a major impediment to its widespread adoption. The underlying physical mechanisms, which cause variability, as well as the scale and nature of variability are not well understood, and current methods are ineffective at capturing these details. Here, a Nickel-Titanium alloy is used as a sensory material in order to quantitatively, and rather rapidly, observe compositional and/or microstructural variability in selective laser melting manufactured parts; thereby providing a means to evaluate the role of process parameters on the variability. We perform detailed microstructural investigations using transmission electron microscopy at various locations to reveal the origins of microstructural variability in this sensory material. This approach helped reveal how reducing the distance between adjacent laser scans below a critical value greatly reduces both the in-sample and sample-to-sample variability. Microstructural investigations revealed that when the laser scan distance is wide, there is an inhomogeneity in subgrain size, precipitate distribution, and dislocation density in the microstructure, responsible for the observed variability. These results provide an important first step towards understanding the nature of variability in additively manufactured parts.
Nagengast, Benjamin; Trautwein, Ulrich; Kelava, Augustin; Lüdtke, Oliver
2013-05-01
Historically, expectancy-value models of motivation assumed a synergistic relation between expectancy and value: motivation is high only when both expectancy and value are high. Motivational processes were studied from a within-person perspective, with expectancies and values being assessed or experimentally manipulated across multiple domains and the focus being placed on intraindividual differences. In contrast, contemporary expectancy-value models in educational psychology concentrate almost exclusively on linear effects of expectancy and value on motivational outcomes, with a focus on between-person differences. Recent advances in latent variable methodology allow both issues to be addressed in observational studies. Using the expectancy-value model of homework motivation as a theoretical framework, this study estimated multilevel structural equation models with latent interactions in a sample of 511 secondary school students and found synergistic effects between domain-specific homework expectancy and homework value in predicting homework engagement in 6 subjects. This approach not only brings the "×" back into expectancy-value theory but also reestablishes the within-person perspective as the appropriate level of analysis for latent expectancy-value models.
Reinforcement Learning Based Web Service Compositions for Mobile Business
NASA Astrophysics Data System (ADS)
Zhou, Juan; Chen, Shouming
In this paper, we propose a new solution to Reactive Web Service Composition by modeling it with Reinforcement Learning and introducing modified (alterable) QoS variables into the model as elements of the Markov Decision Process tuple. Moreover, we give an example of Reactive-WSC-based mobile banking to demonstrate the intrinsic capability of the proposed solution to obtain the optimized service composition, characterized by (alterable) target QoS variable sets with optimized values. Consequently, we conclude that the solution has decent potential for boosting customer experience and quality of service in Web Services, and in applications across the whole electronic commerce and business sector.
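A toy Q-learning sketch of selecting one concrete service per abstract task with a QoS-based reward; the state/action structure, QoS numbers and weights are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_candidates = 3, 4              # 3 abstract tasks, 4 concrete services each
resp_time = rng.uniform(0.1, 1.0, (n_steps, n_candidates))
reliability = rng.uniform(0.8, 1.0, (n_steps, n_candidates))
reward = -resp_time + 2.0 * reliability   # simple weighted QoS utility (hypothetical)

Q = np.zeros((n_steps, n_candidates))
alpha, gamma, eps = 0.1, 0.9, 0.1
for _ in range(5000):
    for s in range(n_steps):
        # epsilon-greedy choice of a concrete service for task s
        a = rng.integers(n_candidates) if rng.random() < eps else Q[s].argmax()
        target = reward[s, a] + (gamma * Q[s + 1].max() if s + 1 < n_steps else 0.0)
        Q[s, a] += alpha * (target - Q[s, a])

print("selected services per task:", Q.argmax(axis=1))
```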
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
Folguera, Guillermo; Bastías, Daniel A; Caers, Jelle; Rojas, José M; Piulachs, Maria-Dolors; Bellés, Xavier; Bozinovic, Francisco
2011-07-01
Global climate change is one of the greatest threats to biodiversity; one of the most important effects is the increase in the mean earth surface temperature. However, another but poorly studied main characteristic of global change appears to be an increase in temperature variability. Most of the current analyses of global change have focused on mean values, paying less attention to the role of the fluctuations of environmental variables. We experimentally tested the effects of environmental temperature variability on characteristics associated with fitness (body mass balance, growth rate, and survival), metabolic rate (VCO(2)) and molecular traits (heat shock protein expression, Hsp70), in an ectotherm, the terrestrial woodlouse Porcellio laevis. Our general hypotheses are that higher values of thermal amplitude may directly affect life-history traits, increasing metabolic cost and stress responses. At first, results supported our hypotheses, showing a diversity of responses among characters to the experimental thermal treatments. We emphasize that knowledge about the cellular and physiological mechanisms by which animals cope with environmental changes is essential to understand the impact of mean climatic change and variability. Also, we consider that studies that incorporate only mean temperatures to predict the life-history, ecological and evolutionary impact of global temperature changes face important problems in predicting the diversity of responses of the organism. This is because the analysis ignores the complexity and details of the molecular and physiological processes by which animals cope with environmental variability, as well as the life-history and demographic consequences of such variability. Copyright © 2011 Elsevier Inc. All rights reserved.
Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation sigma(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation sigma(R) on the average value of the wages with a scaling exponent beta approximately 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation sigma(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of sigma(R) on the average payroll with a scaling exponent beta approximately -0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
Two stochastic models useful in petroleum exploration
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1972-01-01
A model of the petroleum exploration process that tests empirically the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of areal extent, the geographic location, and the shape of oil deposits.
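The "sampling proportional to size, without replacement" hypothesis can be illustrated with a short simulation in which larger deposits tend to be discovered earlier. The field-size distribution below is hypothetical, and the paper's maximum likelihood estimation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=30)   # hypothetical deposit sizes in a basin

remaining = list(range(len(sizes)))
discovery_order = []
while remaining:
    p = sizes[remaining] / sizes[remaining].sum()      # discovery probability proportional to size
    pick = rng.choice(remaining, p=p)                  # sample without replacement
    discovery_order.append(pick)
    remaining.remove(pick)

print("sizes of the first five discoveries:",
      np.round(sizes[discovery_order[:5]], 1))
print("sizes of the last five discoveries: ",
      np.round(sizes[discovery_order[-5:]], 1))
```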
Spatial variability of specific surface area of arable soils in Poland
NASA Astrophysics Data System (ADS)
Sokolowski, S.; Sokolowska, Z.; Usowicz, B.
2012-04-01
Evaluation of soil spatial variability is an important issue in agrophysics and in environmental research. Knowledge of spatial variability of physico-chemical properties enables a better understanding of several processes that take place in soils. In particular, it is well known that mineralogical, organic, as well as particle-size compositions of soils vary in a wide range. Specific surface area of soils is one of the most significant characteristics of soils. It can be not only related to the type of soil, mainly to the content of clay, but also largely determines several physical and chemical properties of soils and is often used as a controlling factor in numerous biological processes. Knowledge of the specific surface area is necessary in calculating certain basic soil characteristics, such as the dielectric permeability of soil, water retention curve, water transport in the soil, cation exchange capacity and pesticide adsorption. The aim of the present study is two-fold. First, we carry out recognition of soil total specific surface area patterns in the territory of Poland and perform the investigation of features of its spatial variability. Next, semivariograms and fractal analysis are used to characterize and compare the spatial variability of soil specific surface area in two soil horizons (A and B). Specific surface area of about 1000 samples was determined by analyzing water vapor adsorption isotherms via the BET method. The collected data of the values of specific surface area of mineral soil representatives for the territory of Poland were then used to describe its spatial variability by employing geostatistical techniques and fractal theory. Using the data calculated for some selected points within the entire territory and along selected directions, the values of semivariance were determined. The slope of the regression line of the log-log plot of semi-variance versus the distance was used to estimate the fractal dimension, D. Specific surface area in A and B horizons was space-dependent, with the range of spatial dependence of about 2.5°. Variogram surfaces showed anisotropy of the specific surface area in both horizons with a trend toward the W to E directions. The smallest fractal dimensions were obtained for W to E directions and the highest values - for S to N directions. * The work was financially supported in part by the ESA Programme for European Cooperating States (PECS), No.98084 "SWEX-R, Soil Water and Energy Exchange/Research", AO3275.
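A minimal sketch of the variogram-based fractal-dimension estimate on a synthetic 1-D transect. The study works with directional 2-D semivariograms of measured specific surface area; the profile relation D = 2 - slope/2 is used here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 1, 500))          # synthetic spatially correlated transect

lags = np.arange(1, 30)
gamma = np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2) for h in lags])   # empirical semivariance

slope, intercept = np.polyfit(np.log(lags), np.log(gamma), 1)            # log-log regression
D = 2.0 - slope / 2.0                          # fractal dimension of a 1-D profile
print(f"log-log variogram slope = {slope:.2f}, fractal dimension D = {D:.2f}")
```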
Raut, Sangeeta; Raut, Smita; Sharma, Manisha; Srivastav, Chaitanya; Adhikari, Basudam; Sen, Sudip Kumar
2015-09-01
In the present study, artificial neural network (ANN) modelling coupled with a particle swarm optimization (PSO) algorithm was used to optimize the process variables for enhanced low density polyethylene (LDPE) degradation by Curvularia lunata SG1. In the non-linear ANN model, temperature, pH, contact time and agitation were used as input variables and polyethylene bio-degradation as the output variable. On application of PSO to the ANN model, the optimum values of the process parameters were as follows: pH = 7.6, temperature = 37.97 °C, agitation rate = 190.48 rpm and incubation time = 261.95 days. A comparison between the model results and experimental data gave a high correlation coefficient. Significant enhancement of LDPE bio-degradation using C. lunata SG1, by about 48%, was achieved under optimum conditions. Thus, the novelty of the work lies in the application of the combined ANN-PSO optimization strategy to enhance the bio-degradation of LDPE.
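A minimal ANN-PSO-style sketch: a quadratic surrogate stands in for the trained ANN, and a small hand-rolled particle swarm searches the process-variable space. The surrogate, its optimum, the bounds and the PSO constants are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def surrogate(x):
    # Hypothetical degradation (%) response; peaks near the abstract's reported optimum.
    opt = np.array([7.6, 38.0, 190.0, 262.0])      # pH, temperature, rpm, days
    scale = np.array([1.0, 5.0, 30.0, 50.0])
    return 48.0 - np.sum(((x - opt) / scale) ** 2, axis=-1)

rng = np.random.default_rng(0)
lo = np.array([5.0, 25.0, 100.0, 100.0])
hi = np.array([9.0, 45.0, 250.0, 350.0])
pos = rng.uniform(lo, hi, (30, 4)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_val = surrogate(pos)
gbest = pbest[pbest_val.argmax()]

for _ in range(200):
    r1, r2 = rng.random((2, 30, 4))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = surrogate(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()]

print("PSO optimum (pH, temperature, rpm, days):", np.round(gbest, 1))
```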
Application of classification-tree methods to identify nitrate sources in ground water
Spruill, T.B.; Showers, W.J.; Howe, S.S.
2002-01-01
A study was conducted to determine if nitrate sources in ground water (fertilizer on crops, fertilizer on golf courses, irrigation spray from hog (Sus scrofa) wastes, and leachate from poultry litter and septic systems) could be classified with 80% or greater success. Two statistical classification-tree models were devised from 48 water samples containing nitrate from five source categories. Model I was constructed by evaluating 32 variables and selecting four primary predictor variables (δ15N, nitrate to ammonia ratio, sodium to potassium ratio, and zinc) to identify nitrate sources. A δ15N value of nitrate plus potassium 18.2 indicated inorganic or soil organic N. A nitrate to ammonia ratio 575 indicated nitrate from golf courses. A sodium to potassium ratio 3.2 indicated spray or poultry wastes. A value for zinc 2.8 indicated poultry wastes. Model 2 was devised by using all variables except δ15N. This model also included four variables (sodium plus potassium, nitrate to ammonia ratio, calcium to magnesium ratio, and sodium to potassium ratio) to distinguish categories. Both models were able to distinguish all five source categories with better than 80% overall success and with 71 to 100% success in individual categories using the learning samples. Seventeen water samples that were not used in model development were tested using Model 2 for three categories, and all were correctly classified. Classification-tree models show great potential in identifying sources of contamination and variables important in the source-identification process.
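A minimal sketch of a classification tree on chemistry-type predictors in the spirit of Model 2, using scikit-learn. The samples and labels are random placeholders, so the printed splits do not correspond to the study's thresholds.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 30, n),     # sodium plus potassium (mg/L)
    rng.uniform(0, 700, n),    # nitrate to ammonia ratio
    rng.uniform(0, 10, n),     # calcium to magnesium ratio
    rng.uniform(0, 6, n),      # sodium to potassium ratio
])
y = rng.integers(0, 5, n)      # 5 nitrate source categories, random for illustration

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["Na+K", "NO3/NH3", "Ca/Mg", "Na/K"]))
```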
Optimization of porthole die geometrical variables by Taguchi method
NASA Astrophysics Data System (ADS)
Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.
2017-10-01
Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is affected strongly by the quality of the longitudinal and transversal seam welds. According to that, the die geometry must be designed correctly and the process parameters must be selected properly to achieve the desired product quality. In this study, numerical 3D simulations have been created and run to investigate the role of various geometrical variables on punch load and maximum pressure inside the welding chamber. These are important outputs to take into account affecting, respectively, the necessary capacity of the extrusion press and the quality of the welding lines. The Taguchi technique has been used to reduce the number of the required numerical simulations necessary for considering the influence of twelve different geometric variables. Moreover, the Analysis of variance (ANOVA) has been implemented to individually analyze the effect of each input parameter on the two responses. Then, the methodology has been utilized to determine the optimal process configuration individually optimizing the two investigated process outputs. Finally, the responses of the optimized parameters have been verified through finite element simulations approximating the predicted value closely. This study shows the feasibility of the Taguchi technique for predicting performance, optimization and therefore for improving the design of a porthole extrusion process.
El-Naggar, Noura El-Ahmady; El-Shweihy, Nancy M; El-Ewasy, Sara M
2016-09-20
Due to the broad range of clinical and industrial applications of cholesterol oxidase, the isolation and screening of bacterial strains producing an extracellular form of cholesterol oxidase is of great importance. One hundred and thirty actinomycete isolates were screened for their cholesterol oxidase activity. Among them, a potential culture, strain NEAE-42, displayed the highest extracellular cholesterol oxidase activity. It was selected and identified as Streptomyces cavourensis strain NEAE-42. The optimization of different process parameters for cholesterol oxidase production by Streptomyces cavourensis strain NEAE-42 using a Plackett-Burman experimental design and response surface methodology was carried out. Fifteen variables were screened using the Plackett-Burman experimental design. Cholesterol, initial pH and (NH4)2SO4 were the most significant positive independent variables affecting cholesterol oxidase production. A central composite design was chosen to elucidate the optimal concentrations of the selected process variables for cholesterol oxidase production. Cholesterol oxidase production by Streptomyces cavourensis strain NEAE-42 after the optimization process was 20.521 U/mL, a 6.19-fold increase over the result obtained with the basal medium before the Plackett-Burman screening process (3.31 U/mL). The cholesterol oxidase production level obtained in this study (20.521 U/mL) by the statistical method is higher than many of the reported values.
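The response-surface step can be sketched as fitting a full quadratic model to a central composite design in coded units and solving for its stationary point. The design responses below are synthetic, not the experimental cholesterol oxidase yields.

```python
import numpy as np

rng = np.random.default_rng(0)
factorial = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
axial = np.array([[l * (i == k) for i in range(3)] for k in range(3) for l in (-1.68, 1.68)])
center = np.zeros((6, 3))
pts = np.vstack([factorial, axial, center])            # 20-run central composite design

# Synthetic response with a maximum near the center of the coded region.
y = 18 - 2 * np.sum(pts**2, axis=1) + 0.5 * pts[:, 0] * pts[:, 1]
y += rng.normal(0, 0.3, len(pts))

x1, x2, x3 = pts.T                                     # full second-order model terms
X = np.column_stack([np.ones(len(pts)), x1, x2, x3, x1**2, x2**2, x3**2,
                     x1*x2, x1*x3, x2*x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

b = coef[1:4]                                          # linear coefficients
B = np.array([[coef[4],   coef[7]/2, coef[8]/2],
              [coef[7]/2, coef[5],   coef[9]/2],
              [coef[8]/2, coef[9]/2, coef[6]]])        # quadratic coefficient matrix
stationary = -0.5 * np.linalg.solve(B, b)              # predicted optimum in coded units
print("stationary point (coded units):", np.round(stationary, 2))
```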
Harwell, Mark A.; Gentile, John H.; Cummins, Kenneth W.; Highsmith, Raymond C.; Hilborn, Ray; McRoy, C. Peter; Parrish, Julia; Weingartner, Thomas
2010-01-01
Prince William Sound (PWS) is a semi-enclosed fjord estuary on the coast of Alaska adjoining the northern Gulf of Alaska (GOA). PWS is highly productive and diverse, with primary productivity strongly coupled to nutrient dynamics driven by variability in the climate and oceanography of the GOA and North Pacific Ocean. The pelagic and nearshore primary productivity supports a complex and diverse trophic structure, including large populations of forage and large fish that support many species of marine birds and mammals. High intra-annual, inter-annual, and interdecadal variability in climatic and oceanographic processes drives high variability in the biological populations. A risk-based conceptual ecosystem model (CEM) is presented describing the natural processes, anthropogenic drivers, and resultant stressors that affect PWS, including stressors caused by the Great Alaska Earthquake of 1964 and the Exxon Valdez oil spill of 1989. A trophodynamic model incorporating PWS valued ecosystem components is integrated into the CEM. By representing the relative strengths of driver/stressors/effects, the CEM graphically demonstrates the fundamental dynamics of the PWS ecosystem, the natural forces that control the ecological condition of the Sound, and the relative contribution of natural processes and human activities to the health of the ecosystem. The CEM illustrates the dominance of natural processes in shaping the structure and functioning of the GOA and PWS ecosystems. PMID:20862192
NASA Astrophysics Data System (ADS)
Rajagukguk, J. R.
2018-01-01
Plastic has become an important component of modern life. It has replaced wood and metal in many applications, given advantages such as being light and strong, corrosion resistant, transparent, easy to color and a good insulator. The research used quantitative and engineering research methods. The objective was to convert plastic waste into something more economical and to preserve the surrounding environment. The renewable fuel and lubricant variables simultaneously had a significant influence on environmental sustainability: the calculated F value exceeded the tabulated value (62.101 > 4.737) with a significance of 0.000 < 0.05, so H0 was rejected and Ha accepted, meaning that the renewable fuel and lubricant variables have a very large effect on the sustainable environment. The correlation coefficient of 0.941 (94.1%) indicates a very strong relationship between the renewable fuel and lubricant variables and the sustainable environment. Processing the plastic waste by the pyrolysis method produces liquid hydrocarbons containing compounds similar to those of crude oil; the renewable fuels obtained from the calculations are CO2 + H2O + C1-C4 + residual substances. The plastic waste can then be processed by an isomerization process with a catalyst into lubricating oil, for which the chemical calculation gives CO2, H2O, C18H21 and residues.
Harwell, Mark A; Gentile, John H; Cummins, Kenneth W; Highsmith, Raymond C; Hilborn, Ray; McRoy, C Peter; Parrish, Julia; Weingartner, Thomas
2010-07-01
Prince William Sound (PWS) is a semi-enclosed fjord estuary on the coast of Alaska adjoining the northern Gulf of Alaska (GOA). PWS is highly productive and diverse, with primary productivity strongly coupled to nutrient dynamics driven by variability in the climate and oceanography of the GOA and North Pacific Ocean. The pelagic and nearshore primary productivity supports a complex and diverse trophic structure, including large populations of forage and large fish that support many species of marine birds and mammals. High intra-annual, inter-annual, and interdecadal variability in climatic and oceanographic processes drives high variability in the biological populations. A risk-based conceptual ecosystem model (CEM) is presented describing the natural processes, anthropogenic drivers, and resultant stressors that affect PWS, including stressors caused by the Great Alaska Earthquake of 1964 and the Exxon Valdez oil spill of 1989. A trophodynamic model incorporating PWS valued ecosystem components is integrated into the CEM. By representing the relative strengths of drivers/stressors/effects, the CEM graphically demonstrates the fundamental dynamics of the PWS ecosystem, the natural forces that control the ecological condition of the Sound, and the relative contribution of natural processes and human activities to the health of the ecosystem. The CEM illustrates the dominance of natural processes in shaping the structure and functioning of the GOA and PWS ecosystems.
Enhancement of real-time EPICS IOC PV management for the data archiving system
NASA Astrophysics Data System (ADS)
Kim, Jae-Ha
2015-10-01
For the operation of a 100-MeV linear proton accelerator, the major driving values and experimental data need to be archived. According to the experimental conditions, different data are required, so functions that can add new data and delete data in real time need to be implemented. In an experimental physics and industrial control system (EPICS) input output controller (IOC), the values of process variables (PVs) are matched with the driving values and data. The PV values are archived in text file format by using the channel archiver. There is no need to create a database (DB) server, only a large hard disk. Through the web, the archived data can be loaded, and new PV values can be archived without stopping the archive engine. The details of the implementation of a data archiving system with the channel archiver are presented, and some preliminary results are reported.
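As a rough illustration of the PV layer that such an archiving system sits on, the snippet below reads, writes and monitors EPICS process variables from Python. The PV names are invented placeholders, and pyepics is used only as one common client library; the abstract does not state which client tools the facility actually uses.

```python
# Hedged sketch: reading, writing and monitoring EPICS process variables (PVs).
# The PV names are invented placeholders, not the facility's naming scheme.
from epics import caget, caput, PV

beam_current = caget("LINAC:BEAM:CURRENT")       # one-shot read of a PV value
print("beam current:", beam_current)

caput("LINAC:ARCHIVE:ENABLE", 1)                 # one-shot write to a control PV

# Subscribe to value changes, e.g. to feed an archiving callback
def on_change(pvname=None, value=None, **kw):
    print(f"{pvname} -> {value}")

pv = PV("LINAC:RF:FORWARD_POWER", callback=on_change)
```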
The Conforming Brain and Deontological Resolve
Pincus, Melanie; LaViers, Lisa; Prietula, Michael J.; Berns, Gregory
2014-01-01
Our personal values are subject to forces of social influence. Deontological resolve captures how strongly one relies on absolute rules of right and wrong in the representation of one's personal values and may predict willingness to modify one's values in the presence of social influence. Using fMRI, we found that a neurobiological metric for deontological resolve based on relative activity in the ventrolateral prefrontal cortex (VLPFC) during the passive processing of sacred values predicted individual differences in conformity. Individuals with stronger deontological resolve, as measured by greater VLPFC activity, displayed lower levels of conformity. We also tested whether responsiveness to social reward, as measured by ventral striatal activity during social feedback, predicted variability in conformist behavior across individuals but found no significant relationship. From these results we conclude that unwillingness to conform to others' values is associated with a strong neurobiological representation of social rules. PMID:25170989
Zhang, Yan; Hawk, Skyler T.; Zhang, Xiaohui; Zhao, Hongyu
2016-01-01
Professional identity is a key issue spanning the entirety of teachers’ career development. Despite the abundance of existing research examining professional identity, its link with occupation-related behavior at the primary career stage (i.e., GPA in preservice education) and the potential process that underlies this association is still not fully understood. This study explored the professional identity of Chinese preservice teachers, and its links with task value belief, intrinsic learning motivation, extrinsic learning motivation, and performance in the education program. Grade-point average (GPA) of courses (both subject and pedagogy courses) was examined as an indicator of performance, and questionnaires were used to measure the remaining variables. Data from 606 preservice teachers in the first 3 years of a teacher-training program indicated that: (1) variables in this research were all significantly correlated with each other, except the correlation between intrinsic learning motivation and program performance; (2) professional identity was positively linked to task value belief, intrinsic and extrinsic learning motivations, and program performance in a structural equation model (SEM); (3) task value belief was positively linked to intrinsic and extrinsic learning motivation; (4) higher extrinsic (but not intrinsic) learning motivation was associated with increased program performance; and (5) task value belief and extrinsic learning motivation were significant mediators in the model. PMID:27199810
Zhang, Yan; Hawk, Skyler T; Zhang, Xiaohui; Zhao, Hongyu
2016-01-01
Professional identity is a key issue spanning the entirety of teachers' career development. Despite the abundance of existing research examining professional identity, its link with occupation-related behavior at the primary career stage (i.e., GPA in preservice education) and the potential process that underlies this association is still not fully understood. This study explored the professional identity of Chinese preservice teachers, and its links with task value belief, intrinsic learning motivation, extrinsic learning motivation, and performance in the education program. Grade-point average (GPA) of courses (both subject and pedagogy courses) was examined as an indicator of performance, and questionnaires were used to measure the remaining variables. Data from 606 preservice teachers in the first 3 years of a teacher-training program indicated that: (1) variables in this research were all significantly correlated with each other, except the correlation between intrinsic learning motivation and program performance; (2) professional identity was positively linked to task value belief, intrinsic and extrinsic learning motivations, and program performance in a structural equation model (SEM); (3) task value belief was positively linked to intrinsic and extrinsic learning motivation; (4) higher extrinsic (but not intrinsic) learning motivation was associated with increased program performance; and (5) task value belief and extrinsic learning motivation were significant mediators in the model.
Co-extrusion of food grains-banana pulp for nutritious snacks: optimization of process variables.
Mridula, D; Sethi, Swati; Tushir, Surya; Bhadwal, Sheetal; Gupta, R K; Nanda, S K
2017-08-01
The present study was undertaken to optimize the process conditions for development of food grains (maize, defatted soy flour, sesame seed)-banana based nutritious expanded snacks using extrusion processing. Experiments were designed using a Box-Behnken design with banana pulp (8-24 g), screw speed (300-350 rpm) and feed moisture (14-16% w.b.). Seven responses viz. expansion ratio (ER), bulk density (BD), water absorption index (WAI), protein, minerals, iron and sensory acceptability were considered for optimizing the independent parameters. ER, BD, WAI, protein content, total minerals, iron content, and overall acceptability ranged 2.69-3.36, 153.43-238.83 kg/m3, 4.56-4.88 g/g, 15.19-15.52%, 2.06-2.27%, 4.39-4.67 mg/100 g (w.b.) and 6.76-7.36, respectively. ER was significantly affected by all three process variables, while BD was influenced by banana pulp and screw speed only. The studied process variables did not affect colour quality, except for the 'a' value, which varied with banana pulp and screw speed. Banana pulp had a positive correlation with water solubility index, total minerals and iron content, and a negative correlation with WAI, protein and overall acceptability. Based upon multiple response analysis, the optimized conditions were 8 g banana pulp, 350 rpm screw speed and 14% feed moisture, giving protein, calorie, iron content and overall sensory acceptability values of 15.46%, 401 kcal/100 g, 4.48 mg/100 g and 7.6, respectively.
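The abstract reports a Box-Behnken design with three factors but does not give the raw observations; the sketch below shows, with made-up response values, how a second-order response surface of the kind used in such studies can be fitted by least squares. The coded design points follow the standard three-factor Box-Behnken layout; the expansion-ratio numbers are invented.

```python
# Hedged sketch: fitting a second-order (quadratic) response surface to
# Box-Behnken-style data with ordinary least squares. Response values are invented.
import numpy as np

# Coded levels (-1, 0, +1) for banana pulp, screw speed, feed moisture
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
y = np.array([3.1, 2.8, 3.3, 3.0, 3.2, 2.9, 3.0, 2.7,
              3.3, 3.4, 3.0, 3.1, 3.2, 3.2, 3.1])   # hypothetical expansion ratios

def quadratic_terms(x):
    x1, x2, x3 = x
    return [1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3]

A = np.array([quadratic_terms(row) for row in X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)      # intercept, linear, square, interaction terms
print("fitted second-order coefficients:", np.round(coeffs, 3))
```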
GRCop-84 Rolling Parameter Study
NASA Technical Reports Server (NTRS)
Loewenthal, William S.; Ellis, David L.
2008-01-01
This report is a section of the final report on the GRCop-84 task of the Constellation Program and incorporates the results obtained between October 2000 and September 2005, when the program ended. NASA Glenn Research Center (GRC) has developed a new copper alloy, GRCop-84 (Cu-8 at.% Cr-4 at.% Nb), for rocket engine main combustion chamber components that will improve rocket engine life and performance. This work examines the sensitivity of GRCop-84 mechanical properties to rolling parameters as a means to better define rolling parameters for commercial warm rolling. The experimental variables studied were total reduction, rolling temperature, rolling speed, and post-rolling annealing heat treatment. The responses were tensile properties measured at 23 and 500 C, hardness, and creep at three stress-temperature combinations. Understanding these relationships will better define boundaries for a robust commercial warm rolling process. The four processing parameters were varied within limits consistent with typical commercial production processes. Testing revealed that the rolling-related variables selected have a minimal influence on tensile, hardness, and creep properties over the range of values tested. Annealing had the expected result of lowering room-temperature hardness and strength while increasing room-temperature elongation, with 600 C (1112 F) having the most effect. These results indicate that the process conditions used to warm roll plate and sheet can range over wide levels for these variables without negatively impacting mechanical properties. Incorporating broader process ranges in future rolling campaigns should lower commercial rolling costs through increased productivity.
Evaluation of Kurtosis into the product of two normally distributed variables
NASA Astrophysics Data System (ADS)
Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio
2016-06-01
Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis for the product of two normally distributed variables. The product of two normal variables arises in many areas of study, such as physics, economics and psychology. Normal variables have a constant value of kurtosis (κ = 3), independently of the values of their two parameters, mean and variance. In fact, the excess kurtosis is defined as κ - 3, and the normal distribution kurtosis is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them; it ranges over [0, 6] for independent variables and over [0, 12] when correlation between them is allowed.
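The behaviour described here can be explored numerically; the sketch below estimates the excess kurtosis of the product of two correlated normal variables by Monte Carlo simulation, with arbitrary example means, variances and correlation.

```python
# Hedged sketch: Monte Carlo estimate of the kurtosis of the product of two
# correlated normal variables. Means, variances and correlation are arbitrary examples.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
mu = [1.0, 2.0]
rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]          # unit variances, correlation rho

x, y = rng.multivariate_normal(mu, cov, size=1_000_000).T
product = x * y

# scipy's default (Fisher) convention returns excess kurtosis, i.e. kappa - 3
print("excess kurtosis of the product:", kurtosis(product))
```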
Kumar, Raushan; Xavier, Ka Martin; Lekshmi, Manjusha; Dhanabalan, Vignaesh; Thachil, Madonna T; Balange, Amjad K; Gudipati, Venkateshwarlu
2018-04-01
Functional extruded snacks were prepared using paste shrimp powder (Acetes spp.), which is rich in protein. The process variables required for the preparation of extruded snacks were optimized using response surface methodology. Extrusion temperature (130-144 °C), level of Acetes powder (100-200 g kg-1) and feed moisture (140-200 g kg-1) were selected as design variables, and expansion ratio, porosity, hardness, crispness and thiobarbituric acid reactive substance value were taken as the response variables. Extrusion temperature significantly influenced all the response variables, while Acetes inclusion influenced all variables except porosity. Feed moisture content showed a significant quadratic effect on all responses and an interactive effect on expansion ratio and hardness. Shrimp powder incorporation increased the protein and mineral content of the final product. The extruded snack made with the combination of extrusion temperature 144.59 °C, feed moisture 178.5 g kg-1 and Acetes inclusion level 146.7 g kg-1 was found to be the best one based on sensory evaluation. The study suggests that the use of Acetes species for the development of extruded snacks provides a means of utilizing this by-catch, which would otherwise remain unexploited, as a rich source of protein for human consumption. © 2017 Society of Chemical Industry.
Demirçivi, Pelin; Saygılı, Gülhayat Nasün
2017-07-01
In this study, a different method was applied for boron removal by using vermiculite as the adsorbent. Vermiculite, which was used in the experiments, was not modified with adsorption agents before boron adsorption using a separate process. Hexadecyltrimethylammonium bromide (HDTMA) and Gallic acid (GA) were used as adsorption agents for vermiculite by maintaining the solid/liquid ratio at 12.5 g/L. HDTMA/GA concentration, contact time, pH, initial boron concentration, inert electrolyte and temperature effects on boron adsorption were analyzed. A three-factor, three-level Box-Behnken design model combined with response surface method (RSM) was employed to examine and optimize process variables for boron adsorption from aqueous solution by vermiculite using HDTMA and GA. Solution pH (2-12), temperature (25-60 °C) and initial boron concentration (50-8,000 mg/L) were chosen as independent variables and coded x1, x2 and x3 at three levels (-1, 0 and 1). Analysis of variance was used to test the significance of variables and their interactions with 95% confidence limit (α = 0.05). According to the regression coefficients, a second-order empirical equation was evaluated between the adsorption capacity (qi) and the coded variables tested (xi). Optimum values of the variables were also evaluated for maximum boron adsorption by vermiculite-HDTMA (HDTMA-Verm) and vermiculite-GA (GA-Verm).
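The second-order empirical equation mentioned here is not reproduced in the abstract; the general form of such a quadratic response surface model, written with generic coefficient symbols as an assumption about its structure, is

```latex
q = \beta_0 + \sum_{j=1}^{3}\beta_j x_j + \sum_{j=1}^{3}\beta_{jj} x_j^{2} + \sum_{j<k}\beta_{jk} x_j x_k
```

where q is the predicted adsorption capacity, x1, x2 and x3 are the coded pH, temperature and initial boron concentration, and the β terms are regression coefficients estimated from the Box-Behnken runs.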
Reversed Priming Effects May Be Driven by Misperception Rather than Subliminal Processing.
Sand, Anders
2016-01-01
A new paradigm for investigating whether a cognitive process is independent of perception was recently suggested. In the paradigm, primes are shown at an intermediate signal strength that leads to trial-to-trial and inter-individual variability in prime perception. Here, I used this paradigm and an objective measure of perception to assess the influence of prime identification responses on Stroop priming. I found that sensory states producing correct and incorrect prime identification responses were also associated with qualitatively different priming effects. Incorrect prime identification responses were associated with reversed priming effects but in contrast to previous studies, I interpret this to result from the (mis-)perception of primes rather than from a subliminal process. Furthermore, the intermediate signal strength also produced inter-individual variability in prime perception that strongly influenced priming effects: only participants who on average perceived the primes were Stroop primed. I discuss how this new paradigm, with a wide range of d' values, is more appropriate when regression analysis on inter-individual identification performance is used to investigate perception-dependent processing. The results of this study, in line with previous results, suggest that drawing conclusions about subliminal processes based on data averaged over individuals may be unwarranted.
Reversed Priming Effects May Be Driven by Misperception Rather than Subliminal Processing
Sand, Anders
2016-01-01
A new paradigm for investigating whether a cognitive process is independent of perception was recently suggested. In the paradigm, primes are shown at an intermediate signal strength that leads to trial-to-trial and inter-individual variability in prime perception. Here, I used this paradigm and an objective measure of perception to assess the influence of prime identification responses on Stroop priming. I found that sensory states producing correct and incorrect prime identification responses were also associated with qualitatively different priming effects. Incorrect prime identification responses were associated with reversed priming effects but in contrast to previous studies, I interpret this to result from the (mis-)perception of primes rather than from a subliminal process. Furthermore, the intermediate signal strength also produced inter-individual variability in prime perception that strongly influenced priming effects: only participants who on average perceived the primes were Stroop primed. I discuss how this new paradigm, with a wide range of d′ values, is more appropriate when regression analysis on inter-individual identification performance is used to investigate perception-dependent processing. The results of this study, in line with previous results, suggest that drawing conclusions about subliminal processes based on data averaged over individuals may be unwarranted. PMID:26925016
How Robust Is Linear Regression with Dummy Variables?
ERIC Educational Resources Information Center
Blankmeyer, Eric
2006-01-01
Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…
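As a concrete illustration of the model class described in this abstract (not of the article's own analysis), the hedged sketch below regresses a continuous outcome on one continuous regressor and a categorical variable expanded into dummies; the data are synthetic.

```python
# Hedged sketch: linear regression mixing a continuous regressor with dummy
# variables for group membership. The dataset is synthetic.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":  [52, 61, 58, 70, 66, 75, 63, 80, 72],
    "hours":  [2, 4, 3, 6, 5, 8, 4, 9, 7],
    "school": ["A", "A", "B", "B", "B", "C", "C", "C", "A"],
})

# C(school) expands the categorical variable into dummy indicators,
# leaving one reference category out to avoid perfect collinearity.
model = smf.ols("score ~ hours + C(school)", data=df).fit()
print(model.params)
```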
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Luzanov, A V
2008-09-07
The Wigner function for the pure quantum states is used as an integral kernel of the non-Hermitian operator K, to which the standard singular value decomposition (SVD) is applied. It provides a set of the squared singular values treated as probabilities of the individual phase-space processes, the latter being described by eigenfunctions of KK† (for coordinate variables) and K†K (for momentum variables). Such a SVD representation is employed to obviate the well-known difficulties in the definition of the phase-space entropy measures in terms of the Wigner function that usually allows negative values. In particular, the new measures of nonclassicality are constructed in the form that automatically satisfies additivity for systems composed of noninteracting parts. Furthermore, the emphasis is given on the geometrical interpretation of the full entropy measure as the effective phase-space volume in the Wigner picture of quantum mechanics. The approach is exemplified by considering some generic vibrational systems. Specifically, for eigenstates of the harmonic oscillator and a superposition of coherent states, the singular value spectrum is evaluated analytically. Numerical computations are given for the nonlinear problems (the Morse and double well oscillators, and the Henon-Heiles system). We also discuss the difficulties in implementation of a similar technique for electronic problems.
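The core construction described here, squared singular values normalized as probabilities and an entropy-type measure built from them, can be sketched numerically for any discretized kernel. The matrix K below is an arbitrary random matrix standing in for a discretized Wigner-function kernel, so the numbers carry no physical meaning.

```python
# Hedged sketch: squared singular values of a (discretized) kernel treated as
# probabilities, and an entropy-like measure built from them. K is a random
# placeholder, not an actual Wigner-function kernel.
import numpy as np

rng = np.random.default_rng(1)
K = rng.normal(size=(64, 64))              # placeholder for the operator kernel

s = np.linalg.svd(K, compute_uv=False)     # singular values
p = s**2 / np.sum(s**2)                    # squared singular values as probabilities

entropy = -np.sum(p * np.log(p))           # entropy over the phase-space "processes"
effective_volume = np.exp(entropy)         # exp(entropy) as an effective number of processes
print(entropy, effective_volume)
```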
Culture–gene coevolution of individualism–collectivism and the serotonin transporter gene
Chiao, Joan Y.; Blizinsky, Katherine D.
2010-01-01
Culture–gene coevolutionary theory posits that cultural values have evolved, are adaptive and influence the social and physical environments under which genetic selection operates. Here, we examined the association between cultural values of individualism–collectivism and allelic frequency of the serotonin transporter functional polymorphism (5-HTTLPR) as well as the role this culture–gene association may play in explaining global variability in prevalence of pathogens and affective disorders. We found evidence that collectivistic cultures were significantly more likely to comprise individuals carrying the short (S) allele of the 5-HTTLPR across 29 nations. Results further show that historical pathogen prevalence predicts cultural variability in individualism–collectivism owing to genetic selection of the S allele. Additionally, cultural values and frequency of S allele carriers negatively predict global prevalence of anxiety and mood disorder. Finally, mediation analyses further indicate that increased frequency of S allele carriers predicted decreased anxiety and mood disorder prevalence owing to increased collectivistic cultural values. Taken together, our findings suggest culture–gene coevolution between allelic frequency of 5-HTTLPR and cultural values of individualism–collectivism and support the notion that cultural values buffer genetically susceptible populations from increased prevalence of affective disorders. Implications of the current findings for understanding culture–gene coevolution of human brain and behaviour as well as how this coevolutionary process may contribute to global variation in pathogen prevalence and epidemiology of affective disorders, such as anxiety and depression, are discussed. PMID:19864286
NASA Astrophysics Data System (ADS)
Douglas, P. M.; Eiler, J. M.; Sessions, A. L.; Dawson, K.; Walter Anthony, K. M.; Smith, D. A.; Lloyd, M. K.; Yanay, E.
2016-12-01
Microbially produced methane is a globally important greenhouse gas, energy source, and biological substrate. Methane clumped isotope measurements have recently been developed as a new analytical tool for understanding the source of methane in different environments. When methane forms in isotopic equilibrium, clumped isotope values are determined by formation temperature, but in many cases microbial methane clumped isotope values deviate strongly from expected equilibrium values. Indeed, we observe a very wide range of clumped isotope values in microbial methane, which are likely strongly influenced by kinetic isotope effects, but thus far the biological and environmental parameters controlling this variability are not understood. We will present data from both culture experiments and natural environments to explore patterns of variability in non-equilibrium clumped isotope values on temporal and spatial scales. In methanogen batch cultures sampled at different time points along a growth curve we observe significant variability in clumped isotope values, with values decreasing from early to late exponential growth. Clumped isotope values then increase during stationary growth. This result is consistent with previous work suggesting that differences in the reversibility of methanogenesis related to metabolic rates control non-equilibrium clumped isotope values. Within single lakes in Alaska and Sweden we observe substantial variability in clumped isotope values on the order of 5‰. Lower clumped isotope values are associated with larger 2H isotopic fractionation between water and methane, which is also consistent with a kinetic isotope effect determined by the reversibility of methanogenesis. Finally, we analyzed a time series of clumped isotope compositions of methane emitted from two seeps in an Alaskan lake over several months. Temporal variability in these seeps is on the order of 2‰, which is much less than the observed spatial variability within the lake. Comparing carbon isotope fractionation between CO2 and CH4 with clumped isotope data suggests the temporal variability may result from changes in methane oxidation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
This paper intends to reveal the ability of the linear interpolation method to predict missing values in solar radiation time series. A reliable dataset requires a complete observed time series. The absence or presence of radiation data alters the long-term variation of solar radiation measurement values, and based on that change, the likelihood of biased output for modelling and for the validation process is higher. Completeness of the observed dataset is significantly important for data analysis. Gaps in continual and unreliable time series of solar radiation data are widespread and have become the main problematic issue; however, only a limited number of studies have been carried out that emphasize and give full attention to estimating missing values in solar radiation datasets.
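As a minimal illustration of the gap-filling method evaluated here, the sketch below linearly interpolates missing values in an hourly solar radiation series; the timestamps and radiation values are invented, not campaign data.

```python
# Hedged sketch: filling gaps in a solar radiation time series by linear
# interpolation. The hourly values (W/m^2) are invented for illustration.
import numpy as np
import pandas as pd

index = pd.date_range("2015-06-01 06:00", periods=8, freq="H")
radiation = pd.Series([120, 260, np.nan, np.nan, 710, 640, np.nan, 310], index=index)

filled = radiation.interpolate(method="linear")   # straight line between known neighbours
print(filled)
```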
Kim, Jungmin; Park, Juyong; Lee, Wonjae
2018-01-01
The quality of life for people in urban regions can be improved by predicting urban human mobility and adjusting urban planning accordingly. In this study, we compared several possible variables to verify whether a gravity model (a human mobility prediction model borrowed from Newtonian mechanics) worked as well in inner-city regions as it did in intra-city regions. We reviewed the resident population, the number of employees, and the number of SNS posts as variables for generating mass values for an urban traffic gravity model. We also compared the straight-line distance, travel distance, and the impact of time as possible distance values. We defined the functions of urban regions on the basis of public records and SNS data to reflect the diverse social factors in urban regions. In this process, we conducted a dimension reduction method for the public record data and used a machine learning-based clustering algorithm for the SNS data. In doing so, we found that functional distance could be defined as the Euclidean distance between social function vectors in urban regions. Finally, we examined whether the functional distance was a variable that had a significant impact on urban human mobility.
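The gravity model referred to here, in which the flow between two regions is proportional to the product of their "masses" divided by a power of the distance, can be sketched as follows; the masses, distances, scaling constant and distance-decay exponent are invented placeholders, not values estimated in the study.

```python
# Hedged sketch: a simple gravity model for trips between urban regions,
# T_ij = k * m_i * m_j / d_ij**beta. Masses (e.g. employees or SNS posts),
# distances and parameters are invented placeholders.
import numpy as np

mass = np.array([12_000, 5_500, 20_000])           # one "mass" value per region
dist = np.array([[np.inf, 3.2, 7.5],
                 [3.2, np.inf, 5.1],
                 [7.5, 5.1, np.inf]])               # pairwise distances (km); inf blocks self-trips

k, beta = 0.01, 2.0                                 # scaling constant and distance-decay exponent
trips = k * np.outer(mass, mass) / dist**beta       # predicted flows between regions
print(np.round(trips, 1))
```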
Quantum anonymous voting with unweighted continuous-variable graph states
NASA Astrophysics Data System (ADS)
Guo, Ying; Feng, Yanyan; Zeng, Guihua
2016-08-01
Motivated by the revealing topological structures of the continuous-variable graph state (CVGS), we investigate the design of a quantum voting scheme, which has significant advantages over conventional ones in terms of efficiency and graphicness. Three phases are included, i.e., the preparing phase, the voting phase and the counting phase, together with three parties, i.e., the voters, the tallyman and the ballot agency. Two major voting operations are performed on the yielded CVGS in the voting process, namely the local rotation transformation and the displacement operation. The voting information is carried by the CVGS established beforehand, whose persistent entanglement is deployed to keep the privacy of votes and the anonymity of legal voters. For practical applications, two CVGS-based quantum ballots, i.e., comparative ballot and anonymous survey, are specially designed, followed by the extended ballot schemes for the binary-valued and multi-valued ballots under some constraints for the voting design. Security is ensured by the entanglement of the CVGS, the voting operations and the laws of quantum mechanics. The proposed schemes can be implemented using standard off-the-shelf components when compared to discrete-variable quantum voting schemes, owing to the characteristics of CV-based quantum cryptography.
A method for predicting optimized processing parameters for surfacing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupont, J.N.; Marder, A.R.
1994-12-31
Welding is used extensively for surfacing applications. To operate a surfacing process efficiently, the variables must be optimized to produce low levels of dilution with the substrate while maintaining high deposition rates. An equation for dilution in terms of the welding variables, thermal efficiency factors, and thermophysical properties of the overlay and substrate was developed by balancing energy and mass terms across the welding arc. To test the validity of the resultant dilution equation, the PAW, GTAW, GMAW, and SAW processes were used to deposit austenitic stainless steel onto carbon steel over a wide range of parameters. Arc efficiency measurements were conducted using a Seebeck arc welding calorimeter. Melting efficiency was determined based on knowledge of the arc efficiency. Dilution was determined for each set of processing parameters using a quantitative image analysis system. The pertinent equations indicate dilution is a function of arc power (corrected for arc efficiency), filler metal feed rate, melting efficiency, and thermophysical properties of the overlay and substrate. With the aid of the dilution equation, the effect of processing parameters on dilution is presented by a new processing diagram. A new method is proposed for determining dilution from welding variables. Dilution is shown to depend on the arc power, filler metal feed rate, arc and melting efficiency, and the thermophysical properties of the overlay and substrate. Calculated dilution levels were compared with measured values over a large range of processing parameters and good agreement was obtained. The results have been applied to generate a processing diagram which can be used to: (1) predict the maximum deposition rate for a given arc power while maintaining adequate fusion with the substrate, and (2) predict the resultant level of dilution with the substrate.
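The abstract describes, but does not reproduce, the dilution equation obtained from an energy and mass balance across the arc. The sketch below implements a generic balance of that kind, in which arc power corrected for arc and melting efficiency melts both the filler and the substrate; the functional form, property values and efficiencies are assumptions for illustration, not the authors' published equation.

```python
# Hedged sketch of a dilution estimate from a generic arc energy/mass balance
# (not necessarily the authors' exact equation). All numbers are placeholders.

def dilution(arc_power_w, arc_eff, melt_eff, filler_feed_m3_s,
             filler_melt_enthalpy_j_m3, substrate_melt_enthalpy_j_m3):
    """Fraction of the melt volume that comes from the substrate."""
    melting_power = arc_eff * melt_eff * arc_power_w                   # power actually available for melting
    power_into_filler = filler_feed_m3_s * filler_melt_enthalpy_j_m3   # power consumed melting the filler
    substrate_melt_rate = max(melting_power - power_into_filler, 0.0) / substrate_melt_enthalpy_j_m3
    return substrate_melt_rate / (substrate_melt_rate + filler_feed_m3_s)

# Example with placeholder numbers: 5 kW arc, 80% arc efficiency, 30% melting
# efficiency, 1e-7 m^3/s wire feed, ~1e10 J/m^3 melting enthalpies.
print(dilution(5_000, 0.80, 0.30, 1e-7, 1.0e10, 1.0e10))
```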
NASA Astrophysics Data System (ADS)
González-Dávila, M.; Santana-González, C.; Santana-Casiano, J. M.
2017-12-01
The eruptive process that took place in October 2011 in the submarine volcano Tagoro off the Island of El Hierro (Canary Islands), and the subsequent degasification stage five months later, increased the concentration of TdFe(II) (total dissolved iron(II)) in the waters nearest to the volcanic edifice. In order to detect any variation in concentrations of TdFe(II) due to hydrothermal emissions, a series of cruises was carried out starting two years after the eruptive process, in October 2013, March 2014, May 2015, March 2016 and November 2016. The results from these cruises confirmed important positive anomalies in TdFe(II), which coincided with negative anomalies in pHF,is (pH on the free scale, at in situ conditions) located in the proximity of the main cone. Maximum values in TdFe(II) both at the surface, associated with the chlorophyll a maximum, and at the sea bottom were also observed, showing the important influence of organic complexation and particle re-suspension processes. Temporal variability studies were carried out over periods ranging from hours to days at the stations located over the main cone and two secondary cones of the volcanic edifice, with positive anomalies in TdFe(II) concentrations and negative anomalies in pHF,is values. Observations showed important variability in both pHF,is and TdFe(II) concentrations, which indicated that the volcanic area was affected by a degasification process that remained in the volcano after the eruptive phase had ceased. Fe(II) oxidation kinetic studies were also undertaken in order to analyze the effects of the seawater properties in the proximity of the volcano on the oxidation rate constants and t1/2 (half-life time) of ferrous iron. The increased TdFe(II) concentrations and the associated low pHF,is values acted as an important fertilization event in the seawater around the Tagoro volcano at the Island of El Hierro, providing optimal conditions for the regeneration of the area.
NASA Astrophysics Data System (ADS)
Gaunt, H. E.; Bernard, B.; Hidalgo, S.; Proaño, A.; Wright, H. M. N.; Mothes, P. A.; Criollo, E.
2016-12-01
The eruptive process that took place in October 2011 in the submarine volcano Tagoro off the Island of El Hierro (Canary Islands), and the subsequent degasification stage five months later, increased the concentration of TdFe(II) (total dissolved iron(II)) in the waters nearest to the volcanic edifice. In order to detect any variation in concentrations of TdFe(II) due to hydrothermal emissions, a series of cruises was carried out starting two years after the eruptive process, in October 2013, March 2014, May 2015, March 2016 and November 2016. The results from these cruises confirmed important positive anomalies in TdFe(II), which coincided with negative anomalies in pHF,is (pH on the free scale, at in situ conditions) located in the proximity of the main cone. Maximum values in TdFe(II) both at the surface, associated with the chlorophyll a maximum, and at the sea bottom were also observed, showing the important influence of organic complexation and particle re-suspension processes. Temporal variability studies were carried out over periods ranging from hours to days at the stations located over the main cone and two secondary cones of the volcanic edifice, with positive anomalies in TdFe(II) concentrations and negative anomalies in pHF,is values. Observations showed important variability in both pHF,is and TdFe(II) concentrations, which indicated that the volcanic area was affected by a degasification process that remained in the volcano after the eruptive phase had ceased. Fe(II) oxidation kinetic studies were also undertaken in order to analyze the effects of the seawater properties in the proximity of the volcano on the oxidation rate constants and t1/2 (half-life time) of ferrous iron. The increased TdFe(II) concentrations and the associated low pHF,is values acted as an important fertilization event in the seawater around the Tagoro volcano at the Island of El Hierro, providing optimal conditions for the regeneration of the area.
Effect of initial bulk density on high-solids anaerobic digestion of MSW: General mechanism.
Caicedo, Luis M; Wang, Hongtao; Lu, Wenjing; De Clercq, Djavan; Liu, Yanjun; Xu, Sai; Ni, Zhe
2017-06-01
Initial bulk density (IBD) is an important variable in anaerobic digestion since it defines and optimizes the treatment capacity of a system. This study reveals the mechanism by which IBD might affect anaerobic digestion of waste. Four different IBD values, D1 (500-700 kg m-3), D2 (900-1000 kg m-3), D3 (1100-1200 kg m-3) and D4 (1200-1400 kg m-3), were set and tested over a period of 90 days in simulated landfill reactors. The main variables affected by the IBD are the methane generation, saturation degree, extraction of organic matter, and the total population of methanogens. The study identified that an IBD > 1000 kg m-3 may have a significant effect on methane generation, either prolonging the lag time or completely inhibiting the process. This study provides a new understanding of the anaerobic digestion process in saturated high-solids systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
A nonlinear quality-related fault detection approach based on modified kernel partial least squares.
Jiao, Jianfang; Zhao, Ning; Wang, Guang; Yin, Shen
2017-01-01
In this paper, a new nonlinear quality-related fault detection method is proposed based on a kernel partial least squares (KPLS) model. To deal with the nonlinear characteristics among process variables, the proposed method maps these original variables into a feature space in which the linear relationship between the kernel matrix and the output matrix is realized by means of KPLS. Then the kernel matrix is decomposed into two orthogonal parts by singular value decomposition (SVD), and the statistics for each part are determined appropriately for the purpose of quality-related fault detection. Compared with relevant existing nonlinear approaches, the proposed method has the advantages of simple diagnosis logic and stable performance. A widely used literature example and an industrial process are used for the performance evaluation of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
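Two of the building blocks named in this abstract, a kernel mapping of the process variables and an SVD of the resulting kernel matrix, can be sketched generically as below. This is not the authors' full quality-related KPLS procedure; the data, kernel width and the rank-5 split are invented for illustration.

```python
# Hedged sketch of two ingredients mentioned in the abstract: kernelizing the
# process variables and splitting the centered kernel matrix with an SVD.
# NOT the full quality-related KPLS method; data and parameters are invented.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))            # 100 samples of 8 process variables

K = rbf_kernel(X, gamma=0.1)             # nonlinear mapping via an RBF kernel matrix
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H                           # center the kernel matrix in feature space

U, s, Vt = np.linalg.svd(Kc)             # orthogonal decomposition of the kernel matrix
principal = U[:, :5] @ np.diag(s[:5]) @ Vt[:5, :]   # dominant part
residual = Kc - principal                            # residual part for separate statistics
print(s[:5])
```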
NASA Astrophysics Data System (ADS)
Raj, Rahul; van der Tol, Christiaan; Hamm, Nicholas Alexander Samuel; Stein, Alfred
2018-01-01
Parameters of a process-based forest growth simulator are difficult or impossible to obtain from field observations. Reliable estimates can be obtained using calibration against observations of output and state variables. In this study, we present a Bayesian framework to calibrate the widely used process-based simulator Biome-BGC against estimates of gross primary production (GPP) data. We used GPP partitioned from flux tower measurements of a net ecosystem exchange over a 55-year-old Douglas fir stand as an example. The uncertainties of both the Biome-BGC parameters and the simulated GPP values were estimated. The calibrated parameters leaf and fine root turnover (LFRT), ratio of fine root carbon to leaf carbon (FRC : LC), ratio of carbon to nitrogen in leaf (C : Nleaf), canopy water interception coefficient (Wint), fraction of leaf nitrogen in RuBisCO (FLNR), and effective soil rooting depth (SD) characterize the photosynthesis and carbon and nitrogen allocation in the forest. The calibration improved the root mean square error and enhanced Nash-Sutcliffe efficiency between simulated and flux tower daily GPP compared to the uncalibrated Biome-BGC. Nevertheless, the seasonal cycle for flux tower GPP was not reproduced exactly and some overestimation in spring and underestimation in summer remained after calibration. We hypothesized that the phenology exhibited a seasonal cycle that was not accurately reproduced by the simulator. We investigated this by calibrating the Biome-BGC to each month's flux tower GPP separately. As expected, the simulated GPP improved, but the calibrated parameter values suggested that the seasonal cycle of state variables in the simulator could be improved. It was concluded that the Bayesian framework for calibration can reveal features of the modelled physical processes and identify aspects of the process simulator that are too rigid.
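Two of the goodness-of-fit measures mentioned here, the root mean square error and the Nash-Sutcliffe efficiency between simulated and observed daily GPP, can be computed as in the sketch below; the GPP values are placeholders, not flux tower data.

```python
# Hedged sketch: RMSE and Nash-Sutcliffe efficiency (NSE) between simulated and
# observed daily GPP series. The values below are placeholders.
import numpy as np

gpp_obs = np.array([2.1, 3.4, 5.0, 6.2, 7.8, 6.9, 4.3])   # observed (e.g. gC m-2 d-1)
gpp_sim = np.array([2.5, 3.1, 4.6, 6.8, 7.1, 6.2, 4.9])   # simulated by the growth model

rmse = np.sqrt(np.mean((gpp_sim - gpp_obs) ** 2))
nse = 1.0 - np.sum((gpp_sim - gpp_obs) ** 2) / np.sum((gpp_obs - gpp_obs.mean()) ** 2)
print(f"RMSE = {rmse:.2f}, NSE = {nse:.2f}")
```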
Predictive displays for a process-control schematic interface.
Yin, Shanqing; Wickens, Christopher D; Helander, Martin; Laberge, Jason C
2015-02-01
Our objective was to examine the extent to which increasing the precision of predictive (rate of change) information in process control will improve performance on a simulated process-control task. Predictive displays have been found to be useful in process control (as well as in the aviation and maritime industries). However, authors of prior research have not examined the extent to which predictive value is increased by increasing predictor resolution, nor has such research tied potential improvements to changes in process-control strategy. Fifty nonprofessional participants each controlled a simulated chemical mixture process (a honey mixer simulation) that reproduced the operations found in process control. Participants in each of five groups used either no predictor or a predictor that varied in resolution across groups. Increasing detail resolution generally increased the benefit of prediction over the control condition, although not monotonically so. The best overall performance, combining quality and predictive ability, was obtained by the display of intermediate resolution. The two displays with the lowest resolution were clearly inferior. Predictors with higher resolution are of value but may trade off enhanced sensitivity to variable change (lower-resolution discrete state predictor) with smoother control action (higher-resolution continuous predictors). The research provides guidelines to the process-control industry regarding displays that can most improve operator performance.
Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L
2016-10-01
We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures. © The Author(s) 2015.
Time Resolved Spectroscopy of Cepheid Variable Stars
NASA Astrophysics Data System (ADS)
Hartman, Katherine; Beaton, Rachael L.; SDSS-IV APOGEE-2 Team
2018-01-01
Galactic Cepheid variable stars have been used for over a century as standard candles and as the first rung of the cosmic distance ladder, integral to the calculation of the Hubble constant. However, it is challenging to observe Cepheids within the Milky Way Galaxy because of extinction, and there are still uncertainties in the Cepheid period-luminosity relation (or Leavitt Law) that affect these important distance calculations. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey has provided spectra for a large sample of Galactic Cepheids, but the standard chemical abundance pipeline (ASPCAP) processing is not well suited to pulsational variables, preventing us from using them to study the metallicity effect in the Leavitt Law with standard processing. Using a standalone version of the ASPCAP pipeline, we present an analysis of individual visit spectra from a test sample of nine APOGEE Cepheids, and we compare its output to the stars' literature abundance values. Based on the results of this comparison, we will be able to improve the standard analysis and process the entirety of APOGEE's Cepheid catalogue to improve its abundance measurements. The resulting abundance data will allow us to constrain the effect of metallicity on the Leavitt Law and thus allow for more accurate Cepheid distance measurements for the determination of the Hubble constant.
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
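A minimal sketch of the Monte Carlo propagation idea described here: perturb the measured inputs by their standard uncertainties, push each sample through a data-reduction equation, and take the spread of the result as the combined uncertainty. The reduction equation (dynamic pressure from density and velocity) and the uncertainty values are illustrative stand-ins, not the NTF data-reduction equations.

```python
# Hedged sketch: Monte Carlo propagation of measurement uncertainty through a
# simple data-reduction equation. Equation and uncertainties are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

rho = rng.normal(1.20, 0.01, n)     # density estimate with its standard uncertainty
vel = rng.normal(250.0, 0.5, n)     # velocity estimate with its standard uncertainty

q = 0.5 * rho * vel**2              # data-reduction equation applied sample by sample

print("mean q:", q.mean())
print("combined standard uncertainty of q:", q.std(ddof=1))
```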
NASA Astrophysics Data System (ADS)
Sapucci, L. F.; Monico, J. G.; Machado, L. T.
2007-05-01
In 2010, a new air traffic navigation and management system, denominated CNS-ATM (Communication Navigation Surveillance - Air Traffic Management), is expected to be running operationally in South America. This new system will rely on satellite positioning techniques for air traffic management and control. However, the efficiency of this new system demands knowledge of the behavior of the atmosphere and, consequently, appropriate Zenithal Tropospheric Delay (ZTD) modeling at a regional scale. Prediction of ZTD values from Numerical Weather Prediction (NWP), denominated here dynamic modeling, is an alternative for modeling the effects of atmospheric gases on radio-frequency signals in real time. The Brazilian Center for Weather Forecasting and Climate Studies (CPTEC) of the National Institute for Space Research (INPE), jointly with researchers from UNESP (Sao Paulo State University), has generated operational predictions of ZTD values for the South American continent (available at the electronic address http:satelite.cptec.inpe.br/htmldocs/ztd/zenithal.htm). The available regional version is obtained using the ETA model (an NWP model with a horizontal resolution of 20 km and 42 vertical levels). The application of NWP makes it possible to assess the temporal and spatial variation of ZTD values, which is an important characteristic of this technique. The aim of the present paper is to investigate the ZTD seasonal variability over the South American continent. A variability analysis of the ZTD components [hydrostatic (ZHD) and wet (ZWD)] is also presented, together with a discussion of the main factors that influence this variation in the region. The hydrostatic component variation is related to atmospheric pressure oscillations, which are influenced by relief and by the high-pressure centers that prevail over different regions of the South American continent. The wet component oscillation is due to temperature and humidity variability, which is also influenced by relief and by synoptic events such as the penetration of cold fronts from the Antarctic into the continent and the occurrence of moisture convergence zones. In South America there are two main convergence zones that have a strong influence on tropospheric variability, the ITCZ (Intertropical Convergence Zone) and the SACZ (South Atlantic Convergence Zone). These convergence zones are characterized by an extensive, nearly stationary band of precipitation and high cloudiness. The physical processes associated with these convergence zones have a strong impact on the variability of ZWD values. This work aims to contribute to ZTD modeling over the South American continent using NWP, identifying where and when ZTD values present lower predictability and, consequently, minimizing the error in GNSS positioning based on this technique.
NASA Astrophysics Data System (ADS)
Goris, N.; Elbern, H.
2015-12-01
Measurements of the large-dimensional chemical state of the atmosphere provide only sparse snapshots of the state of the system due to their typically insufficient temporal and spatial density. In order to optimize the measurement configurations despite those limitations, the present work describes the identification of sensitive states of the chemical system as optimal target areas for adaptive observations. For this purpose, the technique of singular vector analysis (SVA), which has proven effective for targeted observations in numerical weather prediction, is implemented in the EURAD-IM (EURopean Air pollution and Dispersion - Inverse Model) chemical transport model, yielding the EURAD-IM-SVA v1.0. Besides initial values, emissions are investigated as critical simulation-controlling targeting variables. For both variants, singular vectors are applied to determine the optimal placement of observations and, moreover, to quantify which chemical compounds should be observed preferentially. Based on measurements of the airship-based ZEPTER-2 campaign, the EURAD-IM-SVA v1.0 has been evaluated by conducting a comprehensive set of model runs involving different initial states and simulation lengths. For the sake of brevity, we concentrate our attention on the following chemical compounds, O3, NO, NO2, HCHO, CO, HONO, and OH, and focus on their influence on selected O3 profiles. Our analysis shows that the optimal placement of observations of chemical species is not entirely determined by mere transport and mixing processes. Rather, a combination of initial chemical concentrations, chemical conversions, and meteorological processes determines the influence of chemical compounds and regions. We furthermore demonstrate that the optimal placement of observations of emission strengths is highly dependent on the location of emission sources and that the benefit of including emissions as target variables outperforms the value of initial value optimization with growing simulation length. The obtained results confirm the benefit of considering both initial values and emission strengths as target variables and of applying the EURAD-IM-SVA v1.0 for measurement decision guidance with respect to chemical compounds.
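The singular vectors used here are directions in the initial-value (or emission) space whose perturbations grow most strongly under the model's linearized dynamics. The toy sketch below illustrates the underlying linear algebra with an arbitrary random matrix standing in for the tangent-linear chemistry-transport model.

```python
# Hedged sketch: leading singular vectors of a toy tangent-linear propagator M.
# In the paper, M would be the linearization of the chemistry-transport model;
# here it is just a random matrix used to illustrate the linear algebra.
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(30, 30))          # toy propagator: initial perturbation -> final perturbation

U, s, Vt = np.linalg.svd(M)

# Right singular vectors (rows of Vt) are initial-state directions ordered by
# how strongly the propagator amplifies them; s[0] is the maximum growth factor.
leading_direction = Vt[0]
print("largest singular value (max perturbation growth):", s[0])
print("most sensitive initial-state direction (first 5 components):", np.round(leading_direction[:5], 3))
```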
van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne
2016-06-01
Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), software that uses optimization algorithms to generate landscapes that match user-defined target values. LG was originally developed for small-scale participatory spatial planning; we enhanced its usability and demonstrated how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
Adsorption of sunset yellow FCF from aqueous solution by chitosan-modified diatomite.
Zhang, Y Z; Li, J; Li, W J; Li, Y
2015-01-01
Sunset yellow (SY) FCF is a hazardous azo dye pollutant found in food processing effluent. This study investigates the use of diatomaceous earth modified with chitosan (DE@C) as an adsorbent for the removal of SY from wastewater. Fourier transform infrared spectroscopy results indicate the importance of functional groups during the adsorption of SY. The obtained N2 adsorption-desorption isotherm values accord well with IUPAC type II. Our calculations determined a surface area of 69.68 m2 g-1 for DE@C and an average pore diameter of 4.85 nm. Using response surface methodology, optimized conditions of the process variables for dye adsorption were obtained. For the adsorption of SY onto DE@C, this study establishes mathematical models for the optimization of pH, contact time and initial dye concentration. Contact time plays a greater role in the adsorption process than either pH or initial dye concentration. According to the adjusted correlation coefficient (adj-R2 > 0.97), the models used here are suitable for describing the adsorption process. The predicted optimum conditions were a pH of 2.40, an initial dye concentration of 113 mg L-1 and a contact time of 30.37 minutes. The experimental value for the adsorption rate (92.54%) was close to the value predicted by the models (95.29%).
Romero-Cortes, Teresa; Salgado-Cervantes, Marco Antonio; García-Alamilla, Pedro; García-Alvarado, Miguel Angel; Rodríguez-Jimenes, Guadalupe del C; Hidalgo-Morales, Madeleine; Robles-Olvera, Víctor
2013-08-15
During traditional cocoa processing, the end of fermentation is empirically determined by the workers; consequently, high variability in the quality of fermented cocoa beans is observed. Some physicochemical properties (such as the fermentation index) have been used to measure the degree of fermentation and changes in quality, but only after the fermentation process has concluded, using dried cocoa beans. This suggests that it is necessary to establish a relationship between the chemical changes inside the cocoa bean and the fermentation conditions during fermentation in order to standardize the process. Cocoa beans were traditionally fermented inside wooden boxes, sampled every 24 h and analyzed to evaluate fermentation changes in the complete bean, the cotyledon and dried beans. The fermentation index value suggested as minimally adequate (≥1) was observed at 72 h in all bean parts analyzed. At this time, the values of pH, spectral absorption, total protein hydrolysis and vicilin-class globulins of the fermented beans suggested that they were well fermented. Since no difference was found between the types of samples, the pH value could be used as a first indicator of the end of fermentation, confirmed by evaluating the fermentation index on undried samples during the process. © 2013 Society of Chemical Industry.
Ruiz-Cooley, Rocio I.; Koch, Paul L.; Fiedler, Paul C.; McCarthy, Matthew D.
2014-01-01
Climatic variation alters biochemical and ecological processes, but it is difficult both to quantify the magnitude of such changes, and to differentiate long-term shifts from inter-annual variability. Here, we simultaneously quantify decade-scale isotopic variability at the lowest and highest trophic positions in the offshore California Current System (CCS) by measuring δ15N and δ13C values of amino acids in a top predator, the sperm whale (Physeter macrocephalus). Using a time series of skin tissue samples as a biological archive, isotopic records from individual amino acids (AAs) can reveal the proximate factors driving a temporal decline we observed in bulk isotope values (a decline of ≥1 ‰) by decoupling changes in primary producer isotope values from those linked to the trophic position of this toothed whale. A continuous decline in baseline (i.e., primary producer) δ15N and δ13C values was observed from 1993 to 2005 (a decrease of ∼4‰ for δ15N source-AAs and 3‰ for δ13C essential-AAs), while the trophic position of whales was variable over time and it did not exhibit directional trends. The baseline δ15N and δ13C shifts suggest rapid ongoing changes in the carbon and nitrogen biogeochemical cycling in the offshore CCS, potentially occurring at faster rates than long-term shifts observed elsewhere in the Pacific. While the mechanisms forcing these biogeochemical shifts remain to be determined, our data suggest possible links to natural climate variability, and also corresponding shifts in surface nutrient availability. Our study demonstrates that isotopic analysis of individual amino acids from a top marine mammal predator can be a powerful new approach to reconstructing temporal variation in both biochemical cycling and trophic structure. PMID:25329915
Ruiz-Cooley, Rocio I; Koch, Paul L; Fiedler, Paul C; McCarthy, Matthew D
2014-01-01
Climatic variation alters biochemical and ecological processes, but it is difficult both to quantify the magnitude of such changes, and to differentiate long-term shifts from inter-annual variability. Here, we simultaneously quantify decade-scale isotopic variability at the lowest and highest trophic positions in the offshore California Current System (CCS) by measuring δ15N and δ13C values of amino acids in a top predator, the sperm whale (Physeter macrocephalus). Using a time series of skin tissue samples as a biological archive, isotopic records from individual amino acids (AAs) can reveal the proximate factors driving a temporal decline we observed in bulk isotope values (a decline of ≥1 ‰) by decoupling changes in primary producer isotope values from those linked to the trophic position of this toothed whale. A continuous decline in baseline (i.e., primary producer) δ15N and δ13C values was observed from 1993 to 2005 (a decrease of ∼4‰ for δ15N source-AAs and 3‰ for δ13C essential-AAs), while the trophic position of whales was variable over time and it did not exhibit directional trends. The baseline δ15N and δ13C shifts suggest rapid ongoing changes in the carbon and nitrogen biogeochemical cycling in the offshore CCS, potentially occurring at faster rates than long-term shifts observed elsewhere in the Pacific. While the mechanisms forcing these biogeochemical shifts remain to be determined, our data suggest possible links to natural climate variability, and also corresponding shifts in surface nutrient availability. Our study demonstrates that isotopic analysis of individual amino acids from a top marine mammal predator can be a powerful new approach to reconstructing temporal variation in both biochemical cycling and trophic structure.
Coogan, Matthew A; Karash, Karla H; Adler, Thomas; Sallis, James
2007-01-01
To examine the association of personal values, the built environment, and auto availability with walking for transportation. Participants were drawn from 11 U.S. metropolitan areas with good transit services. 865 adults who had recently made or were contemplating making a residential move. Respondents reported whether walking was their primary mode for nine trip purposes. "Personal values" reflected ratings of 15 variables assessing attitudes about urban and environmental attributes, with high reliability (α = 0.85). Neighborhood form was indicated by a three-item scale. Three binary variables were created to reflect (1) personal values, (2) neighborhood form, and (3) auto availability. The association with walking was reported for each of the three variables, each combination of two variables, and the combination of three variables. An analysis of covariance was applied, and a hierarchic linear regression model was developed. All three variables were associated with walking, and all three variables interacted. The standardized coefficients were 0.23 for neighborhood form, 0.21 for autos per person, and 0.18 for personal values. Positive attitudes about urban attributes, living in a supportive neighborhood, and low automobile availability significantly predicted more walking for transportation. A framework for further research is proposed in which a factor representing the role of the automobile is examined explicitly in addition to personal values and urban form.
Dodge, Hiroko H; Zhu, Jian; Harvey, Danielle; Saito, Naomi; Silbert, Lisa C; Kaye, Jeffrey A; Koeppe, Robert A; Albin, Roger L
2014-11-01
It is unknown which commonly used Alzheimer disease (AD) biomarker values, baseline or progression, best predict longitudinal cognitive decline. 526 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). ADNI composite memory and executive scores were the primary outcomes. The individual-specific slope of the longitudinal trajectory of each biomarker was first estimated. These estimates and observed baseline biomarker values were used as predictors of cognitive decline. Variability in cognitive decline explained by baseline biomarker values was compared with variability explained by biomarker progression values. About 40% of the variability in memory and executive function declines was explained by ventricular volume progression among mild cognitive impairment patients. A total of 84% of memory and 65% of executive function declines were explained by fluorodeoxyglucose positron emission tomography (FDG-PET) score progression and ventricular volume progression, respectively, among AD patients. For most biomarkers, biomarker progression explained more variability in cognitive decline than baseline biomarker values. This has important implications for clinical trials targeted to modify AD biomarkers. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Resing, Wilma C M; Bakker, Merel; Pronk, Christine M E; Elliott, Julian G
2017-01-01
The current study investigated developmental trajectories of analogical reasoning performance of 104 7- and 8-year-old children. We employed a microgenetic research method and multilevel analysis to examine the influence of several background variables and experimental treatment on the children's developmental trajectories. Our participants were divided into two treatment groups: repeated practice alone and repeated practice with training. Each child received an initial working memory assessment and was subsequently asked to solve figural analogies on each of several sessions. We examined children's analogical problem-solving behavior and their subsequent verbal accounts of their employed solving processes. We also investigated the influence of verbal and visual-spatial working memory capacity and initial variability in strategy use on analogical reasoning development. Results indicated that children in both treatment groups improved but that gains were greater for those who had received training. Training also reduced the influence of children's initial variability in the use of analogical strategies with the degree of improvement in reasoning largely unrelated to working memory capacity. Findings from this study demonstrate the value of a microgenetic research method and the use of multilevel analysis to examine inter- and intra-individual change in problem-solving processes. Copyright © 2016 Elsevier Inc. All rights reserved.
Learning an intrinsic-variable preserving manifold for dynamic visual tracking.
Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu
2010-06-01
Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; de Azevedo Mello, Paola; Ferrão, Marco Flores; de Fátima Pereira dos Santos, Maria; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. Pre-treatment based on multiplicative scatter correction (MSC) with mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm(-1)). This model produced an RMSECV of 400 mg kg(-1) S and an RMSEP of 420 mg kg(-1) S, with a correlation coefficient of 0.990. Copyright © 2011 Elsevier B.V. All rights reserved.
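The interval-selection idea behind iPLS can be sketched as follows: fit a PLS model on each spectral interval and keep the interval with the lowest cross-validated error. This is a simplified stand-in assuming synthetic spectra and an arbitrary interval count; the siPLS variant described above searches combinations of intervals rather than single intervals.

    # Simplified iPLS-style interval selection with scikit-learn; synthetic spectra
    # stand in for the FT-IR/ATR calibration set.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 400))          # 40 calibration spectra, 400 wavenumber points
    y = X[:, 150:170].mean(axis=1) + 0.05 * rng.normal(size=40)  # mock "sulfur" response

    n_intervals = 20
    width = X.shape[1] // n_intervals
    rmse = []
    for i in range(n_intervals):
        Xi = X[:, i * width:(i + 1) * width]
        y_cv = cross_val_predict(PLSRegression(n_components=3), Xi, y, cv=5)
        rmse.append(np.sqrt(np.mean((y - y_cv.ravel()) ** 2)))

    best = int(np.argmin(rmse))
    print(f"best interval: {best} (columns {best*width}-{(best+1)*width-1}), RMSECV={rmse[best]:.4f}")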
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
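A hedged sketch of schedule-value generation, using NumPy rather than the Excel macros described in the article; the mean values and the particular variable-interval recipe below are illustrative assumptions, not the authors' procedure.

    # Generating reinforcement-schedule values. Random schedules hold reinforcement
    # probability constant per response (ratio) or per unit time (interval).
    import numpy as np

    rng = np.random.default_rng(42)

    def random_ratio(mean_ratio, n):
        # Geometric draws: each response reinforced with probability 1/mean_ratio.
        return rng.geometric(1.0 / mean_ratio, size=n)

    def random_interval(mean_seconds, n):
        # Exponential draws: constant probability of reinforcement per unit time.
        return rng.exponential(mean_seconds, size=n)

    def variable_interval(mean_seconds, n):
        # One simple variable-interval recipe: shuffle values spread around the mean.
        values = np.linspace(0.1 * mean_seconds, 1.9 * mean_seconds, n)
        rng.shuffle(values)
        return values

    print(random_ratio(10, 5))        # e.g. RR 10 schedule values
    print(random_interval(30, 5))     # e.g. RI 30-s schedule values
    print(variable_interval(30, 5))   # e.g. VI 30-s schedule values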
Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried
2013-10-01
To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques, mono- and bi-exponential [T2m, short T2 component (T2s) and long T2 component (T2l)]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means for both T2m and T2s were statistically significantly different between patients and volunteers; however, for T2s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative method to UTE sequences with some added benefits, such as a short imaging time along with relatively high resolution and minimised blurring artefacts, and minimised susceptibility artefacts and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable echo time sequences successfully used instead of ultrashort echo time sequences.
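The mono- versus bi-exponential comparison amounts to an ordinary nonlinear fit of the echo-time signal. The echo times, noise level and starting values below are assumptions for the example, not parameters of the vTE protocol.

    # Mono- vs bi-exponential T2 fitting with SciPy on a synthetic decay curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def mono(te, s0, t2):
        return s0 * np.exp(-te / t2)

    def bi(te, s0, frac_s, t2s, t2l):
        return s0 * (frac_s * np.exp(-te / t2s) + (1 - frac_s) * np.exp(-te / t2l))

    te = np.linspace(1, 40, 20)                          # 20 echo times (ms), illustrative
    signal = bi(te, 100.0, 0.7, 1.5, 25.0) + np.random.default_rng(1).normal(0, 0.5, te.size)

    p_mono, _ = curve_fit(mono, te, signal, p0=[100, 10])
    p_bi, _ = curve_fit(bi, te, signal, p0=[100, 0.5, 2, 20],
                        bounds=([0, 0, 0.1, 5], [np.inf, 1, 5, 100]))
    print("mono-exponential T2m:", p_mono[1])
    print("bi-exponential T2s, T2l:", p_bi[2], p_bi[3])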
Trumble, Troy N; Billinghurst, R Clark; McIlwraith, C Wayne
2004-09-01
To evaluate the temporal pattern of prostaglandin (PG) E2 concentrations in synovial fluid after transection of the cranial cruciate ligament (CCL) in dogs and to correlate PGE2 concentrations with ground reaction forces and subjective clinical variables for lameness or pain. 19 purpose-bred adult male Walker Hounds. Force plate measurements, subjective clinical analysis of pain or lameness, and samples of synovial fluid were obtained before (baseline) and at various time points after arthroscopic transection of the right CCL. Concentrations of PGE2 were measured in synovial fluid samples, and the PGE2 concentrations were correlated with ground reaction forces and clinical variables. The PGE2 concentration increased significantly above the baseline value throughout the entire study, peaking 14 days after transection. Peak vertical force and vertical impulse significantly decreased by day 14 after transection, followed by an increase over time without returning to baseline values. All clinical variables (eg, lameness, degree of weight bearing, joint extension, cumulative pain score, effusion score, and total protein content of synovial fluid, except for WBC count in synovial fluid) increased significantly above baseline values. Significant negative correlations were detected between PGE2 concentrations and peak vertical force (r = -0.5720) and vertical impulse (r = -0.4618), and significant positive correlations were detected between PGE2 concentrations and the subjective lameness score (r = 0.5016) and effusion score (r = 0.6817). Assessment of the acute inflammatory process by measurement of PGE2 concentrations in synovial fluid may be correlated with the amount of pain or lameness in dogs.
NASA Astrophysics Data System (ADS)
Tedesco, M.; Datta, R.; Fettweis, X.; Agosta, C.
2015-12-01
Surface-layer snow density is important to processes contributing to surface mass balance, but is highly variable over Antarctica due to a wide range of near-surface climate conditions over the continent. Formulations for fresh snow density have typically either used fixed values or been modeled empirically using field data that is limited to specific seasons or regions. There is also currently limited work exploring how the sensitivity to fresh snow density in regional climate models varies with resolution. Here, we present a new formulation compiled from (a) over 1600 distinct density profiles from multiple sources across Antarctica and (b) near-surface variables from the regional climate model Modèle Atmosphérique Régionale (MAR). Observed values represent coastal areas as well as the plateau, in both West and East Antarctica (although East Antarctica is dominant). However, no measurements are included from the Antarctic Peninsula, which is both highly topographically variable and extends to lower latitudes than the remainder of the continent. In order to assess the applicability of this fresh snow density formulation to the Antarctic Peninsula at high resolutions, a version of MAR is run for several years both at low-resolution at the continental scale and at a high resolution for the Antarctic Peninsula alone. This setup is run both with and without the new fresh density formulation to quantify the sensitivity of the energy balance and SMB components to fresh snow density. Outputs are compared with near-surface atmospheric variables available from AWS stations (provided by the University of Wisconsin Madison) as well as net accumulation values from the SAMBA database (provided from the Laboratoire de Glaciologie et Géophysique de l'Environnement).
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.
2013-12-01
Sampling limitations and current modeling capacity justify the common use of mean temperature values in summaries of historical climate and future projections. However, a monthly mean temperature representing a 1-km2 area on the landscape is often unable to capture the climate complexity driving organismal and ecological processes. Estimates of variability in addition to mean values are more biologically meaningful and have been shown to improve projections of range shifts for certain species. Historical analyses of variance and extreme events at coarse spatial scales, as well as coarse-scale projections, show increasing temporal variability in temperature with warmer means. Few studies have considered how spatial variance changes with warming, and analysis for both temporal and spatial variability across scales is lacking. It is unclear how the spatial variability of fine-scale conditions relevant to plant and animal individuals may change given warmer coarse-scale mean values. A change in spatial variability will affect the availability of suitable habitat on the landscape and thus, will influence future species ranges. By characterizing variability across both temporal and spatial scales, we can account for potential bias in species range projections that use coarse climate data and enable improvements to current models. In this study, we use temperature data at multiple spatial and temporal scales to characterize spatial and temporal variability under a warmer climate, i.e., increased mean temperatures. Observational data from the Sierra Nevada (California, USA), experimental climate manipulation data from the eastern and western slopes of the Rocky Mountains (Colorado, USA), projected CMIP5 data for California (USA) and observed PRISM data (USA) allow us to compare characteristics of a mean-variance relationship across spatial scales ranging from sub-meter2 to 10,000 km2 and across temporal scales ranging from hours to decades. Preliminary spatial analysis at fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.
Stanley J. Zarnoch; H. Ken Cordell; Carter J. Betz; John C. Bergstrom
2010-01-01
Multiple imputation is used to create values for missing family income data in the National Survey on Recreation and the Environment. We present an overview of the survey and a description of the missingness pattern for family income and other key variables. We create a logistic model for the multiple imputation process and use it to impute data sets for family income. We...
ERIC Educational Resources Information Center
Mirjalili, Seyyed Mohammad Ali; Abari, Ahmad Ali Foroughi; Gholizadeh, Azar; Yarmohammadian, M. Hossein
2016-01-01
Life in a society requires the acceptance of certain restrictions and rules applied to individuals by the community and other social organizations. It can therefore be said that socialization is the process by which citizens learn the values, beliefs and behavior standards that their social environment expects of them. In order to,…
Matthew P. Peters; Louis R. Iverson; Anantha M. Prasad; Steve N. Matthews
2013-01-01
Fine-scale soil (SSURGO) data were processed at the county level for 37 states within the eastern United States, initially for use as predictor variables in a species distribution model called DISTRIB II. Values from county polygon files converted into a continuous 30-m raster grid were aggregated to 4-km cells and integrated with other environmental and site condition...
Newgard, Craig; Malveau, Susan; Staudenmayer, Kristan; Wang, N. Ewen; Hsia, Renee Y.; Mann, N. Clay; Holmes, James F.; Kuppermann, Nathan; Haukoos, Jason S.; Bulger, Eileen M.; Dai, Mengtao; Cook, Lawrence J.
2012-01-01
Objectives: The objective was to evaluate the process of using existing data sources, probabilistic linkage, and multiple imputation to create large population-based injury databases matched to outcomes. Methods: This was a retrospective cohort study of injured children and adults transported by 94 emergency medical systems (EMS) agencies to 122 hospitals in seven regions of the western United States over a 36-month period (2006 to 2008). All injured patients evaluated by EMS personnel within specific geographic catchment areas were included, regardless of field disposition or outcome. The authors performed probabilistic linkage of EMS records to four hospital and postdischarge data sources (emergency department [ED] data, patient discharge data, trauma registries, and vital statistics files) and then handled missing values using multiple imputation. The authors compare and evaluate matched records, match rates (proportion of matches among eligible patients), and injury outcomes within and across sites. Results: There were 381,719 injured patients evaluated by EMS personnel in the seven regions. Among transported patients, match rates ranged from 14.9% to 87.5% and were directly affected by the availability of hospital data sources and the proportion of missing values for key linkage variables. For vital statistics records (1-year mortality), estimated match rates ranged from 88.0% to 98.7%. Use of multiple imputation (compared to complete case analysis) reduced bias for injury outcomes, although sample size, percentage missing, type of variable, and combined-site versus single-site imputation models all affected the resulting estimates and variance. Conclusions: This project demonstrates the feasibility and describes the process of constructing population-based injury databases across multiple phases of care using existing data sources and commonly available analytic methods. Attention to key linkage variables and decisions for handling missing values can be used to increase match rates between data sources, minimize bias, and preserve sampling design.
NASA Astrophysics Data System (ADS)
Lyssenko, Nikita; Martínez-Espiñeira, Roberto
2012-11-01
Endogeneity bias arises in contingent valuation studies when the error term in the willingness to pay (WTP) equation is correlated with explanatory variables because observable and unobservable characteristics of the respondents affect both their WTP and the value of those variables. We correct for the endogeneity of variables that capture previous experience with the resource valued, humpback whales, and with the geographic area of study. We consider several endogenous behavioral variables. Therefore, we apply a multivariate Probit approach to jointly model them with WTP. In this case, correcting for endogeneity increases econometric efficiency and substantially corrects the bias affecting the estimated coefficients of the experience variables, by isolating the decreasing effect on option value caused by having already experienced the resource. Stark differences are unveiled between the marginal effects on WTP of previous experience of the resource in an alternative location versus experience in the location studied, Newfoundland and Labrador (Canada).
Navigating a Mobile Robot Across Terrain Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Howard, Ayanna; Bon, Bruce
2003-01-01
A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
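A minimal sketch of the kind of fuzzy IF-THEN evaluation described above, assuming triangular membership functions, normalized terrain inputs and made-up rule consequents; the actual navigation strategy uses a richer rule base and traversability measure.

    # Two-rule fuzzy controller sketch: fuzzify terrain inputs, fire the rules,
    # and defuzzify to a speed command.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0, 1))

    def traversability_speed(roughness, slope):
        # Fuzzify the (normalized, 0-1) terrain measurements.
        rough_low, rough_high = tri(roughness, -0.5, 0.0, 0.6), tri(roughness, 0.4, 1.0, 1.5)
        slope_low, slope_high = tri(slope, -0.5, 0.0, 0.6), tri(slope, 0.4, 1.0, 1.5)

        # IF roughness is LOW AND slope is LOW  THEN speed is FAST (1.0)
        # IF roughness is HIGH OR slope is HIGH THEN speed is SLOW (0.2)
        w_fast = min(rough_low, slope_low)
        w_slow = max(rough_high, slope_high)

        # Weighted-average defuzzification of the speed command.
        return (w_fast * 1.0 + w_slow * 0.2) / (w_fast + w_slow + 1e-9)

    print(traversability_speed(roughness=0.2, slope=0.1))   # mostly smooth, flat terrain
    print(traversability_speed(roughness=0.9, slope=0.7))   # rough, steep terrain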
Brindle, Ryan C; Duggan, Katherine A; Cribbet, Matthew R; Kline, Christopher E; Krafty, Robert T; Thayer, Julian F; Mulukutla, Suresh R; Hall, Martica H
2018-04-01
Exaggerated cardiovascular reactivity to acute psychological stress has been associated with increased carotid intima-media thickness (IMT). However, interstudy variability in this relationship suggests the presence of moderating factors. The current study aimed to test the hypothesis that poor nocturnal sleep, defined as short total sleep time or low slow-wave sleep, would moderate the relationship between cardiovascular reactivity and IMT. Participants (N = 99, 65.7% female, age = 59.3 ± 9.3 years) completed a two-night laboratory sleep study and cardiovascular examination where sleep and IMT were measured. The multisource interference task was used to induce acute psychological stress, while systolic and diastolic blood pressure and heart rate were monitored. Moderation was tested using the PROCESS framework in SPSS. Slow-wave sleep significantly moderated the relationship between all cardiovascular stress reactivity variables and IMT (all p values for the interaction ≤ .048, all ΔR values for the interaction ≥ .027). Greater stress reactivity was associated with higher IMT values in the low slow-wave sleep group and lower IMT values in the high slow-wave sleep group. No moderating effects of total sleep time were observed. The results provide evidence that nocturnal slow-wave sleep moderates the relationship between cardiovascular stress reactivity and IMT and may buffer the effect of daytime stress-related disease processes.
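A moderation test of this kind reduces to a regression with an interaction term. The sketch below, with simulated data and hypothetical column names, mirrors the PROCESS-style analysis in Python rather than SPSS.

    # Moderation (interaction) sketch: IMT regressed on stress reactivity,
    # slow-wave sleep and their product term.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 99
    df = pd.DataFrame({
        "reactivity": rng.normal(size=n),   # cardiovascular stress reactivity (z-scored)
        "sws": rng.normal(size=n),          # slow-wave sleep (z-scored)
    })
    df["imt"] = 0.7 + 0.05 * df.reactivity - 0.03 * df.reactivity * df.sws + rng.normal(0, 0.05, n)

    model = smf.ols("imt ~ reactivity * sws", data=df).fit()
    print(model.summary().tables[1])   # the reactivity:sws row is the moderation term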
A comparative study of monoclonal antibodies. 1. Phase behavior and protein-protein interactions
Lewus, Rachael A.; Levy, Nicholas E.; Lenhoff, Abraham M.; Sandler, Stanley I.
2018-01-01
Protein phase behavior is involved in numerous aspects of downstream processing, either by design as in crystallization or precipitation processes, or as an undesired effect, such as aggregation. This work explores the phase behavior of eight monoclonal antibodies (mAbs) that exhibit liquid-liquid separation, aggregation, gelation, and crystallization. The phase behavior has been studied systematically as a function of a number of factors, including solution composition and pH, in order to explore the degree of variability among different antibodies. Comparisons of the locations of phase boundaries show consistent trends as a function of solution composition; however, changing the solution pH has different effects on each of the antibodies studied. Furthermore, the types of dense phases formed varied among the antibodies. Protein-protein interactions, as reflected by values of the osmotic second virial coefficient, are used to correlate the phase behavior. The primary findings are that values of the osmotic second virial coefficient are useful for correlating phase boundary locations, though there is appreciable variability among the antibodies in the apparent strengths of the intrinsic protein-protein attraction manifested. However, the osmotic second virial coefficient does not provide a clear basis to predict the type of dense phase likely to result under a given set of solution conditions. PMID:25378269
Asymptotic Equivalence of Probability Measures and Stochastic Processes
NASA Astrophysics Data System (ADS)
Touchette, Hugo
2018-03-01
Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
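Schematically, and without quoting the paper's exact statement, the kind of sufficient condition involved can be illustrated by requiring the Radon-Nikodym derivative to be sub-exponential in n,

    \lim_{n \to \infty} \frac{1}{n} \ln \frac{dP_n}{dQ_n} = 0

in an appropriate (for example, in-probability) sense, so that P_n and Q_n differ by at most sub-exponential factors and therefore yield the same typical values of M_n as n → ∞.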
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with only a few variables. This paper summarizes the parametric methods in common use and briefly introduces their principles. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since the parametric methods differ in their characteristics, a real design process needs a flexible choice among them to suit the subsequent optimization procedure.
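The Singular Value Decomposition idea mentioned above can be sketched briefly: stack a library of airfoil coordinate vectors, extract the dominant shape modes, and describe new shapes with a few mode weights. The library below is synthetic and the mode count arbitrary.

    # SVD-based shape parameterization sketch on a synthetic airfoil library.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 101)                      # chordwise stations
    base = 0.6 * np.sqrt(x) * (1 - x)                   # crude baseline thickness shape
    library = np.array([base * (1 + 0.3 * rng.normal()) +
                        0.02 * rng.normal() * np.sin(np.pi * x) for _ in range(50)])

    # Subtract the mean shape and take the SVD; rows of Vt are shape modes.
    mean_shape = library.mean(axis=0)
    U, s, Vt = np.linalg.svd(library - mean_shape, full_matrices=False)

    k = 3                                               # keep a few dominant modes
    weights = rng.normal(size=k) * s[:k] / np.sqrt(len(library))
    new_airfoil = mean_shape + weights @ Vt[:k]         # a new shape from k design variables
    print("captured variance fraction:", (s[:k] ** 2).sum() / (s ** 2).sum())
    print("new airfoil max thickness:", new_airfoil.max())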
NASA Astrophysics Data System (ADS)
Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.
2010-01-01
Subjective image quality data for 9 image processing pipes and 8 image contents (taken with mobile phone camera, 72 natural scene test images altogether) from 14 test subjects were collected. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes to each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding, average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.
Rethinking fast and slow based on a critique of reaction-time reverse inference.
Krajbich, Ian; Bartling, Björn; Hare, Todd; Fehr, Ernst
2015-07-02
Do people intuitively favour certain actions over others? In some dual-process research, reaction-time (RT) data have been used to infer that certain choices are intuitive. However, the use of behavioural or biological measures to infer mental function, popularly known as 'reverse inference', is problematic because it does not take into account other sources of variability in the data, such as discriminability of the choice options. Here we use two example data sets obtained from value-based choice experiments to demonstrate that, after controlling for discriminability (that is, strength-of-preference), there is no evidence that one type of choice is systematically faster than the other. Moreover, using specific variations of a prominent value-based choice experiment, we are able to predictably replicate, eliminate or reverse previously reported correlations between RT and selfishness. Thus, our findings shed crucial light on the use of RT in inferring mental processes and strongly caution against using RT differences as evidence favouring dual-process accounts.
Muñoz, Antonio Jesús; Espínola, Francisco; Moya, Manuel; Ruiz, Encarnación
2015-01-01
Lead biosorption by Klebsiella sp. 3S1 isolated from a wastewater treatment plant was investigated through a Rotatable Central Composite Experimental Design. The optimisation study indicated the following optimal values of operating variables: 0.4 g/L of biosorbent dosage, pH 5, and 34°C. According to the results of the kinetic studies, the biosorption process can be described by a two-step process, one rapid, almost instantaneous, and one slower, both contributing significantly to the overall biosorption; the model that best fits the experimental results was pseudo-second order. The equilibrium studies showed a maximum lead uptake value of 140.19 mg/g according to the Langmuir model. The mechanism study revealed that lead ions were bioaccumulated into the cytoplasm and adsorbed on the cell surface. The bacterium Klebsiella sp. 3S1 has a good potential in the bioremoval of lead in an inexpensive and effective process. PMID:26504824
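For reference, the two models named in this abstract are usually written in the following standard textbook forms (the notation here may differ from the authors' symbols): the Langmuir isotherm

    q_e = \frac{q_{max} K_L C_e}{1 + K_L C_e}

with q_max the maximum uptake (140.19 mg/g reported here), and the pseudo-second-order kinetic model

    \frac{dq_t}{dt} = k_2 (q_e - q_t)^2, \qquad \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e}

whose integrated, linearized form is commonly used to fit k_2 and q_e from the kinetic data.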
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about one day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are hypothesized, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations are done by ignoring the wafer photoresist model, and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.
2017-01-01
The concentrations of molecules can be changed by chemical reactions and thereby offer a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with a built-in memory because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
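The statement that a bimolecular step acts as a logic AND can be illustrated with a toy rate equation: product only accumulates when both inputs are present. The rate constant, concentrations and time scale below are arbitrary and are not meant to represent the DNAzyme cascade.

    # Reading A + B -> C as a logic AND: integrate the rate equation for the four
    # Boolean input combinations and inspect the product concentration.
    import numpy as np
    from scipy.integrate import odeint

    def rhs(y, t, k):
        a, b, c = y
        rate = k * a * b            # bimolecular step; zero if either input is absent
        return [-rate, -rate, rate]

    t = np.linspace(0, 10, 100)
    for a0, b0 in [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]:
        c_final = odeint(rhs, [a0, b0, 0.0], t, args=(1.0,))[-1, 2]
        print(f"inputs A={a0:.0f}, B={b0:.0f} -> output C={c_final:.2f}")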
Progress with lossy compression of data from the Community Earth System Model
NASA Astrophysics Data System (ADS)
Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.
2017-12-01
Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
NASA Astrophysics Data System (ADS)
Krofcheck, D. J.; Morillas, L.; Litvak, M. E.
2014-12-01
Drylands and semi-arid ecosystems cover over 45% of the global landmass. These biomes have been shown to be extremely sensitive to changes in climate, specifically decreases in precipitation and increases in air temperature. Therefore, inter-annual variability in climate has the potential to dramatically impact the carbon budget at regional and global scales. In the Southwestern US, we are in a unique position to investigate these relationships by leveraging eight years of data from the New Mexico Elevation Gradient (NMEG), eight flux towers that span six representative biomes across the semi-arid Southwest. From C4 desert grasslands to subalpine mixed conifer forests, the NMEG flux towers use identical instruments and processing, and afford a unique opportunity to explore patterns in biome-specific ecosystem processes and climate sensitivity. Over the last eight years the gradient has experienced climatic variability that spans from wet years to an episodic megadrought. Here we report the effects of this extreme inter-annual variability in climate on the ability of semi-arid ecosystems to cycle and store energy and carbon. We also investigated biome-specific patterns of ecosystem light and water use efficiency during a series of wet and dry years, and how these vary in response to air temperature, vapor pressure deficit, evaporative fraction, and precipitation. Our initial results suggest that significant drought reduced the maximum ecosystem assimilation of carbon most at the C4 grasslands, creosote shrublands, juniper savannas, and ponderosa pine forests, with 60%, 50%, 35%, and 50% reductions, respectively, relative to a wet year. Ecosystem light use efficiency tends to show the highest maximum values at the low elevation sites as a function of water availability, with the highest annual values consistently at the middle elevation and ponderosa pine sites. Water use efficiency was strongly biome dependent, with the middle elevation sites showing the highest efficiencies and the greatest within-year variability at the lower elevation sites, with strong sensitivities to vapor pressure deficit. By quantifying the biome-specific ecosystem processes and functional responses, this network provides valuable insight about how vulnerable this range of semi-arid ecosystems is to future climate scenarios.
Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Less, David; Jin, Xuemin; Rynes, Peter
2016-05-01
Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved "apparent" values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in a HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.
Evaluation of Dental Shade Guide Variability Using Cross-Polarized Photography.
Gurrea, Jon; Gurrea, Marta; Bruguera, August; Sampaio, Camila S; Janal, Malvin; Bonfante, Estevam; Coelho, Paulo G; Hirata, Ronaldo
2016-01-01
This study evaluated color variability in the A hue between the VITA Classical (VITA Zahnfabrik) shade guide and four other VITA-coded ceramic shade guides using a Canon EOS 60D camera and software (Photoshop CC, Adobe). A total of 125 photographs were taken, 5 per shade tab for each of 5 shades (A1 to A4) from the following shade guides: VITA Classical (control), IPS e.max Ceram (Ivoclar Vivadent), IPS d.SIGN (Ivoclar Vivadent), Initial ZI (GC), and Creation CC (Creation Willi Geller). Photos were processed with Adobe Photoshop CC to allow standardized evaluation of hue, chroma, and value between shade tabs. None of the VITA-coded shade tabs fully matched the VITA Classical shade tab for hue, chroma, or value. The VITA-coded shade guides evaluated herein showed an overall unmatched shade in all tabs when compared with the control, suggesting that shade selection should be made using the guide produced by the manufacturer of the ceramic intended for the final restoration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Q; Xie, S
This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variations at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interest. The upcoming megasite at SGP with its enhanced spatial density will facilitate studies at even finer scales. Currently, however, data are reported only at individual site locations at different time resolutions for different datastreams. It is difficult for users to locate all the data they need, and extra effort is required to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate the data application.
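The gridding step can be pictured with a small interpolation sketch: scattered site values are mapped onto a regular latitude-longitude grid. The site coordinates, the example variable and the linear interpolation scheme are assumptions for illustration; the actual product's merging and interpolation choices may differ.

    # Interpolating scattered station values to a regular 2-D grid with SciPy.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    lon = rng.uniform(-99.5, -95.5, 20)          # 20 hypothetical SGP site longitudes
    lat = rng.uniform(34.5, 38.5, 20)            # and latitudes
    temp = 25 + 0.5 * (lat - 36) + rng.normal(0, 0.3, 20)   # a mock surface temperature field

    grid_lon, grid_lat = np.meshgrid(np.linspace(-99.5, -95.5, 40),
                                     np.linspace(34.5, 38.5, 40))
    gridded = griddata((lon, lat), temp, (grid_lon, grid_lat), method="linear")
    print(gridded.shape, np.nanmean(gridded))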
Modeling temperature variations in a pilot plant thermophilic anaerobic digester.
Valle-Guadarrama, Salvador; Espinosa-Solares, Teodoro; López-Cruz, Irineo L; Domaschko, Max
2011-05-01
A model that predicts temperature changes in a pilot plant thermophilic anaerobic digester was developed based on fundamental thermodynamic laws. The methodology utilized two simulation strategies. In the first, model equations were solved through a searching routine based on a minimal square optimization criterion, from which the overall heat transfer coefficient values, for both biodigester and heat exchanger, were determined. In the second, the simulation was performed with variable values of these overall coefficients. The prediction with both strategies allowed reproducing experimental data within 5% of the temperature span permitted in the equipment by the system control, which validated the model. The temperature variation was affected by the heterogeneity of the feeding and extraction processes, by the heterogeneity of the digestate recirculation through the heating system and by the lack of a perfect mixing inside the biodigester tank. The use of variable overall heat transfer coefficients improved the temperature change prediction and reduced the effect of a non-ideal performance of the pilot plant modeled.
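The kind of lumped energy balance such a model is built on can be sketched as a single differential equation for tank temperature, with heat input from the heating loop and losses through the wall via an overall heat transfer coefficient. All parameter values below (U·A, volume, heater duty, set point) are invented for the illustration, not taken from the pilot plant.

    # Lumped digester energy balance integrated with a simple Euler loop.
    rho, cp, V = 1000.0, 4186.0, 5.0        # water-like digestate: kg/m3, J/(kg K), m3
    UA_wall = 15.0                          # overall loss coefficient x area, W/K
    T_amb, Q_heat = 20.0, 3000.0            # ambient temperature (C), heater duty (W)

    dt = 60.0                               # 1-minute time steps
    T = [50.0]                              # initial tank temperature (C)
    for _ in range(3 * 24 * 60):            # simulate three days
        heater_on = Q_heat if T[-1] < 55.0 else 0.0          # crude on/off control at 55 C
        T.append(T[-1] + dt * (heater_on - UA_wall * (T[-1] - T_amb)) / (rho * cp * V))

    print("temperature span over the run: %.2f-%.2f C" % (min(T), max(T)))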
Players' perceptions of accountability factors in secondary school sports settings.
Hastie, P A
1993-06-01
The purpose of this study was to gauge the extent to which students believed that the accountability strategies employed by their coaches had significant effects on their involvement in sports training sessions. Questionnaire data from 235 secondary school athletes were analyzed using linear structural relations to test a model of accountability hypothesized as operating in these coaching settings. The accountability strategy of active instruction was found to be a variable that significantly affected the students' valuing of their coaches as well as their task involvement. However, the rewards/consequences variable was not found to be a predictor of valuing or task involvement, suggesting that these athletes seemed more task oriented than reliant on external sanctions. The results of this study can only be generalized to team sport settings. Detailed examination needs to be made of the processes through which accountability factors operate for other contexts, including individual sports and competitive levels. Further research could also be undertaken into gender differences, especially in relation to the gender of coaches.
Dynamic assessment of water quality based on a variable fuzzy pattern recognition model.
Xu, Shiguo; Wang, Tianxiang; Hu, Suduan
2015-02-16
Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dailan, China). The results show that the water quality level is between levels 2 and 3 and worse in August or September, caused by the increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results.
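The entropy-weight step mentioned above can be illustrated in a few lines: indicators whose values vary more across sampling occasions carry more information and therefore receive larger weights. The indicator matrix below is a placeholder, not Biliuhe Reservoir data.

    # Entropy-weight (EW) method sketch: rows = sampling dates, columns = indicators.
    import numpy as np

    X = np.array([[0.30, 6.2, 2.1],
                  [0.45, 6.8, 2.0],
                  [0.60, 7.5, 2.3],
                  [0.45, 7.0, 2.2]])          # illustrative indicator values

    P = X / X.sum(axis=0)                     # normalize each indicator column
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P + 1e-12)).sum(axis=0)
    weights = (1 - entropy) / (1 - entropy).sum()
    print("entropy weights:", np.round(weights, 3))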
Jerosch, J; Castro, W H; Assheuer, J
1992-09-01
In 4 fresh specimens and in 14 healthy volunteers we studied normal anatomy of the glenoid labrum by MRI. In a total of 124 patients we examined the shoulder joints by MRI. 69 patients had any kind of subacromial pathology. 55 patients showed a glenohumeral instability. All MRI findings were compared with the surgical findings during arthroscopy and during open surgery. 44 patients showed a recurrent anterior instability, 7 patients showed a multidirectional instability, 2 patients showed a posterior instability, and 2 patients presented acute anterior dislocation. We found significant variability in the labral shape as well as significant variability of anterior capsular attachment. The pathologic changes of the glenoid labrum were classified in four different types. In 78% we found a concomitant Hill-Sachs lesion of various diameter. 5 patients suffered from an additional complete rotator cuff tear. Compared to the intraoperative findings MRI had a sensitivity of 95%, a specificity of 94%, an accuracy of 94%, a positive predictive value of 91%, and a negative predictive value of 96% in detecting labral pathology. Presenting a high diagnostic value for detecting Bankart lesions, MRI may replace other diagnostic modalities like CT-arthrography.
NASA Astrophysics Data System (ADS)
Zhu, Yanli; Chen, Haiqiang
2017-05-01
In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric in these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their view of the health of the general economy.
Evolution of evaluation criteria in the College of American Pathologists Surveys.
Ross, J W
1988-04-01
This review of the evolution of evaluation criteria in the College of American Pathologists Survey and of the theoretical grounds proposed for evaluation criteria explores the complex nature of the evaluation process. Survey professionals balance multiple variables to seek relevant and meaningful evaluations. These include the state of the art, the reliability of target values, the nature of available control materials, the perceived medical "nonusefulness" of the extremes of performance (good or poor), the extent of laboratory services provided, and the availability of scientific data and theory by which clinically relevant criteria of medical usefulness may be established. The evaluation process has consistently sought peer consensus, to stimulate improvement in the state of the art, to increase medical usefulness, and to monitor the state of the art. Recent factors that are likely to promote change from peer-group evaluation to fixed-criteria evaluation are the high degree of proficiency in the state of the art for many analytes, accurate target values, increased knowledge of biologic variation, and the availability of statistical modeling techniques simulating biologic and diagnostic processes as well as analytic processes.
Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.
2008-10-01
A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
Continuous-cyclic variations in the b-value of the earthquake frequency-magnitude distribution
NASA Astrophysics Data System (ADS)
El-Isa, Z. H.
2013-10-01
Seismicity of the Earth (M ≥ 4.5) was compiled from the NEIC, IRIS and ISC catalogues and used to compute the b-value based on various time windows. It is found that continuous cyclic b-variations occur on both long and short time scales, the latter being of much higher amplitude and sometimes in excess of 0.7 of the absolute b-value. These variations occur not only yearly or monthly, but also daily. Before the occurrence of large earthquakes, b-values start increasing with variable gradients that are affected by foreshocks. In some cases, the gradient is reduced to zero or to a negative value a few days before the earthquake occurrence. In general, calculated b-values attain maxima 1 day before large earthquakes and minima soon after their occurrence. Both the linear regression and maximum likelihood methods give correlatable, but variable results. It is found that an expanding time window technique from a fixed starting point is more effective in the study of b-variations. The calculated b-variations for the whole Earth, its hemispheres, quadrants and the epicentral regions of some large earthquakes are of both local and regional character, which may indicate that in such cases the geodynamic processes acting within a certain region have an effect of regional extent within the Earth. The b-value has long been known to vary with a number of local and regional factors, including tectonic stresses. The results reported here indicate that geotectonic stress remains the most significant factor controlling b-variations. It is found that for earthquakes with Mw ≥ 7, an increase of about 0.20 in the b-value implies a stress increase that will result in an earthquake with a magnitude one unit higher.
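As a rough illustration of the maximum-likelihood b-value estimate computed over an expanding time window, the sketch below uses a synthetic Gutenberg-Richter catalogue; the completeness magnitude, event rate and window lengths are invented, and the half-bin correction used for binned catalogues is only noted in a comment, so this is not a reproduction of the study's processing.

    import numpy as np

    def b_value_ml(mags, m_c):
        """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c (continuous
        magnitudes; binned catalogues usually subtract half a bin width from m_c)."""
        m = mags[mags >= m_c]
        if m.size < 2:
            return np.nan
        return np.log10(np.e) / (m.mean() - m_c)

    rng = np.random.default_rng(0)
    m_c = 4.5
    # Synthetic catalogue obeying a Gutenberg-Richter law with b close to 1.
    mags = m_c + rng.exponential(scale=np.log10(np.e), size=5000)
    times = np.sort(rng.uniform(0.0, 10.0, size=mags.size))  # event times in years

    # Expanding time window from a fixed starting point, as in the study design.
    for t_end in range(1, 11):
        window = mags[times <= t_end]
        print(f"t <= {t_end:2d} yr  N = {window.size:5d}  b = {b_value_ml(window, m_c):.2f}")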
High-precision 41K/39K measurements by MC-ICP-MS indicate terrestrial variability of δ41K
Morgan, Leah; Santiago Ramos, Danielle P.; Davidheiser-Kroll, Brett; Faithfull, John; Lloyd, Nicholas S.; Ellam, Rob M.; Higgins, John A.
2018-01-01
Potassium is a major component in continental crust, the fourth-most abundant cation in seawater, and a key element in biological processes. Until recently, difficulties with existing analytical techniques hindered our ability to identify natural isotopic variability of potassium isotopes in terrestrial materials. However, measurement precision has greatly improved and a range of K isotopic compositions has now been demonstrated in natural samples. In this study, we present a new technique for high-precision measurement of K isotopic ratios using high-resolution, cold plasma multi-collector mass spectrometry. We apply this technique to demonstrate natural variability in the ratio of 41K to 39K in a diverse group of geological and biological samples, including silicate and evaporite minerals, seawater, and plant and animal tissues. The total range in 41K/39K ratios is ca. 2.6‰, with a long-term external reproducibility of 0.17‰ (2, N=108). Seawater and seawater-derived evaporite minerals are systematically enriched in 41K compared to silicate minerals by ca. 0.6‰, a result consistent with recent findings [1, 2]. Although our average bulk-silicate Earth value (-0.54‰) is indistinguishable from previously published values, we find systematic δ41K variability in some high-temperature sample suites, particularly those with evidence for the presence of fluids. The δ41K values of biological samples span a range of ca. 1.2‰ between terrestrial mammals, plants, and marine organisms. Implications of terrestrial K isotope variability for the atomic weight of K and K-based geochronology are discussed. Our results indicate that high-precision measurements of stable K isotopes, made using commercially available mass spectrometers, can provide unique insights into the chemistry of potassium in geological and biological systems.
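The δ41K values quoted above follow standard per-mil delta notation relative to a reference ratio. A minimal sketch is given below; the reference 41K/39K ratio used here is a placeholder for illustration, not the actual value of the reporting standard.

    def delta_per_mil(r_sample, r_standard):
        """delta = (R_sample / R_standard - 1) * 1000, expressed in per mil."""
        return (r_sample / r_standard - 1.0) * 1000.0

    # Placeholder 41K/39K ratios; the reference ratio here is illustrative only.
    r_standard = 0.072
    r_sample = r_standard * (1.0 - 0.54e-3)   # a sample 0.54 per mil lighter than the standard
    print(round(delta_per_mil(r_sample, r_standard), 2))   # -> -0.54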
NASA Technical Reports Server (NTRS)
Graves, M. E.; King, R. L.; Brown, S. C.
1973-01-01
Extreme values, median values, and nine percentile values are tabulated for eight meteorological variables at Cape Kennedy, Florida and at Vandenberg Air Force Base, California. The variables are temperature, relative humidity, station pressure, water vapor pressure, water vapor mixing ratio, density, and enthalpy. For each month eight hours are tabulated, namely, 0100, 0400, 0700, 1000, 1300, 1600, 1900, and 2200 local time. These statistics are intended for general use for the space shuttle design trade-off analysis and are not to be used for specific design values.
Spatial heterogeneities and variability of karst hydro-system : insights from geophysics
NASA Astrophysics Data System (ADS)
Champollion, C.; Fores, B.; Lesparre, N.; Frederic, N.
2017-12-01
Heterogeneous systems such as karst or fractured hydro-systems are challenging for both scientists and groundwater resource management. Karst heterogeneities prevent the comparison, and moreover the combination, of data representative of different scales: borehole water levels, for example, can generally not be used directly to interpret spring flow dynamics. The spatial heterogeneity also has an impact on the temporal variability of groundwater transfer and storage. Karst hydro-systems have a characteristically non-linear relation between precipitation amount and discharge at the outlets, with threshold effects and a large variability of groundwater transit times. In the presentation, geophysical field experiments conducted in a karst hydro-system in the south of France are used to investigate groundwater transfer and storage variability at a scale of a few hundred meters. We focus on the added value of both geophysical time-lapse gravity experiments and 2D ERT imaging of the subsurface heterogeneities. Both gravity and ERT results can only be interpreted with large ambiguity or strong a priori assumptions: the relation between resistivity and water content is not unique, and almost no information about the processes can be inferred from the groundwater stock variations. The present study demonstrates how the ERT and gravity field experiments can be interpreted together in a coherent scheme with less ambiguity. First the geological and hydro-meteorological context is presented. Then the ERT field experiment, including the processing and the results, is detailed in the section about geophysical imaging of the heterogeneities. The gravity double-difference (S2D) time-lapse experiment is described in the section about geophysical monitoring of the temporal variability. The following discussion demonstrates the impact of both experiments on the interpretation in terms of processes and heterogeneities.
Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration
NASA Technical Reports Server (NTRS)
Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas
1996-01-01
Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one-sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test whether the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.
Mujalli, Randa Oqab; de Oña, Juan
2011-10-01
This study describes a method for reducing the number of variables frequently considered in modeling the severity of traffic accidents. The method's efficiency is assessed by constructing Bayesian networks (BN). It is based on a two-stage selection process. Several variable selection algorithms, commonly used in data mining, are applied in order to select subsets of variables. BNs are built using the selected subsets and their performance is compared with the original BN (with all the variables) using five indicators. The BNs that improve the indicators' values are further analyzed to identify the most significant variables (accident type, age, atmospheric factors, gender, lighting, number of injured, and occupant involved). A new BN is built using these variables, and the indicators show, in most cases, a statistically significant improvement with respect to the original BN. It is therefore possible to reduce the number of variables used to model traffic accident injury severity through BNs without reducing the performance of the model. The study provides safety analysts with a methodology that could be used to minimize the number of variables needed to determine efficiently the injury severity of traffic accidents without reducing the performance of the model. Copyright © 2011 Elsevier Ltd. All rights reserved.
Stability of wave processes in a rotating electrically conducting fluid
NASA Astrophysics Data System (ADS)
Peregudin, S. I.; Peregudina, E. S.; Kholodova, S. E.
2018-05-01
The paper puts forward a mathematical model of the dynamics of spatial large-scale motions in a rotating layer of electrically conducting incompressible perfect fluid of variable depth, with due account of dissipative effects. The resulting boundary-value problem is reduced to a vector system of partial differential equations for any value of the Reynolds number. Theoretical analysis of the analytical solution so obtained reveals the effect of magnetic field diffusion on the stability of the wave mode: when the external magnetic field is removed, diffusion of the magnetic field promotes damping of the wave mode. In addition, a stability criterion for the wave mode is obtained.
Simulating Local Area Network Protocols with the General Purpose Simulation System (GPSS)
1990-03-01
[Table-of-contents and list-of-figures fragments: frame generation and delivery, model artifices, model variables, simulation results, external procedures used in simulation; figures on the Token Ring frame generation process, the Token Ring frame delivery process, and Token Ring mean transfer delay vs. mean throughput. A truncated passage notes that parameter values assumed to be zero were replaced by the maximum values specified in the ANSI 802.3 standard (viz. &MI=6, &M2=3, &M3=17, &D1=18, &D2=3, &D4=4, &D7=3, ...).]
JIGSAW: Preference-directed, co-operative scheduling
NASA Technical Reports Server (NTRS)
Linden, Theodore A.; Gaw, David
1992-01-01
Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value and variable ordering decisions. The statistical projections also apply to abstract resources and time periods, allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.
Spatial pattern analysis of Cu, Zn and Ni and their interpretation in the Campania region (Italy)
NASA Astrophysics Data System (ADS)
Petrik, Attila; Albanese, Stefano; Jordan, Gyozo; Rolandi, Roberto; De Vivo, Benedetto
2017-04-01
The uniquely abundant Campanian topsoil dataset enabled us to perform a spatial pattern analysis on three potentially toxic elements, Cu, Zn and Ni. This study focuses on revealing the spatial texture and distribution of these elements by spatial point pattern and image processing analysis, such as lineament density and spatial variability index calculation. The application of these methods to geochemical data provides a new and efficient tool to understand the spatial variation of concentrations and their background/baseline values. The determination and quantification of spatial variability is crucial to understanding how fast the concentration changes in a certain area and what processes might govern the variation. The spatial variability index calculation and image processing analysis, including lineament density, enable us to delineate homogeneous areas and analyse them with respect to lithology and land use. Spatial outliers and their patterns were also investigated by local spatial autocorrelation and image processing analysis, including the determination of local minima and maxima points and singularity index analysis. The spatial variability of Cu and Zn reveals the highest zone (Cu: 0.5 MAD, Zn: 0.8-0.9 MAD, Median Deviation Index) along the coast between Campi Flegrei and the Sorrento Peninsula, with the vast majority of statistically identified outliers and high-high spatially clustered points. The background/baseline maps of Cu and Zn reveal a moderate- to high-variability (Cu: 0.3 MAD, Zn: 0.4-0.5 MAD) NW-SE oriented zone, including disrupted patches from Bisaccia to Mignano, following the alluvial plains of the Apennine rivers. This zone has a high abundance of anomalous concentrations identified using singularity analysis and also has a high density of lineaments. The spatial variability of Ni shows the highest-variability zone (0.6-0.7 MAD) around Campi Flegrei, where the majority of low outliers are concentrated. The background/baseline map of Ni shows an eastward shift of the highest-variability zones, which coincide with limestone outcrops. The highly segmented area between Mignano and Bisaccia partially follows the alluvial plains of the Apennine rivers, which seem to play a crucial role in the distribution and redistribution pattern of Cu, Zn and Ni in Campania. The high spatial variability zones of the latter elements are located in topsoils on volcanoclastic rocks and are mostly related to cultivated and urbanised areas.
On measures of association among genetic variables
Gianola, Daniel; Manfredi, Eduardo; Simianer, Henner
2012-01-01
Systems involving many variables are important in population and quantitative genetics, for example, in multi-trait prediction of breeding values and in exploration of multi-locus associations. We studied departures of the joint distribution of sets of genetic variables from independence. New measures of association based on notions of statistical distance between distributions are presented. These are more general than correlations, which are pairwise measures, and lack a clear interpretation beyond the bivariate normal distribution. Our measures are based on logarithmic (Kullback-Leibler) and on relative ‘distances’ between distributions. Indexes of association are developed and illustrated for quantitative genetics settings in which the joint distribution of the variables is either multivariate normal or multivariate-t, and we show how the indexes can be used to study linkage disequilibrium in a two-locus system with multiple alleles and present applications to systems of correlated beta distributions. Two multivariate beta and multivariate beta-binomial processes are examined, and new distributions are introduced: the GMS-Sarmanov multivariate beta and its beta-binomial counterpart. PMID:22742500
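For the multivariate normal case, the departure from independence described above has a simple closed form: the Kullback-Leibler divergence between the joint distribution and the product of its marginals depends only on the determinant of the correlation matrix. The sketch below illustrates this generic Gaussian result with an invented covariance matrix; it is not claimed to be the exact index definition used by the authors.

    import numpy as np

    def gaussian_dependence_index(cov):
        """Kullback-Leibler divergence between a multivariate normal N(mu, cov)
        and the product of its marginals N(mu, diag(cov)); this reduces to
        -0.5 * log(det(R)) with R the correlation matrix, and is zero only
        when the variables are mutually uncorrelated."""
        sd = np.sqrt(np.diag(cov))
        corr = cov / np.outer(sd, sd)
        sign, logdet = np.linalg.slogdet(corr)
        return -0.5 * logdet

    # Hypothetical covariance matrix for three genetic variables.
    cov = np.array([[1.0, 0.6, 0.2],
                    [0.6, 1.5, 0.4],
                    [0.2, 0.4, 0.8]])
    print(round(gaussian_dependence_index(cov), 3))   # > 0; grows with stronger dependence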
NASA Astrophysics Data System (ADS)
Li, Yu; Giuliani, Matteo; Castelletti, Andrea
2016-04-01
Recent advances in the modelling of coupled ocean-atmosphere dynamics have significantly improved the skill of long-term climate forecasts from global circulation models (GCMs). These more accurate weather predictions are expected to be a valuable support to farmers in optimizing farming operations (e.g. crop choice, cropping and watering time) and in coping more effectively with the adverse impacts of climate variability. Yet assessing how valuable this information actually is to a farmer is not straightforward, and farmers' responses must be taken into consideration. Indeed, in the context of agricultural systems, potentially useful forecast information should alter stakeholders' expectations, modify their decisions, and ultimately produce an impact on their performance. Nevertheless, long-term forecasts are mostly evaluated in terms of accuracy (i.e., forecast quality) by comparing hindcast and observed values, and only a few studies have investigated the operational value of forecasts by looking at the gain of utility within the decision-making context, e.g. by considering derivatives of the forecast information, such as simulated crop yields or simulated soil moisture, which are essential to farmers' decision-making. In this study, we take a step further in the assessment of the operational value of long-term weather forecast products by embedding the latter into farmers' behavioral models. This allows a more critical assessment of the forecast value mediated by the end-users' perspective, including farmers' risk attitudes and behavioral patterns. Specifically, we evaluate the operational value of thirteen state-of-the-art long-range forecast products against climatology forecasts and empirical predictions (i.e. past-year climate and historical average) within an integrated agronomic modeling framework embedding an implicit model of the farmers' decision-making process. Raw ensemble datasets are bias-corrected and downscaled using a stochastic weather generator, in order to address the mismatch in spatio-temporal scale between forecast data from GCMs and our model. For each product, the experiment is composed of two cascading simulations: 1) an ex-ante simulation using forecast data, and 2) an ex-post simulation with observations. Multi-year simulations are performed to account for climate variability, and the operational value of the different forecast products is evaluated against perfect foresight on the basis of expected crop productivity as well as the final decisions under different decision-making criteria. Our results show that not all products generate beneficial effects on farmers' performance, and forecast errors may be amplified by farmers' decision-making processes and risk attitudes, yielding little gain or even worse performance compared with the empirical approaches.
Abundance of Chemical Elements in RR Lyrae Variables and their Kinematic Parameters
NASA Astrophysics Data System (ADS)
Gozha, M. L.; Marsakov, V. A.; Koval', V. V.
2018-03-01
A catalog of the chemical and spatial-kinematic parameters of 415 RR Lyrae variables (Lyrids) in the galactic field is compiled. Spectroscopic determinations of the relative abundances of 13 chemical elements in 101 of the RR Lyrae variables are collected from 25 papers published between 1995 and 2017. The data from different sources are reduced to a single solar abundance scale. The weighted mean chemical abundances are calculated with weights inversely proportional to the reported errors. An analysis of the deviations of the published relative abundances for each star from the mean square values calculated from them reveals an absence of systematic biases among the results from the various articles. The rectangular coordinates of 407 of the RR Lyrae variables and the components of the three-dimensional (3D) velocities of 401 of the stars are calculated using data from several sources. The collected data on the abundances of chemical elements produced by various nuclear fusion processes in the RR Lyrae variables of the field, as well as the calculated 3D velocities, can be used for studying the evolution of the Galaxy.
Statistical assessment of DNA extraction reagent lot variability in real-time quantitative PCR
Bushon, R.N.; Kephart, C.M.; Koltun, G.F.; Francy, D.S.; Schaefer, F. W.; Lindquist, H.D. Alan
2010-01-01
Aims: The aim of this study was to evaluate the variability in lots of a DNA extraction kit using real-time PCR assays for Bacillus anthracis, Francisella tularensis and Vibrio cholerae. Methods and Results: Replicate aliquots of three bacteria were processed in duplicate with three different lots of a commercial DNA extraction kit. This experiment was repeated in triplicate. Results showed that cycle threshold values were statistically different among the different lots. Conclusions: Differences in DNA extraction reagent lots were found to be a significant source of variability for qPCR results. Steps should be taken to ensure the quality and consistency of reagents. Minimally, we propose that standard curves should be constructed for each new lot of extraction reagents, so that lot-to-lot variation is accounted for in data interpretation. Significance and Impact of the Study: This study highlights the importance of evaluating variability in DNA extraction procedures, especially when different reagent lots are used. Consideration of this variability in data interpretation should be an integral part of studies investigating environmental samples with unknown concentrations of organisms. © 2010 The Society for Applied Microbiology.
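A lot-to-lot comparison of cycle threshold (Ct) values like the one described above can be screened with a one-way analysis of variance or a rank-based equivalent; the abstract does not state which test was used, so the test choice here is an assumption and the Ct values are made up for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical Ct values from replicate extractions with three reagent lots.
    lot_a = np.array([24.1, 24.3, 24.0, 24.4, 24.2])
    lot_b = np.array([24.9, 25.1, 24.8, 25.0, 25.2])
    lot_c = np.array([24.2, 24.5, 24.3, 24.6, 24.4])

    f_stat, p_anova = stats.f_oneway(lot_a, lot_b, lot_c)   # parametric test
    h_stat, p_kw = stats.kruskal(lot_a, lot_b, lot_c)       # rank-based alternative
    print(f"one-way ANOVA p = {p_anova:.4g}, Kruskal-Wallis p = {p_kw:.4g}")
    # A small p-value flags the extraction-reagent lot as a significant source of Ct variability.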
Olsen, Mia B.; Wielandt, Daniel; Schiller, Martin; Van Kooten, Elishevah M.M.E.; Bizzarro, Martin
2016-01-01
We report on the petrology, magnesium isotopes and mass-independent 54Cr/52Cr compositions (μ54Cr) of 42 chondrules from CV (Vigarano and NWA 3118) and CR (NWA 6043, NWA 801 and LAP 02342) chondrites. All sampled chondrules are classified as type IA or type IAB, have low 27Al/24Mg ratios (0.04–0.27) and display little or no evidence for secondary alteration processes. The CV and CR chondrules show variable 25Mg/24Mg and 26Mg/24Mg values corresponding to a range of mass-dependent fractionation of ~500 ppm (parts per million) per atomic mass unit. This mass-dependent Mg isotope fractionation is interpreted as reflecting Mg isotope heterogeneity of the chondrule precursors and not the result of secondary alteration or volatility-controlled processes during chondrule formation. The CV and CR chondrule populations studied here are characterized by systematic deficits in the mass-independent component of 26Mg (μ26Mg*) relative to the solar value defined by CI chondrites, which we interpret as reflecting formation from precursor material with a reduced initial abundance of 26Al compared to the canonical 26Al/27Al of ~5 × 10−5. Model initial 26Al/27Al values of CV and CR chondrules vary from (1.5 ± 4.0) × 10−6 to (2.2 ± 0.4) × 10−5. The CV chondrules display significant μ54Cr variability, defining a range of compositions that is comparable to that observed for inner Solar System primitive and differentiated meteorites. In contrast, CR chondrites are characterized by a narrower range of μ54Cr values restricted to compositions typically observed for bulk carbonaceous chondrites. Collectively, these observations suggest that the CV chondrules formed from precursors that originated in various regions of the protoplanetary disk and were then transported to the accretion region of the CV parent asteroid, whereas CR chondrules predominantly formed from precursors with carbonaceous chondrite-like μ54Cr signatures. The observed μ54Cr variability in chondrules from CV and CR chondrites suggests that the matrix and chondrules did not necessarily form from the same reservoir. The coupled μ26Mg* and μ54Cr systematics of CR chondrules establishes that these objects formed from a thermally unprocessed and 26Al-poor source reservoir distinct from most inner Solar System asteroids and planetary bodies, possibly located beyond the orbits of the gas giants. In contrast, a large fraction of the CV chondrules plot on the inner Solar System correlation line, indicating that these objects predominantly formed from thermally-processed, 26Al-bearing precursor material akin to that of inner Solar System solids, asteroids and planets. PMID:27563152
Carbonate system variability in the Gulf of Trieste (North Adriatic Sea)
NASA Astrophysics Data System (ADS)
Cantoni, Carolina; Luchetta, Anna; Celio, Massimo; Cozzi, Stefano; Raicich, Fabio; Catalano, Giulio
2012-12-01
The seasonal variability of the carbonate system in the waters of the Gulf of Trieste (GoT) was studied at the PALOMA station from 2008 to 2009, in order to highlight the effects of biological processes, meteorological forcings and river loads on the dynamics of pHT, CO2 partial pressure (pCO2), dissolved inorganic carbon (DIC), carbonate ion concentration (CO3=), aragonite saturation state (ΩAr) and total alkalinity (AT). During winter, low seawater temperature (9.0 ± 0.4 °C) and weak biological activity (-10.7 < AOU < 15.7 μmol O2 kg-1) in a homogeneous water column led to the lowest average values of pCO2 (328 ± 19 μatm) and ΩAr (2.91 ± 0.14). In summer, the water column in the area acted as a two-layer system, with production processes prevailing in the upper layer (average AOU = -29.3 μmol O2 kg-1) and respiration processes in the lower layer (average AOU = 26.8 μmol O2 kg-1). These conditions caused a decrease of DIC (50 μmol kg-1) and an increase of ΩAr (1.0) in the upper layer, whereas opposite trends were observed in the bottom waters. In August 2008, during a hypoxic event (dissolved oxygen DO = 86.9 μmol O2 kg-1), the intense remineralisation of organic carbon caused pCO2 to rise to 1043 μatm and pHT and ΩAr to decrease to 7.732 and 1.79, respectively. On an annual basis, surface pCO2 was mainly regulated by the pronounced seasonal cycle of seawater temperature. In winter, surface waters in the GoT were under-saturated with respect to atmospheric CO2, thus acting as a sink of CO2, in particular when strong-wind events enhanced air-sea gas exchange (FCO2 up to -11.9 mmol m-2 d-1). During summer, the temperature-driven increase of pCO2 was dampened by biological CO2 uptake; as a consequence, a slight over-saturation (pCO2 = 409 μatm) resulted. River plumes were generally associated with higher AT and pCO2 values (up to 2859 μmol kg-1 and 606 μatm, respectively), but their effect was highly variable in space and time. During winter, the ambient conditions that favour the formation of dense waters on this continental shelf also favour a high absorption of CO2 in seawater and its consequent acidification (a pHT decrease of 0.006 units during a 7-day Bora wind event). This finding indicates a high vulnerability of North Adriatic Dense Water to the atmospheric CO2 increase and the ocean acidification process.
NASA Astrophysics Data System (ADS)
Hastings, D. W.
2012-12-01
How can we effectively teach undergraduates the fundamentals of physical, chemical and biological processes in the ocean? Understanding physical circulation and biogeochemical processes is essential, yet it can be difficult for an undergraduate to easily grasp important concepts such as using temperature and salinity as conservative tracers, nutrient distribution, ageing of water masses, and thermocline variability. Like many other topics, this is best learned not in a lecture setting, but by working with real data: plotting values, making predictions, and making mistakes. Part I: Using temperature and salinity values from any location in the world ocean (World Ocean Atlas), combined with an excellent user interface (http://ferret.pmel.noaa.gov), students are asked to answer a series of specific questions related to ocean circulation. Using established temperature and salinity values to characterize different water masses, students are able to identify various water masses and gain insight into physical circulation processes. Questions related to ocean circulation include: How far south and at what depth does NADW extend into the S. Atlantic? Is deep water formed in the North Pacific? How and why does the depth of the thermocline vary with latitude in the Atlantic Ocean? How deep does the Mediterranean Water descend as it leaves the Straits of Gibraltar? How far into the Atlantic can you see the influence of the Amazon River? Is there any Antarctic Bottom Water in the North Pacific? Collaborating with another student typically leads to increased engagement. Especially in large lecture settings, where one teacher is not able to address every student's questions or concerns, working in pairs or in groups of three is best. Part II: Using the same web-based viewer and data set, students are subsequently assigned one oceanic property (phosphate, nitrate, silicate, O2, or AOU) and asked to construct three different plots: 1) a vertical depth profile at one location; 2) latitude vs. depth at 20°W; and 3) latitude vs. longitude at 4,000 m depth in the entire ocean. Students do this work at home and come to class prepared with hypotheses that explain the variations of their variable observed in their figures. Nutrients, for example, are typically depleted in the surface ocean, increase at intermediate depths, and then typically decrease in deep water. How do oceanic processes drive these variations? In the context of the other variables, and with the help of other group members, they typically develop an understanding of surface productivity, respiration of organic matter in deeper waters, upwelling of deeper water, ocean circulation, insolation, evaporation, precipitation, and the temperature dependence of gas solubility. Students then prepare a written explanation to accompany the plots. Cartoon-like depictions of nutrient profiles typically presented in introductory texts have their place, but they lack the complexity inherent in real data. The objective is to mimic the excitement of discovery and the challenge of developing a hypothesis to explain existing data. The ability to develop viable hypotheses to explain real data with real variability is what motivates and inspires many scientists. How can we expect to motivate and inspire students with lackluster descriptions of ocean processes?
Konova, Anna B.; Moeller, Scott J.; Tomasi, Dardo; Parvaz, Muhammad A.; Alia-Klein, Nelly; Volkow, Nora D.; Goldstein, Rita Z.
2012-01-01
Abnormalities in frontostriatal systems are thought to be central to the pathophysiology of addiction, and may underlie maladaptive processing of the highly generalizable reinforcer, money. Although abnormal frontostriatal structure and function have been observed in individuals addicted to cocaine, it is less clear how individual variability in brain structure is associated with brain function to influence behavior. Our objective was to examine frontostriatal structure and neural processing of money value in chronic cocaine users and closely matched healthy controls. A reward task that manipulated different levels of money was used to isolate neural activity associated with money value. Gray matter volume measures were used to assess frontostriatal structure. Our results indicated that cocaine users had an abnormal money value signal in the sensorimotor striatum (right putamen/globus pallidus) which was negatively associated with accuracy adjustments to money and was more pronounced in individuals with more severe use. In parallel, group differences were also observed in both function and gray matter volume of the ventromedial prefrontal cortex; in the cocaine users, the former was directly associated with response to money in the striatum. These results provide strong evidence for abnormalities in the neural mechanisms of valuation in addiction and link these functional abnormalities with deficits in brain structure. In addition, as value signals represent acquired associations, their abnormal processing in the sensorimotor striatum, a region centrally implicated in habit formation, could signal disadvantageous associative learning in cocaine addiction. PMID:22775285
Prospective comparison of speckle tracking longitudinal bidimensional strain between two vendors.
Castel, Anne-Laure; Szymanski, Catherine; Delelis, François; Levy, Franck; Menet, Aymeric; Mailliet, Amandine; Marotte, Nathalie; Graux, Pierre; Tribouilloy, Christophe; Maréchaux, Sylvestre
2014-02-01
Speckle tracking is a relatively new, largely angle-independent technique used for the evaluation of myocardial longitudinal strain (LS). However, significant differences have been reported between LS values obtained by speckle tracking with the first generation of software products. To compare LS values obtained with the most recently released equipment from two manufacturers. Systematic scanning with head-to-head acquisition with no modification of the patient's position was performed in 64 patients with equipment from two different manufacturers, with subsequent off-line post-processing for speckle tracking LS assessment (Philips QLAB 9.0 and General Electric [GE] EchoPAC BT12). The interobserver variability of each software product was tested on a randomly selected set of 20 echocardiograms from the study population. GE and Philips interobserver coefficients of variation (CVs) for global LS (GLS) were 6.63% and 5.87%, respectively, indicating good reproducibility. Reproducibility was very variable for regional and segmental LS values, with CVs ranging from 7.58% to 49.21% with both software products. The concordance correlation coefficient (CCC) between GLS values was high at 0.95, indicating substantial agreement between the two methods. While good agreement was observed between midwall and apical regional strains with the two software products, basal regional strains were poorly correlated. The agreement between the two software products at a segmental level was very variable; the highest correlation was obtained for the apical cap (CCC 0.90) and the poorest for basal segments (CCC range 0.31-0.56). A high level of agreement and reproducibility for global but not for basal regional or segmental LS was found with two vendor-dependent software products. This finding may help to reinforce clinical acceptance of GLS in everyday clinical practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
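The concordance correlation coefficient (CCC) used above to compare vendors can be computed directly from paired measurements. Below is a minimal sketch of Lin's CCC; the paired GLS values are made up for illustration and do not come from the study.

    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                     # population (1/n) variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical paired GLS values (%) from two software products.
    gls_vendor1 = [-18.2, -20.1, -15.4, -22.3, -17.8, -19.5]
    gls_vendor2 = [-17.9, -19.8, -15.9, -21.7, -18.1, -19.0]
    print(round(concordance_ccc(gls_vendor1, gls_vendor2), 3))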
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma distribution modelling structure, but with a new parameterization technique. We used two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of a Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multivariate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sample these two Gamma parameters from the multivariate normal distribution for each month of each year and use them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over pre-specified past periods (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. In this study, we compare the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrology simulation in the presentation.
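A minimal sketch of the two-part hierarchical structure described above: a first-order Markov chain for wet/dry occurrence and wet-day depths from a Gamma distribution whose monthly mean and standard deviation are themselves sampled from a fitted multivariate normal. All parameter values below are placeholders, and the fitting step (estimating the multivariate normal from year-by-year monthly Gamma fits) is omitted.

    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder first-order Markov chain parameters for one month.
    p_wet_given_wet = 0.65
    p_dry_given_dry = 0.80

    # Placeholder multivariate normal for (mean, std) of wet-day rainfall (mm),
    # fitted in the full model to year-by-year monthly Gamma parameter estimates.
    mu_params = np.array([8.0, 6.0])
    cov_params = np.array([[4.0, 2.0],
                           [2.0, 3.0]])

    def simulate_month(n_days=30):
        # Sample this month's Gamma parameters, then convert to shape/scale.
        mean_depth, sd_depth = np.maximum(rng.multivariate_normal(mu_params, cov_params), 0.1)
        shape = (mean_depth / sd_depth) ** 2
        scale = sd_depth ** 2 / mean_depth
        rain, wet = np.zeros(n_days), False
        for d in range(n_days):
            p_wet = p_wet_given_wet if wet else 1.0 - p_dry_given_dry
            wet = rng.random() < p_wet
            if wet:
                rain[d] = rng.gamma(shape, scale)
        return rain

    # Re-sampling the Gamma parameters every month/year injects the low-frequency
    # variability that a fixed-parameter MC-Gamma model tends to underestimate.
    print(np.array([simulate_month().sum() for _ in range(5)]).round(1))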
Continental-scale quantification of landscape values using social media data.
van Zanten, Boris T; Van Berkel, Derek B; Meentemeyer, Ross K; Smith, Jordan W; Tieskens, Koen F; Verburg, Peter H
2016-11-15
Individuals, communities, and societies ascribe a diverse array of values to landscapes. These values are shaped by the aesthetic, cultural, and recreational benefits and services provided by those landscapes. However, across the globe, processes such as urbanization, agricultural intensification, and abandonment are threatening landscape integrity, altering the personally meaningful connections people have toward specific places. Existing methods used to study landscape values, such as social surveys, are poorly suited to capture dynamic landscape-scale processes across large geographic extents. Social media data, by comparison, can be used to indirectly measure and identify valuable features of landscapes at a regional, continental, and perhaps even worldwide scale. We evaluate the usefulness of different social media platforms (Panoramio, Flickr, and Instagram) and quantify landscape values at a continental scale. We find that Panoramio, Flickr, and Instagram data can be used to quantify landscape values, with features of Instagram being especially suitable due to its relatively large population of users and its functional ability to allow users to attach personally meaningful comments and hashtags to their uploaded images. Although Panoramio, Flickr, and Instagram have different user profiles, our analysis revealed similar patterns of landscape values across Europe across the three platforms. We also found that variables describing accessibility, population density, income, mountainous terrain, or proximity to water explained a significant portion of the observed variation across data from the different platforms. Social media data can be used to extend our understanding of how and where individuals ascribe value to landscapes across diverse social, political, and ecological boundaries.
Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value.
Bishop, Dorothy V M; Thompson, Paul A
2016-01-01
Background. The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication, p-hacking. Methods. p-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results. We show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are inter-correlated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions. The absence of a bump in the p-curve is not indicative of lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value, unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed.
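The ghost-variable simulation described above (gather several dependent variables, report only the smallest p-value) can be reproduced in a few lines. The original study used R; the Python sketch below is an illustrative re-implementation with invented sample size, number of dependent variables, and correlation settings, not the authors' exact simulation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def ghost_hacked_p(n=30, k_dvs=5, rho=0.0, effect=0.0):
        """Two-group study with k_dvs (possibly correlated) dependent variables;
        only the smallest p-value is 'reported' (ghost-variable p-hacking)."""
        cov = np.full((k_dvs, k_dvs), rho) + (1.0 - rho) * np.eye(k_dvs)
        grp1 = rng.multivariate_normal(np.zeros(k_dvs), cov, size=n)
        grp2 = rng.multivariate_normal(np.full(k_dvs, effect), cov, size=n)
        pvals = stats.ttest_ind(grp1, grp2).pvalue   # one t-test per dependent variable
        return pvals.min()

    # p-curve of the reported p-values under the null (no true effect, uncorrelated DVs).
    ps = np.array([ghost_hacked_p(rho=0.0) for _ in range(5000)])
    sig = ps[ps < 0.05]
    counts, _ = np.histogram(sig, bins=np.arange(0.0, 0.051, 0.01))
    print(counts)   # no bump concentrated just below .05 despite pervasive p-hacking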
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
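The original article generates schedule values with Excel macros; as a rough sketch of the underlying draws, the Python example below produces response requirements with a constant per-response probability (a random-ratio scheme, one common way to realize variable-ratio values) and inter-reinforcer intervals with a constant probability per unit time (a random-interval scheme). The construction of variable-ratio lists differs across sources, so this is one approach, not necessarily the authors'.

    import random

    def variable_ratio(mean_ratio, n_values):
        """Response requirements whose long-run mean is mean_ratio, drawn from a
        geometric distribution (constant per-response reinforcement probability)."""
        p = 1.0 / mean_ratio
        values = []
        for _ in range(n_values):
            count = 1
            while random.random() >= p:
                count += 1
            values.append(count)
        return values

    def random_interval(mean_interval_s, n_values):
        """Inter-reinforcer intervals with constant probability per unit time
        (exponential waiting times), i.e. a random-interval schedule."""
        return [random.expovariate(1.0 / mean_interval_s) for _ in range(n_values)]

    random.seed(0)
    print(variable_ratio(5, 10))                             # e.g. an RR/VR 5 schedule
    print([round(t, 1) for t in random_interval(30, 5)])     # RI 30-s schedule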
Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert
2018-05-03
Donor organ quality affects long-term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status was created, based on transcriptomics data and on molecular features reported in the scientific literature to be associated with chronic allograft nephropathy. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model based solely on the clinical parameters donor age, donor gender, and recipient gender explained 17% of the variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B, representing key molecular processes of the constructed renal donor organ status molecular model, in addition to the clinical parameters significantly improved model performance (p-value = 0.0007), explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to the prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
Interexaminer variation of minutia markup on latent fingerprints.
Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn
2016-07-01
Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.
Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa
2008-01-01
The main goals of this research were to build a predictor for estimating the values of a turnaround time (TAT) indicator and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of the TAT indicator and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used to build a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
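Using the regression coefficients reported above, a predicted TAT can be computed for a new work order. The intercept is not reported in the abstract and is set to zero here purely for illustration, and the input units (hours, ordinal priority) are assumptions.

    # Coefficients reported in the abstract (multiple linear regression for TAT).
    coef = {"CE_rt": 0.415, "Stock_rt": 0.734, "priority": 0.21, "service_time": 0.06}
    intercept = 0.0   # not reported in the abstract; placeholder assumption

    def predict_tat(ce_rt, stock_rt, priority, service_time):
        """Predicted turnaround time from the fitted linear model."""
        return (intercept
                + coef["CE_rt"] * ce_rt
                + coef["Stock_rt"] * stock_rt
                + coef["priority"] * priority
                + coef["service_time"] * service_time)

    # Hypothetical work order: response times in hours, priority on an ordinal scale.
    print(predict_tat(ce_rt=4.0, stock_rt=24.0, priority=2, service_time=3.5))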
Decision theory for computing variable and value ordering decisions for scheduling problems
NASA Technical Reports Server (NTRS)
Linden, Theodore A.
1993-01-01
Heuristics that guide search are critical when solving large planning and scheduling problems, but most variable and value ordering heuristics are sensitive to only one feature of the search state. One wants to combine evidence from all features of the search state into a subjective probability that a value choice is best, but there has been no solid semantics for merging evidence when it is conceived in these terms. Instead, variable and value ordering decisions should be viewed as problems in decision theory. This led to two key insights: (1) The fundamental concept that allows heuristic evidence to be merged is the net incremental utility that will be achieved by assigning a value to a variable. Probability distributions about net incremental utility can merge evidence from the utility function, binary constraints, resource constraints, and other problem features. The subjective probability that a value is the best choice is then derived from probability distributions about net incremental utility. (2) The methods used for rumor control in Bayesian Networks are the primary way to prevent cycling in the computation of probable net incremental utility. These insights lead to semantically justifiable ways to compute heuristic variable and value ordering decisions that merge evidence from all available features of the search state.
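The abstract's first insight can be illustrated with a simple Monte Carlo stand-in: given a probability distribution over net incremental utility for each candidate value, estimate the subjective probability that each value is the best choice and order values accordingly. This is an assumed illustration, not the paper's algorithm, and the normal distributions below are placeholders for whatever merged-evidence distributions the method would produce.

```python
# Minimal sketch (assumptions, not the paper's method): estimate by Monte
# Carlo the probability that each candidate value of a variable has the
# highest net incremental utility, then order values by that probability.
import numpy as np

def prob_best(utility_samplers, n_draws=10_000, rng=None):
    """utility_samplers: one callable per value; each maps rng -> a sample of
    net incremental utility for choosing that value."""
    rng = rng or np.random.default_rng(0)
    draws = np.column_stack([np.array([s(rng) for _ in range(n_draws)])
                             for s in utility_samplers])
    best = np.argmax(draws, axis=1)              # which value won each draw
    return np.bincount(best, minlength=len(utility_samplers)) / n_draws

# Example: three candidate values whose utility distributions merge evidence
# from the utility function and constraint-based heuristics (all hypothetical)
samplers = [lambda r: r.normal(5.0, 2.0),
            lambda r: r.normal(4.0, 0.5),
            lambda r: r.normal(3.0, 4.0)]
p = prob_best(samplers)
print("P(value is best):", p, "-> try values in order", np.argsort(-p))
```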
Intelligent Performance Analysis with a Natural Language Interface
NASA Astrophysics Data System (ADS)
Juuso, Esko K.
2017-09-01
Performance improvement is taken as the primary goal in asset management. Advanced data analysis is needed to efficiently integrate condition monitoring data into operation and maintenance. Intelligent stress and condition indices have been developed for control and condition monitoring by combining generalized norms with efficient nonlinear scaling. These nonlinear scaling methodologies can also be used to handle performance measures used for management, since management-oriented indicators can be presented on the same scale as intelligent condition and stress indices. Performance indicators are responses of the process, machine or system to the stress contributions analyzed from process and condition monitoring data. Scaled values are used directly in intelligent temporal analysis to calculate fluctuations and trends. All these methodologies can be used in prognostics and fatigue prediction. The meanings of the variables are beneficial in extracting expert knowledge and representing information in natural language. Dividing the problems into variable-specific meanings and directions of interactions provides various improvements for performance monitoring and decision making. The integrated temporal analysis and uncertainty processing facilitate the efficient use of domain expertise. Measurements can be monitored with generalized statistical process control (GSPC) based on the same scaling functions.
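A highly simplified sketch of the "generalized norm plus nonlinear scaling" idea: a generalized norm summarizes a signal window, and a monotone scaling defined by corner points maps it to a common index range. The five corner points, the norm order, the [-2, 2] index range, and the synthetic signal are assumptions for illustration and do not reproduce the full methodology.

```python
# Minimal sketch (a simplification, not the full method): a generalized norm
# of a signal window is mapped to a common index range with a piecewise
# monotone scaling defined by five corner points; all values are assumed.
import numpy as np

def generalized_norm(x, p):
    """((1/N) * sum |x_i|^p)^(1/p); p may be any nonzero real order."""
    x = np.asarray(x, dtype=float)
    return (np.mean(np.abs(x) ** p)) ** (1.0 / p)

def nonlinear_scale(value, corners):
    """Map value to [-2, 2] by interpolating between the corner points
    (c_-2, c_-1, c_0, c_1, c_2) that define the scaling function."""
    return np.interp(value, corners, [-2.0, -1.0, 0.0, 1.0, 2.0])

window = np.random.default_rng(2).normal(0.0, 1.2, 256)  # e.g. a vibration signal
stress_index = nonlinear_scale(generalized_norm(window, p=4),
                               corners=[0.2, 0.6, 1.0, 1.6, 2.4])
print(stress_index)
```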
Ihle, Andreas; Ghisletta, Paolo; Kliegel, Matthias
2017-03-01
To contribute to the ongoing conceptual debate of what traditional mean-level ongoing task (OT) costs tell us about the attentional processes underlying prospective memory (PM), we investigated costs to intraindividual variability (IIV) in OT response times as a potentially sensitive indicator of attentional processes. Particularly, we tested whether IIV in OT responses may reflect controlled employment of attentional processes versus lapses of controlled attention, whether these processes differ across adulthood, and whether these effects are moderated by cue focality. We assessed 150 individuals (19-82 years) in a focal and a nonfocal PM condition. In addition, external measures of inhibition and working memory were assessed. In line with the predictions of the lapses-of-attention/inefficient-executive-control account, our data support the view that costs to IIV in OT trials of PM tasks reflect fluctuations in the efficiency of executive functioning, which was related to failures in prospective remembering, particularly in nonfocal PM tasks, potentially due to their increased executive demands. The additional value of considering costs to IIV over and beyond traditional mean-level OT costs in PM research is discussed.
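For concreteness, one common way to quantify a cost to IIV is to compare a response-time variability index (here the coefficient of variation) between a control block and a PM block. This operationalization and the millisecond values below are assumptions for illustration, not the study's analysis pipeline.

```python
# Minimal sketch (assumed operationalization): the cost to intraindividual
# variability (IIV) is taken here as the change in the coefficient of
# variation of ongoing-task response times from a control block to a PM block.
import numpy as np

def iiv(rts):
    """Coefficient of variation of response times (one common IIV index)."""
    rts = np.asarray(rts, dtype=float)
    return rts.std(ddof=1) / rts.mean()

def iiv_cost(control_rts, pm_block_rts):
    """Positive values indicate more variable responding under PM load."""
    return iiv(pm_block_rts) - iiv(control_rts)

rng = np.random.default_rng(3)
control = rng.normal(650, 80, 100)    # hypothetical RTs in ms, control block
pm_block = rng.normal(700, 120, 100)  # hypothetical RTs under a nonfocal cue
print(iiv_cost(control, pm_block))
```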
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.
2009-07-17
The advanced geospatial information extraction and analysis capabilities of Geographic Information Systems (GIS) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), provide a topology-preserving means for reducing and understanding complex data relationships in the landscape. The Adaptive Landscape Classification Procedure (ALCP) is presented as an adaptive and evolutionary capability in which varying types of data can be assimilated to address different management needs such as hydrologic response, erosion potential, habitat structure, instrumentation placement, and various forecast or what-if scenarios. This paper defines how the evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Establishing relationships among high-dimensional datasets through neurocomputing-based pattern recognition methods can help (1) resolve large volumes of data into a structured and meaningful form; (2) provide an approach for inferring landscape processes in areas that have limited data available but exhibit similar landscape characteristics; and (3) discover the value of individual variables or groups of variables that contribute to specific processes in the landscape. Classification of hydrologic patterns in the landscape is demonstrated.
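To make the SOM component concrete, here is a compact, self-contained SOM training loop on synthetic, standardized landscape variables, so that grid cells with similar characteristics map to nearby SOM nodes. This is a generic SOM sketch, not the ALCP implementation; the grid size, learning-rate schedule, and variables are all assumptions.

```python
# Minimal sketch (not the ALCP implementation): training a small
# self-organizing map on standardized landscape variables; all data synthetic.
import numpy as np

def train_som(data, grid=(6, 6), epochs=30, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):                 # shuffle samples each epoch
            lr = lr0 * np.exp(-t / n_steps)             # decaying learning rate
            sigma = sigma0 * np.exp(-t / n_steps)       # shrinking neighborhood
            bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))   # best matching unit
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            neighborhood = np.exp(-dist2 / (2.0 * sigma ** 2))
            weights += lr * neighborhood[:, None] * (x - weights)
            t += 1
    return weights

# Synthetic "landscape" variables (e.g., slope, soil index, precipitation)
data = np.random.default_rng(4).normal(size=(500, 3))
som = train_som(data)
clusters = np.argmin(((data[:, None, :] - som[None, :, :]) ** 2).sum(axis=2), axis=1)
print(np.bincount(clusters, minlength=som.shape[0])[:10])  # samples per SOM node
```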