Sample records for relevant processing parameters

  1. Finding Relevant Parameters for the Thin-film Photovoltaic Cells Production Process with the Application of Data Mining Methods.

    PubMed

    Ulaczyk, Jan; Morawiec, Krzysztof; Zabierowski, Paweł; Drobiazg, Tomasz; Barreau, Nicolas

    2017-09-01

    A data mining approach is proposed as a useful tool for analysing the control parameters of the 3-stage CIGSe photovoltaic cell production process, in order to find the variables that are most relevant to the cells' electrical parameters and efficiency. The analysed data set consists of stage duration times, heater power values, and temperatures for the element sources and the substrate - 14 variables per sample in total. The most relevant variables of the process were identified by random forest analysis with the Boruta algorithm, applied to 118 CIGSe samples prepared at Institut des Matériaux Jean Rouxel. The results agree closely with experimental knowledge of the CIGSe cell production process and provide new evidence to guide the choice of production parameters for new cells and further research. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
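
    The shadow-feature idea behind the Boruta algorithm mentioned above can be sketched as follows. This is an illustrative stand-in, not the authors' code: it uses a simple |correlation| importance proxy in place of the random-forest importances the paper relies on, and the demo data are synthetic (feature 0 drives the target, feature 1 is a decoy).

```python
import random

def shadow_relevance(features, target, n_rounds=50, seed=0):
    """Boruta-style check: a feature is tentatively relevant if its
    importance beats the best shuffled 'shadow' copy in most rounds.
    Importance here is a crude |correlation| proxy (the real algorithm
    uses random-forest importances over many runs)."""
    rng = random.Random(seed)

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return abs(cov / (vx * vy)) if vx and vy else 0.0

    hits = [0] * len(features)
    for _ in range(n_rounds):
        # Shadows: the same columns with their values randomly permuted,
        # so any apparent importance they show is pure chance.
        shadows = [rng.sample(col, len(col)) for col in features]
        best_shadow = max(corr(s, target) for s in shadows)
        for i, col in enumerate(features):
            if corr(col, target) > best_shadow:
                hits[i] += 1
    return [h > n_rounds // 2 for h in hits]

# Synthetic demo: feature 0 determines the target, feature 1 does not.
features = [list(range(30)), [((i * 37) % 30) / 10.0 for i in range(30)]]
target = [2 * i + 1 for i in range(30)]
flags = shadow_relevance(features, target)
```

    A feature only survives if it repeatedly outperforms the best chance-level shadow, which is the essence of the all-relevant selection used in the paper.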

  2. Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects

    PubMed Central

    Berniker, Max; Kording, Konrad P.

    2011-01-01

    Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574

  3. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge of the physical components of the climate; nevertheless, they contain several parameters that are too weakly constrained by observations and can potentially lead to a simulation crash. Recently, a study by Lucas et al. (2013) showed that machine learning methods can be used to predict which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analysis. In the current study we reanalyse the data set used in that research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for predicting a crash are indeed strongly relevant; three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust estimate of performance and variable relevance.

  4. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.

    PubMed

    Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin

    2010-04-16

    Finding relevant articles in PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking them according to a learned relevance function. However, the process of learning and ranking is usually done offline, without being integrated with the keyword queries, and the users have to provide a large number of training documents to reach a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves high learning accuracy in real time without a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, and it achieves high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently evaluates the function to return relevant articles in real time.
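
    RankSVM, the learning method behind RefMed, reduces multi-level feedback to pairwise preferences: any document graded higher than another yields a constraint on a linear scoring function. A minimal sketch of that reduction follows, with a plain perceptron standing in for the actual SVM solver and purely illustrative document vectors.

```python
def pairwise_rank_train(docs, grades, epochs=100, lr=0.1):
    """Multi-level feedback -> pairwise preferences, as in RankSVM:
    for every pair graded differently, require w.(xi - xj) > 0 when
    grade(i) > grade(j).  A perceptron stands in for the SVM solver."""
    dim = len(docs[0])
    pairs = [(docs[i], docs[j])
             for i in range(len(docs)) for j in range(len(docs))
             if grades[i] > grades[j]]
    w = [0.0] * dim
    for _ in range(epochs):
        for hi, lo in pairs:
            diff = [a - b for a, b in zip(hi, lo)]
            # Update whenever a preferred document is not scored higher.
            if sum(wk * dk for wk, dk in zip(w, diff)) <= 0:
                w = [wk + lr * dk for wk, dk in zip(w, diff)]
    return w

def rank(docs, w):
    """Return document indices sorted by learned score, best first."""
    score = lambda d: sum(wk * dk for wk, dk in zip(w, d))
    return sorted(range(len(docs)), key=lambda i: -score(docs[i]))

# Toy feature vectors with graded (multi-level) feedback 2 > 1 > 0.
docs = [[3.0, 0.0], [2.0, 1.0], [1.0, 0.0]]
w = pairwise_rank_train(docs, grades=[2, 1, 0])
```

    Because graded labels generate many pairs per judged document, fewer feedback clicks are needed than with binary relevance, which is the accuracy-per-feedback argument the abstract makes.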

  5. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS

    PubMed Central

    2010-01-01

    Background Finding relevant articles in PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking them according to a learned relevance function. However, the process of learning and ranking is usually done offline, without being integrated with the keyword queries, and the users have to provide a large number of training documents to reach a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. Results RefMed supports multi-level relevance feedback by using RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves high learning accuracy in real time without a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions RefMed is the first multi-level relevance feedback system for PubMed, and it achieves high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently evaluates the function to return relevant articles in real time. PMID:20406504

  6. Application of all-relevant feature selection for the failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.

    2016-03-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
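
    The split-variance point made above is easy to reproduce on toy data: the same simple classifier scored on different random train/validation splits yields a spread of accuracies, which is why the authors insist on a cross-validated estimate. A minimal sketch, with a hypothetical 1-D threshold classifier and synthetic data rather than the climate-model dataset:

```python
import random

def split_accuracies(xs, ys, n_splits=20, train_frac=0.7, seed=1):
    """Fit a crude 1-D threshold classifier on repeated random
    train/validation splits and report validation accuracy per split,
    illustrating how strongly the split alone moves the score."""
    rng = random.Random(seed)
    n_train = int(len(xs) * train_frac)
    accs = []
    for _ in range(n_splits):
        idx = list(range(len(xs)))
        rng.shuffle(idx)
        train, val = idx[:n_train], idx[n_train:]
        thr = sum(xs[i] for i in train) / len(train)  # threshold = train mean
        correct = sum((xs[i] > thr) == ys[i] for i in val)
        accs.append(correct / len(val))
    return accs

# Synthetic demo data: label is simply "x above 0.5".
rng = random.Random(0)
xs = [rng.random() for _ in range(40)]
ys = [x > 0.5 for x in xs]
accs = split_accuracies(xs, ys)
```

    Averaging `accs` over the folds, rather than trusting any one split, is the cross-validated approach the abstract recommends.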

  7. Laser Ablation of Dental Hard Tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seka, W.; Rechmann, P.; Featherstone, J.D.B.

    This paper discusses ablation of dental hard tissue using pulsed lasers. It focuses particularly on the relevant tissue and laser parameters and some of the basic ablation processes that are likely to occur. The importance of interstitial water and its phase transitions is discussed in some detail along with the ablation processes that may or may not directly involve water. The interplay between tissue parameters and laser parameters in the outcome of the removal of dental hard tissue is discussed in detail.

  8. The Use of Logistics in the Quality Parameters Control System of Material Flow

    ERIC Educational Resources Information Center

    Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.

    2016-01-01

    The relevance of the research problem stems from the need to justify the use of logistics methodologies in the process of controlling the quality parameters of material flows. The goal of the article is to develop theoretical principles and practical recommendations for the logistical control of material-flow quality parameters. A leading…

  9. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.

  10. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060
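
    Sloppiness can be made concrete with the classic sum-of-exponentials example (an illustration with assumed parameter values, not the EGFR or DNA-repair models of the paper): for nearly degenerate decay rates, the eigenvalues of J^T J spread over many decades, so some parameter combinations are practically unidentifiable.

```python
import numpy as np

def jtj_eigenvalues(rates, times):
    """Eigenvalue spectrum of J^T J for the model y(t) = sum_k exp(-r_k t),
    where J_ik = dy(t_i)/dr_k = -t_i * exp(-r_k t_i).  In sloppy models
    these eigenvalues span many decades."""
    J = np.array([[-t * np.exp(-r * t) for r in rates] for t in times])
    return np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]  # descending

rates = [1.0, 1.1, 1.2]            # nearly degenerate decay rates (assumed)
times = np.linspace(0.1, 3.0, 30)  # assumed measurement grid
eigs = jtj_eigenvalues(rates, times)
```

    The ratio of the largest to the smallest eigenvalue quantifies how much better the stiffest parameter combination is constrained than the sloppiest one.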

  11. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE PAGES

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.
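
    A one-at-a-time perturbation screen is perhaps the simplest form of the sensitivity analysis described above. The surrogate model, parameter names, and values below are hypothetical illustrations, not the paper's orthotropic damage model.

```python
def oat_sensitivity(sim, params, rel_step=0.05):
    """One-at-a-time screen: perturb each parameter up/down by rel_step
    and record the output swing; large swings flag the parameters that
    merit a full characterization effort."""
    swings = {}
    for name, value in params.items():
        up = dict(params, **{name: value * (1 + rel_step)})
        dn = dict(params, **{name: value * (1 - rel_step)})
        swings[name] = abs(sim(up) - sim(dn))
    return swings

# Hypothetical flexural-response surrogate, deliberately modulus-driven.
surrogate = lambda p: p["E11"] * 0.9 + p["G12"] * 0.05 + p["nu12"] * 0.01
swings = oat_sensitivity(surrogate, {"E11": 150.0, "G12": 5.0, "nu12": 0.3})
```

    Parameters whose swing is negligible can then be fixed at nominal values, which is how a sensitivity study streamlines the material definition process.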

  12. Influence of in line monitored fluid bed granulation process parameters on the stability of Ethinylestradiol.

    PubMed

    Roßteuscher-Carl, Katrin; Fricke, Sabine; Hacker, Michael C; Schulz-Siegmund, Michaela

    2015-12-30

    Ethinylestradiol (EE), a highly active and low-dosed compound, is prone to oxidative degradation. The stability of the drug substance is therefore a critical parameter that has to be considered during drug formulation. Besides the stability of the drug substance, granule particle size and moisture are critical quality attributes (CQA) of the fluid bed granulation process which influence the tableting ability of the resulting granules. Both CQA should therefore be monitored during the production process by process analytical technology (PAT) according to ICH Q8. This work focuses on the effects of drying conditions on the stability of EE in a fluid-bed granulation process. We quantified the EE degradation products 6-alpha-hydroxy-EE, 6-beta-hydroxy-EE, 9(11)-dehydro-EE and 6-oxo-EE during long-term storage and under accelerated conditions. PAT tools that monitor granule particle size (spatial filtering technology) and granule moisture (microwave resonance technology) were applied and compared with off-line methods. We found a relevant influence of residual granule moisture and of the thermal stress applied during granulation on the storage stability of EE, whereas no degradation was found immediately after processing. Hence we conclude that drying parameters have a relevant influence on long-term EE stability. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.

  14. Multi-parameter comparison of a standardized mixed meal tolerance test in healthy and type 2 diabetic subjects: the PhenFlex challenge.

    PubMed

    Wopereis, Suzan; Stroeve, Johanna H M; Stafleu, Annette; Bakker, Gertruud C M; Burggraaf, Jacobus; van Erk, Marjan J; Pellis, Linette; Boessen, Ruud; Kardinaal, Alwine A F; van Ommen, Ben

    2017-01-01

    A key feature of metabolic health is the ability to adapt to dietary perturbations. Recently, it was shown that metabolic challenge tests, in combination with the new generation of biomarkers, allow the simultaneous quantification of the major metabolic health processes. Currently applied challenge tests are largely non-standardized. A systematic review defined an optimal nutritional challenge test, the "PhenFlex test" (PFT). This study aimed to prove that the PFT modulates all relevant processes governing metabolic health, thereby making it possible to distinguish subjects with different metabolic health status. Therefore, 20 healthy and 20 type 2 diabetic (T2D) male subjects were challenged with both the PFT and an oral glucose tolerance test (OGTT). During the 8-h response time course, 132 parameters were quantified that report on 26 metabolic processes distributed over 7 organs (gut, liver, adipose, pancreas, vasculature, muscle, kidney) and systemic stress. In healthy subjects, 110 of the 132 parameters showed a time course response. Patients with T2D showed 18 parameters that differed significantly after overnight fasting compared to healthy subjects, while 58 parameters differed in the post-challenge time course after the PFT. This demonstrates the added value of the PFT in distinguishing subjects with different health status. The OGTT and PFT responses were highly comparable for glucose metabolism, as identical amounts of glucose were present in both challenge tests. Yet the PFT reports on additional processes, including vasculature, systemic stress, and metabolic flexibility. The PFT enables the quantification of all relevant metabolic processes involved in maintaining or regaining homeostasis of metabolic health. Studying both healthy subjects and subjects with impaired metabolic health showed that the PFT reveals processes lying underneath health. This study provides the first evidence towards adopting the PFT as a gold standard in nutrition research.

  15. Electrocoagulation of wastewater from almond industry.

    PubMed

    Valero, David; Ortiz, Juan M; García, Vicente; Expósito, Eduardo; Montiel, Vicente; Aldaz, Antonio

    2011-08-01

    This work was carried out to study the treatment of almond industry wastewater by the electrocoagulation process. First of all, laboratory scale experiments were conducted in order to determine the effects of relevant wastewater characteristics such as conductivity and pH, as well as the process variables such as anode material, current density and operating time on the removal efficiencies of the total organic carbon (TOC) and the most representative analytical parameters. Next, the wastewater treatment process was scaled up to pre-industrial size using the best experimental conditions and parameters obtained at laboratory scale. Finally, economic parameters such as chemicals, energy consumption and sludge generation have been discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.

    PubMed

    Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F

    2013-10-01

    A new protocol was evaluated for identifying the stiffness, mass, and damping parameters of a linear model of human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase the accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing a risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine the model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met the model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for the frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. The protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and the calculation procedures demonstrate better performance of parameter extraction using time domain system identification methods versus the frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for simulating human hand-arm dynamics under impulsive forces, which could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
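
    The second-order hand-arm model referred to above is a mass-spring-damper system. A small time-stepping sketch of the time-to-peak handle displacement (the quantity behind the 5% accuracy criterion) follows; the parameter values are illustrative assumptions, not the measured operator data.

```python
def time_to_peak(m, c, k, impulse, dt=1e-4, t_max=0.5):
    """Simulate m*x'' + c*x' + k*x = 0 after an impulsive input that
    sets the initial velocity (v0 = impulse / m) and return the time
    of peak displacement.  Semi-implicit Euler stepping."""
    x, v = 0.0, impulse / m
    t, best_t, best_x = 0.0, 0.0, 0.0
    while t < t_max:
        a = -(c * v + k * x) / m   # acceleration from spring and damper
        v += a * dt
        x += v * dt
        t += dt
        if x > best_x:             # track the displacement peak
            best_x, best_t = x, t
    return best_t

# Illustrative (not measured) hand-arm parameters: 2 kg, 40 N*s/m, 2000 N/m,
# with a 10 N*s impulsive input.
tp = time_to_peak(m=2.0, c=40.0, k=2000.0, impulse=10.0)
```

    Fitting m, c, and k so that the simulated time-to-peak matches the measured one is, in outline, the time-domain extraction the protocol performs.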

  17. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the...
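
    The geometric Poisson distribution tabulated in this report is the compound Poisson whose summands are geometric. A sampling sketch, assuming the standard Pólya-Aeppli parameterization (Poisson(lam) count of clusters, each of Geometric(p) size on {1, 2, ...}); the demo parameters are arbitrary.

```python
import math
import random

def geometric_poisson(lam, p, rng):
    """Draw from the geometric Poisson (Polya-Aeppli) distribution:
    N ~ Poisson(lam) clusters, each contributing a Geometric(p) count
    with support {1, 2, ...}, summed."""
    # Poisson draw via Knuth's product-of-uniforms method.
    limit = math.exp(-lam)
    n, prod = 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    # Sum of n geometric variates (trials until first success).
    total = 0
    for _ in range(n):
        g = 1
        while rng.random() > p:
            g += 1
        total += g
    return total

# Demo with arbitrary parameters; mean is lam / p = 4 here.
rng = random.Random(42)
sample = [geometric_poisson(2.0, 0.5, rng) for _ in range(4000)]
```

    Because sums of independent compound Poisson variables are again compound Poisson, such samplers extend directly to the summation of several processes discussed in the report.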

  18. Method for extracting relevant electrical parameters from graphene field-effect transistors using a physical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boscá, A., E-mail: alberto.bosca@upm.es; Dpto. de Ingeniería Electrónica, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040; Pedrós, J.

    2015-01-28

    Due to its intrinsic high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. From experimental data on the device output characteristics, the method allows parameters such as the mobility, the contact resistance, and the fixed charge to be calculated. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method's output values can be correlated with several issues arising during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metalization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.

  19. Assessment of Electronic Banking Service's Impact on the Economic Parameters of the Bank Activity

    ERIC Educational Resources Information Center

    Kiselev, Sergey V.; Chernyavskaya, Yana S.; Bardasova, Eleonora V.; Galeeva, Gulnaz M.; Fazlieva, Elena P.; Krokhina, Julia A.

    2016-01-01

    The relevance of the study: The relevance of the research problem stems from the intensification of innovative processes in the modern economy, and in the banking sector in particular, as one of the areas most sensitive to innovation; among innovative types of services, information and communication innovations today are one of the major…

  20. Parameter identification of process simulation models as a means for knowledge acquisition and technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Ifanti, Konstantina

    2012-12-01

    Process simulation models are usually empirical; there is therefore an inherent difficulty in their serving as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning that would facilitate verification of their dependence on the production conditions. In such a case, a 'black box' regression model or a neural network might be used simply to connect input-output characteristics. In several cases, scientific/mechanismic models may prove valid, in which case parameter identification is required to find out the independent/explanatory variables and parameters on which each parameter depends. This is a difficult task, since the phenomenological level at which each parameter is defined is different. In this paper, we have developed a methodological framework in the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer that the initial model under investigation belongs to, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated by presenting two representative case examples from wastewater treatment.

  1. FUB at TREC 2008 Relevance Feedback Track: Extending Rocchio with Distributional Term Analysis

    DTIC Science & Technology

    2008-11-01

    starting point is the improved version [Salton and Buckley 1990] of the original Rocchio formula [Rocchio 1971]: newQ = α·origQ + (β/|R|)·Σ_{r∈R} r − γ...earlier studies about the low effect of the main relevance feedback parameters on retrieval performance (e.g., Salton and Buckley 1990), while they seem...Relevance feedback in information retrieval. In The SMART retrieval system - experiments in automatic document processing, Salton, G., Ed., Prentice Hall
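
    The formula in the snippet above is truncated; in its commonly cited Salton-Buckley form (reconstructed here, so treat the details as an assumption rather than this report's exact variant) it is newQ = α·origQ + (β/|R|)·Σ_{r∈R} r − (γ/|NR|)·Σ_{n∈NR} n, with negative term weights clipped to zero. A direct sketch over illustrative term-weight vectors:

```python
def rocchio(orig_q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio query refinement: move the query vector toward the
    centroid of relevant documents and away from the centroid of
    non-relevant ones, clipping negative weights at zero."""
    dim = len(orig_q)

    def centroid(docs):
        if not docs:
            return [0.0] * dim
        return [sum(d[i] for d in docs) / len(docs) for i in range(dim)]

    r_c, nr_c = centroid(relevant), centroid(nonrelevant)
    return [max(0.0, alpha * q + beta * r - gamma * n)
            for q, r, n in zip(orig_q, r_c, nr_c)]

# Illustrative two-term vocabulary: the query gains weight on the term
# favored by the relevant document and loses it on the non-relevant one.
new_q = rocchio([1.0, 0.0], relevant=[[0.0, 2.0]], nonrelevant=[[4.0, 0.0]])
```

    The default α, β, γ above follow the common convention of weighting relevant evidence more heavily than non-relevant evidence; the track paper tunes these parameters.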

  2. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  3. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt-dominated ungauged basins

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

    This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
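
    The Sobol' ranking step can be sketched with the standard pick-and-freeze estimator for first-order indices; the three-parameter additive toy model below is an invented stand-in for HSAMI, and parameters with near-zero indices are the candidates for sequential fixing:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # toy surrogate: sensitivity is deliberately ordered x0 > x1 > x2
    return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]

d, N = 3, 20000
A = rng.random((N, d))                 # two independent sample matrices
B = rng.random((N, d))
fA, fB = model(A), model(B)
total_var = np.var(np.concatenate([fA, fB]))

S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # "freeze" column i from the second matrix
    S1[i] = np.mean(fB * (model(ABi) - fA)) / total_var
```

    For this additive model the analytic first-order indices are 16/20.25, 4/20.25 and 0.25/20.25, so the third parameter would be the first candidate for fixing.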

  4. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies play a significant role in investigating the process variation that governs product quality characteristics. Capability indices measure the inherent variability of a process and thus help to improve process performance. The main objective of this paper is to understand whether the output of a soft drinks processing unit, one of the premier brands marketed in India, stays within specification. A few selected critical parameters in soft drinks processing were considered for this study: gas volume concentration, Brix concentration, and crock torque. Relevant statistical measures were assessed from a process capability perspective: short-term and long-term capability indices. Real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India, were used for the assessment. The analysis identified reasons for variation in the process, validated them using ANOVA, and, via a Taguchi loss function, predicted the associated waste in monetary terms, which the organization can use to improve its process parameters. This research work has substantially benefited the organization in understanding the variation of the selected critical parameters in pursuit of zero rejection.
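
    The short-term capability assessment reduces to two standard indices: Cp compares the specification width to six process standard deviations, and Cpk additionally penalises an off-centre mean. The specification limits and carbonation readings below are invented for illustration, not plant data:

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Cp ignores centring; Cpk = min(USL - mu, mu - LSL) / (3 sigma)."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)      # short-term (within-sample) spread
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# hypothetical gas-volume readings against spec limits 3.4-4.0
readings = [3.72, 3.68, 3.75, 3.70, 3.66, 3.73, 3.69, 3.71, 3.74, 3.67]
cp, cpk = capability_indices(readings, 3.4, 4.0)
```

    By convention, values above 1.33 indicate a capable process, and Cpk < Cp signals that the mean sits off-centre within the specification band.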

  5. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.

    PubMed

    Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M

    2010-01-01

    Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model, consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
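
    A correlation-type EMD of the kind used in the model can be sketched as a Hassenstein-Reichardt detector: delay one receptor's signal, correlate it with the neighbouring receptor's undelayed signal, and subtract the mirror-image term. The stimulus, receptor spacing and sampling below are invented for illustration; only the 0.1 s delay echoes the fitted time constant:

```python
import numpy as np

def reichardt_response(stimulus, delay):
    # stimulus: array of shape (time, 2) = signals at two neighbouring receptors
    left, right = stimulus[:, 0], stimulus[:, 1]
    d = delay
    # delayed-left x right minus delayed-right x left: antisymmetric in direction
    return float(np.sum(left[:-d] * right[d:] - right[:-d] * left[d:]))

def moving_bump(v, sensors=(0.0, 0.1), n_t=400, dt=0.01):
    # Gaussian luminance bump sweeping past two receptors at speed v
    t = np.arange(n_t) * dt
    return np.stack([np.exp(-((x - v * (t - 2.0)) ** 2) / 0.02) for x in sensors],
                    axis=1)

d = 10                                    # 0.1 s delay at 100 Hz sampling
rightward = reichardt_response(moving_bump(+0.5), d)
leftward = reichardt_response(moving_bump(+0.5)[::-1], d)  # time-reversed motion
```

    The detector's sign flips with motion direction; a grid of such units is the kind of front end the authors tune against prey, predator and vegetation clips.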

  6. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource to comprehend the complex characteristics of bioprocesses and enhance production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large-scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
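
    The ranking idea can be sketched with kernel ridge regression standing in for the paper's maximum-margin SVR (the synthetic "runs", kernel width and regulariser are all invented): fit one kernel model over every process parameter, then score each parameter by how much shuffling it degrades the predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, Y, gamma=2.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# synthetic data: 200 runs x 4 process parameters; only the first two matter,
# with the first engineered to dominate
X = rng.random((200, 4))
y = np.sin(4 * X[:, 0]) + X[:, 1] + 0.01 * rng.standard_normal(200)

lam = 1e-3                               # ridge regulariser
alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(len(X)), y)

def predict(Xnew):
    return rbf_kernel(Xnew, X) @ alpha

def permutation_importance(j, n_rep=5):
    # increase in mean-squared error when column j is shuffled
    base = np.mean((predict(X) - y) ** 2)
    errs = []
    for _ in range(n_rep):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        errs.append(np.mean((predict(Xp) - y) ** 2))
    return float(np.mean(errs) - base)

importances = [permutation_importance(j) for j in range(4)]
```

    The resulting scores recover the engineered ordering; in the paper's setting, the analogous ranking indicates which temporal parameters carry predictive information about the final harvest outcome.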

  7. An Introduction to Data Analysis in Asteroseismology

    NASA Astrophysics Data System (ADS)

    Campante, Tiago L.

    A practical guide is presented to some of the main data analysis concepts and techniques currently employed in the asteroseismic study of stars exhibiting solar-like oscillations. The subjects of digital signal processing and spectral analysis are introduced first. These concern the acquisition of continuous physical signals to be subsequently digitally analyzed. A number of specific concepts and techniques relevant to asteroseismology are then presented as we follow the typical workflow of the data analysis process, namely, the extraction of global asteroseismic parameters and individual mode parameters (also known as peak-bagging) from the oscillation spectrum.

  8. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAVs) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are, e.g., recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are changeable objects such as trees, as well as compression or transmission artifacts. To enable the usage of automatic change detection within an interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and changing influence parameters (e.g. image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. 
For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.

  9. Transient Oscillations in Mechanical Systems of Automatic Control with Random Parameters

    NASA Astrophysics Data System (ADS)

    Royev, B.; Vinokur, A.; Kulikov, G.

    2018-04-01

    Transient oscillations in mechanical systems of automatic control with random parameters are a relevant but insufficiently studied issue. In this paper, a modified spectral method was applied to investigate the problem. The nature of the dynamic processes and the phase portraits are analyzed depending on the amplitude and frequency of the external influence. It is evident from the obtained results that the dynamic phenomena occurring in systems with random parameters under external influence are complex and require further investigation.

  10. PLAN-TA9-2443(U), Rev. B Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing Standard Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Geoffrey Wayne

    2016-03-16

    This document identifies the scope and some general procedural steps for performing Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing. This Test Plan describes the requirements, responsibilities, and process for preparing and testing a range of chemical surrogates intended to mimic the energetic response of waste created during processing of legacy nitrate salts. The surrogates developed are expected to bound the thermal and mechanical sensitivity of such waste, allowing for the development of process parameters required to minimize the risk to workers and the public when processing this waste. Such parameters will be based on the worst-case kinetic parameters as derived from APTAC measurements, as well as the development of controls to mitigate sensitivities that may exist due to friction, impact, and spark. This Test Plan will define the scope and technical approach for activities that implement Quality Assurance requirements relevant to formulation and testing.

  11. Supercritical-Multiple-Solvent Extraction From Coal

    NASA Technical Reports Server (NTRS)

    Corcoran, W.; Fong, W.; Pichaichanarong, P.; Chan, P.; Lawson, D.

    1983-01-01

    Large and small molecules dissolve different constituents. An experimental apparatus was used to test supercritical extraction of hydrogen-rich compounds from coal in various organic solvents. In decreasing order of importance, the relevant process parameters were found to be temperature, solvent type, pressure, and residence time.

  12. Comprehensive analysis of line-edge and line-width roughness for EUV lithography

    NASA Astrophysics Data System (ADS)

    Bonam, Ravi; Liu, Chi-Chun; Breton, Mary; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Muthinti, Raja; Patlolla, Raghuveer; Huang, Huai

    2017-03-01

    Pattern transfer fidelity is always a major challenge for any lithography process and needs continuous improvement. Lithographic processes in the semiconductor industry are primarily driven by optical imaging on photosensitive polymeric materials (resists). The quality of pattern transfer can be assessed by quantifying multiple parameters such as feature size uniformity (CD), placement, roughness, and sidewall angles. Roughness in features primarily corresponds to variation of line edge or line width and has gained considerable significance, particularly because shrinking feature sizes are now of the same order as these variations. This has caused downstream processes (etch (RIE), chemical mechanical polish (CMP), etc.) to reconsider their respective tolerance levels. A very important aspect of this work is the relevance of roughness metrology from pattern formation at resist through subsequent processes, particularly electrical validity. A major drawback of the current LER/LWR metric (sigma) is its lack of relevance across multiple downstream processes, which affects material selection at various unit processes. In this work we present a comprehensive assessment of line edge and line width roughness at multiple lithographic transfer processes. To simulate the effect of roughness, a pattern was designed with periodic jogs on the edges of lines, with varying amplitudes and frequencies. Numerous methodologies have been proposed to analyze roughness, and in this work we apply them to programmed roughness structures to assess each technique's sensitivity. This work also aims to identify a methodology that quantifies roughness with relevance across downstream processes.
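
    The sigma-based metric the abstract critiques is easy to state concretely: LER is conventionally three standard deviations of an edge's deviation from its mean line, and LWR three standard deviations of the local linewidth. A sketch with programmed sinusoidal jogs, loosely analogous to the designed roughness structures (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# programmed roughness: sinusoidal jogs of known amplitude and period on both
# edges of a line, in anti-phase, plus measurement noise (all numbers invented)
n = 2048
s = np.arange(n)                             # position along the line (nm)
amp, period = 1.5, 64.0                      # jog amplitude and period (nm)
left = amp * np.sin(2 * np.pi * s / period) + 0.1 * rng.standard_normal(n)
right = 20.0 - amp * np.sin(2 * np.pi * s / period) + 0.1 * rng.standard_normal(n)

ler = 3 * np.std(left)                       # 3-sigma line-edge roughness
width = right - left
lwr = 3 * np.std(width)                      # 3-sigma line-width roughness
```

    Here the anti-phase jogs make LWR roughly twice the single-edge LER, whereas the same edges jogging in phase would leave the width nearly constant. A single sigma per edge cannot distinguish the two cases, which is one reason a sigma-only metric correlates poorly with downstream etch and electrical behaviour.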

  13. Suitability aero-geophysical methods for generating conceptual soil maps and their use in the modeling of process-related susceptibility maps

    NASA Astrophysics Data System (ADS)

    Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid

    2014-05-01

    In past years, large-scale disasters occurred several times in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. Therefore, for the purpose of risk prevention, national and regional authorities require more objective and realistic maps with information about the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. Many proven methods and models (e.g. neural networks, logistic regression, heuristic methods) are available to create such process-related susceptibility maps (e.g. for shallow gravitational mass movements in soil). However, numerous national and international studies show that the suitability of a method depends on the quality of the process data and parameter maps (e.g. Tilch & Schwarz 2011, Schwarz & Tilch 2011). It is therefore important that maps with detailed, process-oriented information on the process-relevant geosphere are also considered. One major disadvantage is that area-wide process-relevant information exists only occasionally. Similarly, in Austria soil maps are often available only for treeless areas. In almost all previous studies, whatever geological and geotechnical maps happened to exist were used, often specially adapted to other issues and objectives. This is one reason why conceptual soil maps must very often be derived from geological maps containing only hard-rock information, which often have rather low quality. Based on such maps, adjacent areas of different geological composition and process-relevant physical properties are delineated with razor-sharp boundaries, which rarely occur in nature. 
In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, aerogeophysical measurements (electromagnetic, radiometric) carried out by helicopter over different regions of Austria were interpreted. Previous studies show that, especially with radiometric measurements, the two-dimensional spatial variability of the process-relevant soil close to the surface can be determined. In addition, the electromagnetic measurements are more important for obtaining three-dimensional information on the deeper geological conditions and for improving area-specific geological knowledge and understanding. The validation of these measurements is done with terrestrial geoelectrical measurements. Both aspects, radiometric and electromagnetic measurements, are therefore important, and the interpretation of the geophysical results can subsequently be used as parameter maps in the modeling of more realistic susceptibility maps for various processes. In this presentation, results of the geophysical measurements, the derived parameter maps, and first process-oriented susceptibility maps for gravitational soil mass movements will be presented. As an example, results obtained with a heuristic method for an area in Vorarlberg (Western Austria) will be shown. References: Schwarz, L. & Tilch, N. (2011): Why are good process data so important for the modelling of landslide susceptibility maps?- EGU-Postersession "Landslide hazard and risk assessment, and landslide management" (NH 3.6), Vienna. [http://www.geologie.ac.at/fileadmin/user_upload/dokumente/pdf/poster/poster_2011_egu_schwarz_tilch_1.pdf] Tilch, N. & Schwarz, L. (2011): Spatial and scale-dependent variability in data quality and their influence on susceptibility maps for gravitational mass movements in soil, modelled by heuristic method.- EGU-Postersession "Landslide hazard and risk assessment, and landslide management" (NH 3.6); Vienna. [http://www.geologie.ac.at/fileadmin/user_upload/dokumente/pdf/poster/poster_2011_egu_tilch_schwarz.pdf]

  14. Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.

    PubMed

    Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas

    2017-01-01

    In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.

  15. Theoretical and experimental studies in ultraviolet solar physics

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Reeves, E. M.

    1975-01-01

    The processes and parameters in atomic and molecular physics that are relevant to solar physics are investigated. The areas covered include: (1) measurement of atomic and molecular parameters that contribute to discrete and continuous sources of opacity and abundance determinations in the sun; (2) line broadening and scattering phenomena; and (3) development of an ion beam spectroscopic source which is used for the measurement of electron excitation cross sections of transition region and coronal ions.

  16. Updated MDRIZTAB Parameters for ACS/WFC

    NASA Astrophysics Data System (ADS)

    Hoffman, S. L.; Avila, R. J.

    2017-03-01

    The Mikulski Archive for Space Telescopes (MAST) pipeline performs geometric distortion corrections, associated image combinations, and cosmic ray rejections with AstroDrizzle. The MDRIZTAB reference table contains a list of relevant parameters that controls this program. This document details our photometric analysis of Advanced Camera for Surveys Wide Field Channel (ACS/WFC) data processed by AstroDrizzle. Based on this analysis, we update the MDRIZTAB table to improve the quality of the drizzled products delivered by MAST.

  17. Prevalence of bacteria resistant to antibiotics and/or biocides on meat processing plant surfaces throughout meat chain production.

    PubMed

    Lavilla Lerma, Leyre; Benomar, Nabil; Gálvez, Antonio; Abriouel, Hikmate

    2013-02-01

    In order to investigate the prevalence of bacteria resistant to biocides and/or antibiotics throughout meat chain production, from sacrifice to the end of the production line, samples from various surfaces of a goat and lamb slaughterhouse representative of the region were analyzed using a culture-dependent approach. Resistant psychrotrophs (n=255 strains), Pseudomonas sp. (n=166 strains), E. coli (n=23 strains), Staphylococcus sp. (n=17 strains) and LAB (n=82, represented mainly by Lactobacillus sp.) were isolated. Resistant psychrotrophs and pseudomonads (47 and 29%, respectively) were frequently detected in almost all areas of the meat processing plant regardless of the antimicrobial used, although there was a clear shift in the spectrum of the other bacterial groups; resistance was therefore examined according to several parameters: the antimicrobial tested, the sampling zone and the bacterial group. Correlations among these parameters were analyzed by principal component analysis (PCA), which determined that quaternary ammonium compounds and hexadecylpyridinium were the most relevant biocides for resistance in Pseudomonas sp., while ciprofloxacin and hexachlorophene were more relevant for psychrotrophs, LAB and, to a lesser extent, Staphylococcus sp. and Escherichia coli. On the other hand, PCA of the sampling zones determined that the sacrifice room (SR) and cutting room (CR), considered the main sources of antibiotic- and/or biocide-resistant bacteria, showed opposite behaviour with respect to which antimicrobials were most relevant for resistance: hexadecylpyridinium, cetrimide and chlorhexidine were the most relevant in the CR, while hexachlorophene, oxonia 6P and PHMG were the most relevant in the SR. In conclusion, rotational use of the relevant biocides as disinfectants in the CR and SR is recommended in an environment that is frequently disinfected. Copyright © 2012 Elsevier B.V. All rights reserved.
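
    The PCA step can be sketched with a centred singular value decomposition; the zones-by-antimicrobials matrix below is synthetic (generated from two latent factors) and merely stands in for the measured tolerance data:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic data: 30 sampling zones x 6 antimicrobial tolerance scores,
# generated from two latent factors plus noise
n_zones, n_agents = 30, 6
factors = rng.standard_normal((n_zones, 2))
loadings = rng.standard_normal((2, n_agents))
X = factors @ loadings + 0.05 * rng.standard_normal((n_zones, n_agents))

Xc = X - X.mean(axis=0)                       # centre each antimicrobial column
U, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = svals ** 2 / np.sum(svals ** 2)   # variance share per component
scores = Xc @ Vt[:2].T                        # zone coordinates on first two PCs
```

    The rows of Vt (the loadings) show which antimicrobials dominate each component; reading them off is how particular biocides or antibiotics get flagged as most relevant for a given zone or bacterial group.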

  18. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    NASA Astrophysics Data System (ADS)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques are needed that operate on radiometrically calibrated data and cluster based on biophysical similarity rather than simply spectral similarity. An automated technique to produce high-resolution, large-area, radiometrically calibrated hyperspectral data sets, using the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically calibrated data allow inter-comparison across the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectrum of each pixel with a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
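
    The bands-to-parameters reduction is, once the basis is fixed, an ordinary linear least-squares fit per pixel. The basis below (a constant, a slope, and Gaussians at assumed feature positions, six parameters in total) is a hedged invention, not the authors' actual set:

```python
import numpy as np

rng = np.random.default_rng(4)

wavelengths = np.linspace(400.0, 900.0, 80)      # 80 spectral bands (nm)

def basis(wl):
    # hypothetical biophysically motivated basis: flat term, slope, and
    # Gaussian features at assumed absorption positions
    centers = [450.0, 550.0, 680.0, 750.0]
    cols = [np.ones_like(wl), (wl - 650.0) / 250.0]
    cols += [np.exp(-((wl - c) ** 2) / (2 * 40.0 ** 2)) for c in centers]
    return np.stack(cols, axis=1)                # 80 bands -> 6 parameters

B = basis(wavelengths)
coeff_true = np.array([0.3, 0.1, -0.15, 0.2, -0.25, 0.1])
spectrum = B @ coeff_true + 0.002 * rng.standard_normal(80)

coeff, *_ = np.linalg.lstsq(B, spectrum, rcond=None)
residual = spectrum - B @ coeff
```

    Per-pixel coefficient vectors like coeff are then histogrammed across the scene, and the natural modes in those histograms define the biophysically similar clusters.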

  19. Properties of pellets manufactured by wet extrusion/spheronization process using kappa-carrageenan: effect of process parameters.

    PubMed

    Thommes, Markus; Kleinebudde, Peter

    2007-11-09

    The aim of this study was to systematically evaluate the pelletization process parameters of kappa-carrageenan-containing formulations. The study dealt with the effect of 4 process parameters--screw speed, number of die holes, friction plate speed, and spheronizer temperature--on the pellet properties of shape, size, size distribution, tensile strength, and drug release. These parameters were varied systematically in a 2^4 full factorial design. In addition, 4 drugs--phenacetin, chloramphenicol, dimenhydrinate, and lidocaine hydrochloride--were investigated under constant process conditions. The most spherical pellets were achieved in a high yield by using a large number of die holes and a high spheronizer speed. There was no relevant influence of the investigated process parameters on the size distribution, mechanical stability, and drug release. The poorly soluble drugs, phenacetin and chloramphenicol, resulted in pellets with adequate shape, size, and tensile strength and a fast drug release. The salts of dimenhydrinate and lidocaine affected pellet shape, mechanical stability, and the drug release properties using an aqueous solution of pH 3 as a granulation liquid. In the case of dimenhydrinate, this was attributed to the ionic interactions with kappa-carrageenan, resulting in a stable matrix during dissolution that did not disintegrate. The effect of lidocaine is comparable to the effect of sodium ions, which suppress the gelling of carrageenan, resulting in pellets with fast disintegration and drug release characteristics. The pellet properties are affected by the process parameters and the active pharmaceutical ingredient used.
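
    Main effects in a 2^4 full factorial are just differences of means between each factor's high and low levels. The coded design and the response model below are hypothetical, with coefficients invented to echo the reported finding that the number of die holes and the spheronizer speed drive pellet shape:

```python
from itertools import product

# 2^4 full factorial in coded units (-1 = low, +1 = high)
factors = ["screw_speed", "die_holes", "friction_plate_speed", "spheronizer_temp"]
design = list(product([-1, 1], repeat=4))

def response(run):
    # hypothetical pellet aspect ratio (1.0 = perfectly spherical)
    screw, holes, plate, temp = run
    return 1.20 - 0.06 * holes - 0.05 * plate + 0.005 * screw + 0.0 * temp

results = [response(run) for run in design]

# main effect of a factor: mean response at +1 minus mean response at -1
effects = {}
for j, name in enumerate(factors):
    hi = [y for run, y in zip(design, results) if run[j] == +1]
    lo = [y for run, y in zip(design, results) if run[j] == -1]
    effects[name] = sum(hi) / 8 - sum(lo) / 8
```

    Each main effect comes out as twice the corresponding coefficient (the levels sit two coded units apart); interactions could be estimated the same way from products of coded columns.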

  20. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
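
    The heart of such an active-learning loop is a Gaussian process regressor whose predictive variance marks where the performance landscape is least known. A one-dimensional sketch in plain NumPy (the "performance" function, kernel hyperparameters and sample points are all invented):

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # unit-amplitude squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# hypothetical performance landscape: runtime vs. one machine parameter
f = lambda x: np.sin(4 * x) + 0.5 * x
X_train = np.array([0.05, 0.2, 0.4, 0.6, 0.8, 0.95])
y_train = f(X_train)

K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))   # jitter for stability
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

X_test = np.linspace(0.0, 1.0, 101)
Ks = rbf(X_test, X_train)
mean = Ks @ alpha                                         # posterior mean
v = np.linalg.solve(L, Ks.T)
var = 1.0 - np.sum(v ** 2, axis=0)                        # posterior variance

next_x = X_test[np.argmax(var)]   # active learning: sample where least certain
```

    In the paper's setting the inputs would be machine and scientific parameters and the response a measured runtime or functional; each new run is drawn at next_x and the model is refit.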

  1. Process parameter dependent growth phenomena of naproxen nanosuspension manufactured by wet media milling.

    PubMed

    Bitterlich, A; Laabs, C; Krautstrunk, I; Dengler, M; Juhnke, M; Grandeury, A; Bunjes, H; Kwade, A

    2015-05-01

    The production of nanosuspensions has proved to be an effective method for overcoming bioavailability challenges of poorly water soluble drugs. Wet milling in stirred media mills and planetary ball mills has become an established top-down method for producing such drug nanosuspensions. The quality of the resulting nanosuspension is determined by the stability against agglomeration on the one hand, and the process parameters of the mill on the other hand. In order to understand the occurring dependencies, a detailed screening study, not only of adequate stabilizers but also of their optimum concentration, was carried out for the active pharmaceutical ingredient (API) naproxen in a planetary ball mill. The type and concentration of the stabilizer had a pronounced influence on the minimum particle size obtained. With the best formulation, the influence of the relevant process parameters on product quality was investigated to determine the grinding limit of naproxen. Besides the well known phenomenon of particle agglomeration, actual naproxen crystal growth and morphology alterations occurred during the process, which has not been observed before. It was shown that, by adjusting the process parameters, those effects could be reduced or eliminated. Thus, besides actual grinding and agglomeration, a process-parameter-dependent ripening of the naproxen particles was identified as a concurrent effect during the naproxen fine grinding process. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model-based approach on the component level is proposed that provides a rapid estimation of formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. 
Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
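    The core of such a meta-model step — sample the expensive simulations once, then query a cheap surrogate — can be sketched with a minimal Gaussian process regression in plain NumPy. The toy "draping" response and all numbers below are invented stand-ins, not values from the paper:

```python
import numpy as np

def rbf_kernel(A, B, length=0.3, var=1.0):
    """Squared-exponential covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_fit_predict(X, y, Xnew, noise=1e-6):
    """Gaussian process regression: posterior mean and variance at Xnew."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xnew, X)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = (rbf_kernel(Xnew, Xnew).diagonal()
           - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T)))
    return mean, var

# Pre-sampled "draping" results: formability score over two geometry
# parameters (e.g. corner radius, draw depth) -- a stand-in for FE data
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

mean_new, var_new = gp_fit_predict(X, y, np.array([[0.5, 0.5]]))
mean_train, _ = gp_fit_predict(X, y, X[:1])   # query a sampled geometry
```

    At a previously sampled geometry the surrogate reproduces the stored simulation result almost exactly, while new parameter combinations are estimated in microseconds — which is what makes design exploration on top of the meta-model inexpensive.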

  3. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. This, however, might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100).
    Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km². This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially, to approximately 25 for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
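    The screening idea — perturb one parameter at a time, rank parameters by their mean absolute elementary effect, and keep only the informative ones for the expensive Sobol step — can be illustrated with a small sketch. The toy model and all constants below are hypothetical, not NOAH-MP:

```python
import numpy as np

def elementary_effects(model, n_params, n_traj=20, delta=0.1, rng=None):
    """Morris-style screening: mean absolute elementary effect per parameter,
    from one-at-a-time steps of size delta around random base points."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mu_star = np.zeros(n_params)
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)
        f0 = model(x)
        for i in range(n_params):
            xi = x.copy()
            xi[i] += delta
            mu_star[i] += abs(model(xi) - f0) / delta
    return mu_star / n_traj

# Toy response with two informative parameters and one near-inert one
def toy_model(p):
    return 4.0 * p[0] + 2.0 * p[1] ** 2 + 0.001 * p[2]

mu = elementary_effects(toy_model, n_params=3)
# screening keeps p0 and p1, discards p2 before any expensive Sobol analysis
```

    Each parameter costs only one extra model run per trajectory, which is why screening scales to ~100 parameters where a full Sobol analysis would not.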

  4. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
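    The algorithm's central update — instantaneous steepest descent on the point process log likelihood, whose gradient per time bin is dN − λΔ — can be sketched for the simplest case of a single drifting log-rate parameter. The rates, learning rate and bin width below are illustrative choices, not the paper's hippocampal data:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                                  # 1 ms time bins
steps = int(60.0 / dt)
true_rate = np.linspace(5.0, 20.0, steps)   # firing rate drifts 5 -> 20 Hz

theta = np.log(5.0)                         # estimate of the log rate
eps = 0.02                                  # learning-rate of the filter
est = np.empty(steps)
for k in range(steps):
    lam = np.exp(theta)
    spike = rng.random() < true_rate[k] * dt      # Bernoulli approx. per bin
    # instantaneous point-process log-likelihood gradient: dN - lam*dt
    theta += eps * (spike - lam * dt)
    est[k] = np.exp(theta)
# the estimate updates every millisecond and tracks the drifting rate
```

    Because the update runs once per bin, the estimate adapts on a millisecond time scale rather than waiting for a histogram window to fill.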

  5. Calculation tool for transported geothermal energy using two-step absorption process

    DOE Data Explorer

    Kyle Gluesenkamp

    2016-02-01

    This spreadsheet allows the user to calculate parameters relevant to techno-economic performance of a two-step absorption process to transport low temperature geothermal heat some distance (1-20 miles) for use in building air conditioning. The parameters included are (1) energy density of aqueous LiBr and LiCl solutions, (2) transportation cost of trucking solution, and (3) equipment cost for the required chillers and cooling towers in the two-step absorption approach. More information is available in the included public report: "A Technical and Economic Analysis of an Innovative Two-Step Absorption System for Utilizing Low-Temperature Geothermal Resources to Condition Commercial Buildings"

  6. Diversification versus specialization in complex ecosystems.

    PubMed

    Di Clemente, Riccardo; Chiarotti, Guido L; Cristelli, Matthieu; Tacchella, Andrea; Pietronero, Luciano

    2014-01-01

    By analyzing the distribution of revenues across the production sectors of quoted firms, we suggest a novel dimension that drives the firms' diversification process at the country level. The data show a nontrivial macro-regional clustering of the diversification process, which underlines the relevance of geopolitical environments in determining the microscopic dynamics of economic entities. These findings demonstrate the possibility of singling out, in complex ecosystems, those micro-features that emerge at macro-levels, which could be of particular relevance for decision-makers in selecting the appropriate parameters to be acted upon in order to achieve desirable results. The understanding of this micro-macro information exchange is further deepened through the introduction of a simplified dynamic model.

  7. Diversification versus Specialization in Complex Ecosystems

    PubMed Central

    Di Clemente, Riccardo; Chiarotti, Guido L.; Cristelli, Matthieu; Tacchella, Andrea; Pietronero, Luciano

    2014-01-01

    By analyzing the distribution of revenues across the production sectors of quoted firms, we suggest a novel dimension that drives the firms' diversification process at the country level. The data show a nontrivial macro-regional clustering of the diversification process, which underlines the relevance of geopolitical environments in determining the microscopic dynamics of economic entities. These findings demonstrate the possibility of singling out, in complex ecosystems, those micro-features that emerge at macro-levels, which could be of particular relevance for decision-makers in selecting the appropriate parameters to be acted upon in order to achieve desirable results. The understanding of this micro-macro information exchange is further deepened through the introduction of a simplified dynamic model. PMID:25384059

  8. Stimulus Sensitivity of a Spiking Neural Network Model

    NASA Astrophysics Data System (ADS)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed, and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet close to criticality, for a range of biologically relevant parameters.
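    A minimal discrete-time simulation of a linear Hawkes process with an exponential kernel illustrates the sub-critical regime the paper analyses: with branching ratio n = α/β < 1, the stationary rate stays finite at μ/(1 − n). The parameter values below are illustrative only, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001
T = 200.0
mu = 10.0                    # baseline rate (Hz)
alpha, beta = 25.0, 50.0     # excitation kernel alpha * exp(-beta * t)
n_branch = alpha / beta      # branching ratio 0.5 < 1: sub-critical

h = 0.0                      # self-excitation, decays exponentially
count = 0
for _ in range(int(T / dt)):
    lam = mu + h                         # current intensity
    if rng.random() < lam * dt:          # thinning-style Bernoulli bin
        count += 1
        h += alpha                       # each spike excites the network
    h *= np.exp(-beta * dt)
mean_rate = count / T
# sub-critical stationary rate: mu / (1 - n_branch) = 20 Hz here
```

    As n approaches 1 the amplification factor 1/(1 − n) diverges, which is the sense in which network response grows near criticality.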

  9. Ocean acidification in the coastal zone from an organism's perspective: multiple system parameters, frequency domains, and habitats.

    PubMed

    Waldbusser, George G; Salisbury, Joseph E

    2014-01-01

    Multiple natural and anthropogenic processes alter the carbonate chemistry of the coastal zone in ways that either exacerbate or mitigate ocean acidification effects. Freshwater inputs and multiple acid-base reactions change carbonate chemistry conditions, sometimes synergistically. The shallow nature of these systems results in strong benthic-pelagic coupling, and marine invertebrates at different life history stages rely on both benthic and pelagic habitats. Carbonate chemistry in coastal systems can be highly variable, responding to processes with temporal modes ranging from seconds to centuries. Identifying scales of variability relevant to levels of biological organization requires a fuller characterization of both the frequency and magnitude domains of processes contributing to or reducing acidification in pelagic and benthic habitats. We review the processes that contribute to coastal acidification with attention to timescales of variability and habitats relevant to marine bivalves.

  10. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is developing and implementing an uncertainty estimation methodology, for use in future site assessments and analyses made with the Hanford site-wide groundwater model, that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification, to identify and document the major features and assumptions of each conceptual model; the process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation, to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, carried out only for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization, to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty, to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) Propagation of Uncertainty, to propagate parameter uncertainties through the model (e.g., by first-order second-moment methods if applicable, or by a Monte Carlo approach) to determine the uncertainty in the model predictions of interest.
    5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework is the identification, enumeration, and documentation of all the assumptions: those made during conceptual model development, those required by the mathematical model, those required by the numerical model, those made during the spatial and temporal discretization process, those needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
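    Step c) above — Monte Carlo propagation of correlated parameter uncertainty into exceedance probabilities (CCDFs) — can be sketched as follows. The two-parameter Darcy-flux "model" and all statistics are placeholders for illustration, not Hanford site values:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000

# b) characterize parameter uncertainty: joint pdf of log-conductivity and
#    hydraulic gradient, with a correlation term (illustrative values)
mean = np.array([-11.0, 1e-3])
cov = np.array([[0.25, 1e-5],
                [1e-5, 1e-8]])
lnK, grad = rng.multivariate_normal(mean, cov, size=N).T

# c) propagate through a placeholder model: Darcy flux q = K * i
q = np.exp(lnK) * grad

# summarize the prediction uncertainty as a CCDF, P(q > level)
levels = np.percentile(q, [50, 90, 99])
ccdf = [(q > lv).mean() for lv in levels]
```

    In the full framework one such CCDF is produced per plausible ACM/scenario combination, and the family of CCDFs represents the combined uncertainty estimate.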

  11. Coffee husk composting: An investigation of the process using molecular and non-molecular tools

    PubMed Central

    Shemekite, Fekadu; Gómez-Brandón, María; Franke-Whittle, Ingrid H.; Praehauser, Barbara; Insam, Heribert; Assefa, Fassil

    2014-01-01

    Various parameters were measured during a 90-day composting process of coffee husk with cow dung (Pile 1), with fruit/vegetable wastes (Pile 2) and coffee husk alone (Pile 3). Samples were collected on days 0, 32 and 90 for chemical and microbiological analyses. The C/N ratios of Piles 1 and 2 decreased significantly over the 90 days. The highest bacterial counts at the start of the process and the highest actinobacterial counts at the end of the process (Piles 1 and 2) indicated a microbial succession with concomitant production of compost-relevant enzymes. Denaturing gradient gel electrophoresis of rDNA and COMPOCHIP microarray analysis indicated distinctive community shifts during the composting process, with day 0 samples clustering separately from the 32- and 90-day samples. This study, using a multi-parameter approach, has revealed differences in the quality and species diversity of the three composts. PMID:24369846

  12. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, forms a complex pattern of immobile DNAPL saturation, dissolves into the groundwater and forms a contaminant plume, and slowly depletes and biodegrades in the long term. In industrial countries the number of such contaminated sites is so high that a ranking from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model set-ups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes.
    Under these changes, the probability density functions show strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.

  13. On-line Monitoring for Cutting Tool Wear Condition Based on the Parameters

    NASA Astrophysics Data System (ADS)

    Han, Fenghua; Xie, Feng

    2017-07-01

    In cutting processes, it is very important to monitor the working state of the tool. Based on acceleration signals acquired at constant speed, time-domain and frequency-domain analysis of relevant indicators enables on-line monitoring of the tool wear condition. The analysis results show that the method can effectively judge the tool wear condition during machining. It has certain application value.
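    A typical set of such time- and frequency-domain indicators (RMS, crest factor, kurtosis, spectral centroid) computed from an acceleration signal might look like the sketch below. The synthetic "fresh" and "worn" signals are invented for illustration and are not the paper's measurement data:

```python
import numpy as np

def wear_indicators(accel, fs):
    """Basic time- and frequency-domain indicators of an acceleration signal."""
    rms = np.sqrt(np.mean(accel ** 2))
    peak = np.max(np.abs(accel))
    kurt = np.mean((accel - accel.mean()) ** 4) / np.var(accel) ** 2
    spec = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(accel.size, 1.0 / fs)
    centroid = (freqs * spec).sum() / spec.sum()
    return {"rms": rms, "peak": peak, "crest": peak / rms,
            "kurtosis": kurt, "spectral_centroid": centroid}

# Synthetic signals: a 120 Hz component plus noise for a fresh tool; extra
# broadband energy and periodic impacts for a worn tool
fs = 10_000
t = np.arange(0, 1, 1.0 / fs)
rng = np.random.default_rng(4)
fresh = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
worn = fresh + 0.6 * rng.standard_normal(t.size)
worn[::500] += 5.0                        # impulsive impacts

ind_fresh = wear_indicators(fresh, fs)
ind_worn = wear_indicators(worn, fs)
# wear shows up here as higher RMS, kurtosis and spectral centroid
```

    Thresholds on such indicators, tracked over machining time, are what an on-line monitor would use to flag a worn tool.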

  14. Process influences and correction possibilities for high precision injection molded freeform optics

    NASA Astrophysics Data System (ADS)

    Dick, Lars; Risse, Stefan; Tünnermann, Andreas

    2016-08-01

    Modern injection molding processes offer a cost-efficient method for manufacturing high precision plastic optics for high volume applications. Besides the form deviation of molded freeform optics, internal material stress is a relevant influencing factor for the functionality of a freeform optic in an optical system. This paper illustrates the dominant influence parameters of an injection molding process with respect to form deviation and internal material stress, based on a freeform demonstrator geometry. Furthermore, a deterministic and efficient way of 3D mold correction for systematic, asymmetrical shrinkage errors is shown to reach micrometer-range shape accuracy at diameters up to 40 mm. In a second case, a stress-optimized parameter combination using unusual molding conditions was 3D-corrected to obtain high precision, low stress freeform polymer optics.

  15. Multi-surface topography targeted plateau honing for the processing of cylinder liner surfaces of automotive engines

    NASA Astrophysics Data System (ADS)

    Lawrence, K. Deepak; Ramamoorthy, B.

    2016-03-01

    Cylinder bores of automotive engines are 'engineered' surfaces that are processed using a multi-stage honing process to generate multiple layers of micro geometry meeting the different functional requirements of the piston assembly system. The final processed surfaces should comply with several surface topographic specifications that are relevant for the good tribological performance of the engine. Selecting the process parameters in the three stages of honing so as to obtain multiple surface topographic characteristics simultaneously within the specification tolerance is an important module of process planning and often poses a challenging task for process engineers. This paper presents a strategy combining robust process design and gray-relational analysis to evolve the operating levels of honing process parameters in the rough, finish and plateau honing stages, targeted at meeting multiple surface topographic specifications on the final running surface of the cylinder bores. Honing experiments were conducted in three stages, namely rough, finish and plateau honing, on cast iron cylinder liners by varying four honing process parameters: rotational speed, oscillatory speed, pressure and honing time. Abbott-Firestone curve based functional parameters (Rk, Rpk, Rvk, Mr1 and Mr2), coupled with mean roughness depth (Rz, DIN/ISO) and honing angle, were measured and identified as the surface quality performance targets to be achieved. The experimental results show that the proposed approach is effective in generating cylinder liner surfaces that simultaneously meet the explicit surface topographic specifications currently practiced by the industry.
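    The gray-relational step — normalise each quality target, measure each run's deviation from the ideal, and collapse the deviations into a single grade per run — can be sketched as below. The three hypothetical runs and target columns are illustrative, not the paper's experimental design:

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Gray-relational analysis: collapse several quality targets
    (columns) into one grade per experimental run (rows)."""
    R = np.asarray(responses, float)
    norm = np.empty_like(R)
    for j in range(R.shape[1]):               # normalise each target to [0, 1]
        lo, hi = R[:, j].min(), R[:, j].max()
        if larger_is_better[j]:
            norm[:, j] = (R[:, j] - lo) / (hi - lo)
        else:
            norm[:, j] = (hi - R[:, j]) / (hi - lo)
    delta = 1.0 - norm                        # deviation from the ideal run
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)                  # gray-relational grade per run

# Three hypothetical honing runs; columns = (Rk, Rpk, honing-angle error),
# all treated as smaller-the-better for this illustration
runs = [[1.8, 0.40, 3.0],
        [1.2, 0.25, 1.5],
        [2.5, 0.60, 4.0]]
grade = grey_relational_grade(runs, larger_is_better=[False, False, False])
best_run = int(np.argmax(grade))   # run 1 dominates on every target here
```

    Ranking runs by this single grade is what lets a multi-response honing experiment be optimised with one scalar criterion.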

  16. Free Electron coherent sources: From microwave to X-rays

    NASA Astrophysics Data System (ADS)

    Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio

    2018-04-01

    The term Free Electron Laser (FEL) will be used in this paper to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, unbound from atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum from microwaves to X-rays, and compares the analogies with conventional laser sources. We develop a point of view that allows a unified analytical treatment of these devices through the introduction of appropriate global variables (e.g. gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding a very effective way to determine the relevant design parameters. The paper also looks at more speculative aspects of FEL physics, such as the relevance of quantum effects in the lasing process.

  17. Electric field control in DC cable test termination by nano silicone rubber composite

    NASA Astrophysics Data System (ADS)

    Song, Shu-Wei; Li, Zhongyuan; Zhao, Hong; Zhang, Peihong; Han, Baozhong; Fu, Mingli; Hou, Shuai

    2017-07-01

    The electric field distributions in a high voltage direct current cable termination are investigated with a silicone rubber nanocomposite as the electric stress control insulator. The nanocomposite is composed of silicone rubber, nanoscale carbon black and graphitic carbon. The experimental results show that the physical parameters of the nanocomposite, such as the thermal activation energy and the nonlinearity-relevant coefficient, can be manipulated by varying the proportions of the nanoscale fillers. The numerical simulation shows that a safe electric field distribution calls for a certain parametric region of the thermal activation energy and nonlinearity-relevant coefficient. Outside the safe parametric region, a local maximum of the electric field strength appears around the stress cone in the termination insulator, promoting breakdown of the cable termination. In the presence of a temperature gradient, the thermal activation energy and the nonlinearity-relevant coefficient work as complementary factors to produce a reasonable electric field distribution. The field maximum in the termination insulator shows complicated variation in the transient processes. The stationary field distribution favors an increase of the nonlinearity-relevant coefficient; for the transient field distribution during a negative lightning impulse, however, an optimized value of the nonlinearity-relevant coefficient is necessary to equalize the electric field in the termination.

  18. 40 CFR Appendix A to Subpart L - Criteria for Evaluating a State's Proposed NEPA-Like Process

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... consider: (1) Designation of a study area comparable to the final system; (2) A range of feasible... conditions; (5) Land use and other social parameters including relevant recreation and open-space..., institutional, and industrial) within the project study area; and (8) Other anticipated public works projects...

  19. 40 CFR Appendix A to Subpart L - Criteria for Evaluating a State's Proposed NEPA-Like Process

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... consider: (1) Designation of a study area comparable to the final system; (2) A range of feasible... conditions; (5) Land use and other social parameters including relevant recreation and open-space..., institutional, and industrial) within the project study area; and (8) Other anticipated public works projects...

  20. 40 CFR Appendix A to Subpart L of... - Criteria for Evaluating a State's Proposed NEPA-Like Process

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the SERP will adequately consider: (1) Designation of a study area comparable to the final system; (2... impacts; (4) Present and future conditions; (5) Land use and other social parameters including relevant... (residential, commercial, institutional, and industrial) within the project study area; and (8) Other...

  1. 40 CFR Appendix A to Subpart L of... - Criteria for Evaluating a State's Proposed NEPA-Like Process

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the SERP will adequately consider: (1) Designation of a study area comparable to the final system; (2... impacts; (4) Present and future conditions; (5) Land use and other social parameters including relevant... (residential, commercial, institutional, and industrial) within the project study area; and (8) Other...

  2. 40 CFR Appendix A to Subpart L - Criteria for Evaluating a State's Proposed NEPA-Like Process

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... consider: (1) Designation of a study area comparable to the final system; (2) A range of feasible... conditions; (5) Land use and other social parameters including relevant recreation and open-space..., institutional, and industrial) within the project study area; and (8) Other anticipated public works projects...

  3. Honing process optimization algorithms

    NASA Astrophysics Data System (ADS)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are described, and important concepts are emphasized: the optimization task for honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose. It is noted that the reliability of the mathematical model determines the quality of the honing process control parameters. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide region and can be used to operate the CNC machine CC743.

  4. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    PubMed

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It has been developed from data collected in a survey of stakeholders and ministries that determined the resulting effects and adapted measures. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, for describing the state development of sewer systems; ii) WWTP, for considering process parameters of wastewater treatment plants (WWTP); and iii) Cost Accounting, for calculating expenses in the cost categories and the resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. Calculated process and economic parameters show high accuracy compared to measured parameters and actual expenses. The model is thus proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  6. Detection of cylinder unbalance from Bayesian inference combining cylinder pressure and vibration block measurement in a Diesel engine

    NASA Astrophysics Data System (ADS)

    Nguyen, Emmanuel; Antoni, Jerome; Grondin, Olivier

    2009-12-01

    In the automotive industry, the necessary reduction of pollutant emissions for new Diesel engines requires control of combustion events. This control is efficient provided that combustion parameters such as combustion occurrence and combustion energy are relevant. Combustion parameters are traditionally measured with cylinder pressure sensors. However, this kind of sensor is expensive and has a limited lifetime. This paper therefore proposes to use only one cylinder pressure sensor on a multi-cylinder engine and to extract the combustion parameters of the other cylinders from low-cost knock sensors. Knock sensors measure the vibration circulating in the engine block; hence they do not only contain information on the combustion processes but are also contaminated by other mechanical noises that disturb the signal. The question is how to combine the information coming from one cylinder pressure sensor and the knock sensors to obtain the most relevant combustion parameters in all engine cylinders. In this paper, the issue is addressed through the Bayesian inference formalism. In the cylinder where a cylinder pressure sensor is mounted, combustion parameters are measured directly; in the other cylinders, they are inferred via Bayesian inference. Experimental results obtained on a four-cylinder Diesel engine demonstrate the effectiveness of the proposed algorithm for that purpose.
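    While the paper's full inference combines pressure and knock-sensor signals, the underlying principle — precision-weighted fusion of a prior belief with a noisy indirect measurement — reduces in the Gaussian case to a one-line conjugate update. All numbers below are hypothetical, chosen only to illustrate the mechanics:

```python
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: fuse a prior belief with a noisy observation."""
    gain = prior_var / (prior_var + obs_var)        # Kalman-style gain
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Prior on a combustion parameter (e.g. phasing, deg CA) for an
# uninstrumented cylinder, transferred from the pressure-sensed cylinder
prior_mean, prior_var = 8.0, 4.0
# Noisier, knock-sensor-derived estimate for the same cylinder
obs, obs_var = 10.0, 1.0

post_mean, post_var = gaussian_posterior(prior_mean, prior_var, obs, obs_var)
# posterior lies between prior and observation, weighted by their precisions
```

    The posterior variance is always smaller than either input variance, which is why combining the cheap knock sensor with the single pressure sensor improves the per-cylinder estimates.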

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, M; Schmidt, M; Knutson, N

    Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicists’ time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included: MLC positions, X&Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI interface was developed to generate a side-by-side display of the aggregated machine parameter values for each field, presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection needed for the physicist to conduct a second-check, yielding an optimized second-check workflow that was both more user-friendly and time-efficient. Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and focus the second-check efforts on assessing patient-specific plan quality.
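
The core of such a side-by-side second-check can be sketched as a tolerance-aware comparison of per-field parameter dictionaries. The parameter names and values below are hypothetical; the real tool reads them from the DICOM RT Plan and the MOSAIQ SQL database rather than from literals:

```python
# Hypothetical sketch of the plan-vs-R&V comparison step: each field's machine
# parameters are reduced to a dict, then compared key by key.
def second_check(tps, rv, tol=1e-3):
    """Return a list of (parameter, tps_value, rv_value) mismatches."""
    mismatches = []
    for key in sorted(set(tps) | set(rv)):
        a, b = tps.get(key), rv.get(key)
        if isinstance(a, float) and isinstance(b, float):
            if abs(a - b) > tol:          # numeric values compared with a tolerance
                mismatches.append((key, a, b))
        elif a != b:                      # strings / missing keys compared exactly
            mismatches.append((key, a, b))
    return mismatches

tps_field = {"MU": 120.0, "gantry": 180.0, "collimator": 10.0, "energy": "6X"}
rv_field = {"MU": 120.0, "gantry": 180.0, "collimator": 15.0, "energy": "6X"}
issues = second_check(tps_field, rv_field)
```

Only genuinely discrepant parameters are surfaced, which is what lets the physicist skip the window-shuffling described above.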

  8. Spectral Induced Polarization approaches to characterize reactive transport parameters and processes

    NASA Astrophysics Data System (ADS)

    Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.

    2017-12-01

    For almost a decade, geophysical methods have been explored for their potential to characterize reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double layer properties of geological materials and the critical role of the electrical double layer in reactive transport processes such as adsorption. In this presentation, we discuss results from several recent studies performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For well-sorted samples, we find that grain size (as well as permeability in some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including different dissolved ions in the pore fluid (Na+, Cu2+, Zn2+, Pb2+) owing to their different adsorption behavior. We also show the relevance of our approach for characterizing the fluid/matrix interaction for various organic contents (wetting and non-wetting oils).
We also discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to use SIP information to constrain mechanistic flow and transport models, and the potential to apply some of the approaches to field scale applications.
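
SIP spectra are commonly parameterized with a Cole-Cole-type relaxation model; whether these particular studies used it is not stated, so the sketch below is a generic illustration (rho0: DC resistivity, m: chargeability, tau: relaxation time linked to a characteristic grain or pore size, c: frequency exponent):

```python
import cmath

def cole_cole(freq, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole relaxation model at frequency freq [Hz]."""
    iwt = (2j * cmath.pi * freq * tau) ** c
    return rho0 * (1 - m * (1 - 1 / (1 + iwt)))

# Limits: |rho| -> rho0 at low frequency, rho0*(1 - m) at high frequency,
# with the phase peak near f = 1/(2*pi*tau).
rho_low = cole_cole(1e-6, 100.0, 0.1, 0.01, 0.5)
rho_high = cole_cole(1e9, 100.0, 0.1, 0.01, 0.5)
```

Fitting tau across samples is one common route from SIP spectra to a grain-size (and, with further assumptions, permeability) estimate.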

  9. Thermomechanical Simulation of the Splashing of Ceramic Droplets on a Rigid Substrate

    NASA Astrophysics Data System (ADS)

    Bertagnolli, Mauro; Marchese, Maurizio; Jacucci, Gianni; St. Doltsinis, Ioannis; Noelting, Swen

    1997-05-01

    Finite element simulation techniques have been applied to the spreading of single liquid ceramic droplets impacting on a flat cold surface under plasma-spraying conditions. The goal of the present investigation is to predict the geometrical form of the splat as a function of technological process parameters, such as initial temperature and velocity, and to follow the thermal field developing in the droplet up to solidification. A non-linear finite element programming system has been utilized to model the complex physical phenomena involved in the impact process. The Lagrangian description of the motion of the viscous melt in the drops, as constrained by surface tension and the developing contact with the target, has been coupled to an analysis of transient thermal phenomena, accounting also for the solidification of the material. The present study refers to a parameter spectrum drawn from experimental data of technological relevance. The significance of process parameters for the most pronounced physical phenomena is discussed, as are the consequences of the modelling choices. We also consider the issue of solidification and touch on the effect of partially unmelted material.

  10. In-line monitoring of the coffee roasting process with near infrared spectroscopy: Measurement of sucrose and colour.

    PubMed

    Santos, João Rodrigo; Viegas, Olga; Páscoa, Ricardo N M J; Ferreira, Isabel M P L V O; Rangel, António O S S; Lopes, João Almeida

    2016-10-01

    In this work, a real-time, in-situ analytical tool based on near infrared spectroscopy is proposed to predict two of the most relevant coffee parameters during the roasting process: sucrose content and colour. The methodology was developed taking into consideration different coffee varieties (Arabica and Robusta), coffee origins (Brazil, East Timor, India and Uganda) and roasting procedures (slow and fast). All near infrared spectroscopy-based calibrations were developed using partial least squares regression. The results proved the suitability of this methodology, as demonstrated by a range-error-ratio higher than 10 and a coefficient of determination higher than 0.85 for all modelled parameters. The relationship between sucrose and colour development during the roasting process is further discussed, in light of designing, in real time, coffee products with similar visual appearance and distinct organoleptic profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
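
The two acceptance metrics quoted in the abstract, the range-error-ratio (RER, reference range divided by the prediction error) and the coefficient of determination, can be computed as below; the reference and predicted values are invented for illustration:

```python
import math

def rer_and_r2(y_ref, y_pred):
    """Return (range-error-ratio, R^2) for reference vs. predicted values."""
    n = len(y_ref)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_ref, y_pred))
    rmse = math.sqrt(ss_res / n)
    rer = (max(y_ref) - min(y_ref)) / rmse   # range of reference values over RMSE
    mean = sum(y_ref) / n
    ss_tot = sum((a - mean) ** 2 for a in y_ref)
    return rer, 1.0 - ss_res / ss_tot

# Hypothetical sucrose reference values vs. NIR-predicted values.
rer, r2 = rer_and_r2([2.0, 4.0, 6.0, 8.0, 10.0], [2.1, 3.9, 6.2, 7.8, 10.1])
```

A calibration passing the abstract's thresholds would need rer > 10 and r2 > 0.85, as this toy example does.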

  11. Disentangling inhibition-based and retrieval-based aftereffects of distractors: Cognitive versus motor processes.

    PubMed

    Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian

    2018-05-01

    Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, two approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review, see Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to show that both processes exist, but that they reflect distractor processing at different levels of representation: distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential-Gaussian (ex-Gaussian) reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
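
One common way to obtain the ex-Gaussian parameters is the method of moments, with τ recovered from the third central moment; this is only one possible fitting approach and not necessarily the authors' procedure. The sketch recovers known parameters from simulated RTs:

```python
# Method-of-moments recovery of ex-Gaussian parameters (mu, sigma, tau) from
# simulated reaction times; illustrative, not the authors' fitting procedure.
import random
random.seed(0)

def simulate_rt(n, mu, sigma, tau):
    """Draw n ex-Gaussian RTs: Gaussian(mu, sigma) plus Exponential(mean=tau)."""
    return [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
            for _ in range(n)]

def exgauss_moments(rts):
    """Estimate (mu, sigma, tau) from the first three central moments."""
    n = len(rts)
    mean = sum(rts) / n
    m2 = sum((x - mean) ** 2 for x in rts) / n
    m3 = sum((x - mean) ** 3 for x in rts) / n
    tau = (m3 / 2.0) ** (1.0 / 3.0)       # third central moment of ex-Gaussian is 2*tau^3
    sigma2 = max(m2 - tau ** 2, 0.0)      # variance is sigma^2 + tau^2
    return mean - tau, sigma2 ** 0.5, tau # mean is mu + tau

rts = simulate_rt(50000, 400.0, 40.0, 100.0)
mu_hat, sigma_hat, tau_hat = exgauss_moments(rts)
```

With enough trials the estimates land close to the generating values, which is the property the abstract's dissociation logic relies on.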

  12. Utilizing Controlled Vibrations in a Microgravity Environment to Understand and Promote Microstructural Homogeneity During Floating-Zone Crystal Growth

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.

    1999-01-01

    It has been demonstrated in floating-zone configurations utilizing silicone oil and nitrate salts that mechanically induced vibration effectively minimizes detrimental, gravity independent, thermocapillary flow. The processing parameters leading to crystal improvement and aspects of the on-going modeling effort are discussed. Plans for applying the crystal growth technique to commercially relevant materials, e.g., silicon, as well as the value of processing in a microgravity environment are presented.

  13. Prestimulus influences on auditory perception from sensory representations and decision processes.

    PubMed

    Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph

    2016-04-26

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.

  14. Prestimulus influences on auditory perception from sensory representations and decision processes

    PubMed Central

    McNair, Steven W.

    2016-01-01

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task. PMID:27071110

  15. Overview of Icing Physics Relevant to Scaling

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes, and the associated similarity parameters, that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading-edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated, a set of equations is produced that can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be gained by observing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.
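
As one concrete example of matching a similarity parameter, the accumulation parameter is often written Ac = (LWC · V · t) / (ρ_ice · c); equating reference and scale values then fixes, say, the scale spray time. The form and numbers below are illustrative assumptions, not values from the paper:

```python
# Illustrative accumulation-parameter matching (a commonly quoted form;
# symbols and example values are assumptions, not from the paper).
def accumulation_parameter(lwc, v, t, c, rho_ice=917.0):
    """Ac from liquid water content [kg/m^3], speed [m/s], time [s], leading-edge size [m]."""
    return (lwc * v * t) / (rho_ice * c)

def scale_time(ac_ref, lwc_s, v_s, c_s, rho_ice=917.0):
    """Spray time for the scale test that reproduces the reference Ac."""
    return ac_ref * rho_ice * c_s / (lwc_s * v_s)

ac_ref = accumulation_parameter(0.5e-3, 67.0, 600.0, 0.05)  # reference condition
t_s = scale_time(ac_ref, 1.0e-3, 100.0, 0.025)              # half-size model, denser cloud
```

Solving the full set of scaling equations proceeds the same way, one similarity parameter at a time.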

  16. Thermal effects of laser marking on microstructure and corrosion properties of stainless steel.

    PubMed

    Švantner, M; Kučera, M; Smazalová, E; Houdková, Š; Čerstvý, R

    2016-12-01

    Laser marking is an advanced technique used for modification of surface optical properties. This paper presents research on the influence of laser marking on the corrosion properties of stainless steel. Processes during the laser beam-surface interaction cause structure and color changes and can also be responsible for reduction of corrosion resistance of the surface. Corrosion tests, roughness, microscopic, energy dispersive x-ray, grazing incidence x-ray diffraction, and ferrite content analyses were carried out. It was found that increasing heat input is the most crucial parameter regarding the degradation of corrosion resistance of stainless steel. Other relevant parameters include the pulse length and pulse frequency. The authors found a correlation between laser processing parameters, grazing incidence x-ray measurement, ferrite content, and corrosion resistance of the affected surface. Possibilities and limitations of laser marking of stainless steel in the context of the reduction of its corrosion resistance are discussed.

  17. Surgeon Reported Outcome Measure for Spine Trauma: An International Expert Survey Identifying Parameters Relevant for the Outcome of Subaxial Cervical Spine Injuries.

    PubMed

    Sadiqi, Said; Verlaan, Jorrit-Jan; Lehr, A Mechteld; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, S; Schnake, Klaus J; Vaccaro, Alexander R; Oner, F Cumhur

    2016-12-15

    International web-based survey. To identify clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. Although an outcome instrument that reflects the patients' perspective is imperative, there is also a need for a surgeon reported outcome measure that adequately reflects the clinicians' perspective. A cross-sectional online survey was conducted among a selected number of spine surgeons from all five AOSpine International world regions. They were asked to rate the relevance of a compilation of 21 parameters, both for the short term (3 mo-2 yr) and the long term (≥2 yr), on a five-point scale. The responses were analyzed using descriptive statistics, frequency analysis, and the Kruskal-Wallis test. Of the 279 AOSpine International and International Spinal Cord Society members who received the survey, 108 (38.7%) participated in the study. Ten parameters were identified as relevant for both the short term and the long term by at least 70% of the participants. Neurological status, implant failure within 3 months, and patient satisfaction were most relevant. Bony fusion was the only parameter specific to the long term, whereas five parameters were identified for the short term only. The remaining six parameters were not deemed relevant. Minor differences were observed when analyzing the responses by world region or by the spine surgeons' degree of experience. The perspective of an international sample of highly experienced spine surgeons was explored regarding the most relevant parameters to evaluate and predict outcomes of subaxial cervical spine trauma patients. These results form the basis for the development of a disease-specific surgeon reported outcome measure, which will be a helpful tool in research and clinical practice. Level of Evidence: 4.

  18. Laser cutting: industrial relevance, process optimization, and laser safety

    NASA Astrophysics Data System (ADS)

    Haferkamp, Heinz; Goede, Martin; von Busse, Alexander; Thuerk, Oliver

    1998-09-01

    Compared with other technologically relevant laser machining processes, laser cutting is to date the most frequently used application. With respect to the large number of possible fields of application and the variety of different materials that can be machined, this technology has reached a stable position within the world market of material processing. The achievable machining quality in laser beam cutting is influenced by various laser and process parameters. Process-integrated quality techniques have to be applied to ensure high-quality products and a cost-effective use of the laser manufacturing plant. Therefore, rugged and versatile online process monitoring techniques at an affordable price would be desirable. Methods for the characterization of single plant components (e.g. laser source and optical path) have to be replaced by an omnivalent control system, capable of process data acquisition and analysis as well as the automatic adaptation of machining and laser parameters to changes in process and ambient conditions. At the Laser Zentrum Hannover eV, locally highly resolved thermographic measurements of the temperature distribution within the processing zone are performed using cost-effective measuring devices. Characteristic values for cutting quality and plunge control, as well as for the optimization of the surface roughness at the cutting edges, can be deduced from the spatial distribution of the temperature field and the measured temperature gradients. The main parameters influencing the temperature characteristic within the cutting zone are the laser beam intensity and the pulse duration in pulsed operation mode. For continuous operation mode, the temperature distribution is mainly determined by the laser output power in relation to the cutting velocity. With higher cutting velocities, temperatures at the cutting front increase, reaching their maximum at the optimum cutting velocity. Here, absorption of the incident laser radiation is drastically increased due to the angle between the normal of the cutting front and the laser beam axis. Besides process optimization and control, further work is focused on the characterization of particulate and gaseous laser-generated air contaminants and adequate safety precautions such as exhaust and filter systems.

  19. Study on the relevance of some of the description methods for plateau-honed surfaces

    NASA Astrophysics Data System (ADS)

    Yousfi, M.; Mezghani, S.; Demirci, I.; El Mansori, M.

    2014-01-01

    Much work has been undertaken in recent years on the determination of a complete parametric description of plateau-honed surfaces, with the intention of making a link between the process conditions, the surface topography and the required functional performance. Different advanced techniques (plateau/valley decomposition using the normalized Abbott-Firestone curve or morphological operators, multiscale decomposition using the continuous wavelet transform, etc) have been proposed and applied in different studies. This paper re-examines the current state of developments and discusses the relevance of the different proposed parameters and characterization methods for plateau-honed surfaces by considering the manufacturing-characterization-function control loop. The relevance of appropriate characterization is demonstrated through two experimental studies. They consider the effect of the main plateau-honing process variables (the abrasive grit size and abrasive indentation velocity in finish-honing, and the plateau-honing stage duration and pressure) on cylinder liner surface textures and on the hydrodynamic friction of the ring-pack system.
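
The normalized Abbott-Firestone (bearing area) curve mentioned above is simply the material ratio as a function of height. A minimal sketch on an invented plateau-honed-like profile:

```python
# Minimal Abbott-Firestone sketch: the material ratio at a given height is the
# fraction of profile points at or above that height.
def abbott_firestone(profile):
    """Return (material_ratio_percent, height) pairs with heights descending."""
    heights = sorted(profile, reverse=True)
    n = len(heights)
    return [(100.0 * (i + 1) / n, h) for i, h in enumerate(heights)]

def material_ratio(profile, height):
    """Material ratio [%] of the profile at the given height."""
    return 100.0 * sum(1 for h in profile if h >= height) / len(profile)

# Invented plateau-honed-like profile: a flat plateau with a few deep valleys.
profile = [0.1, 0.0, 0.05, -2.0, 0.08, 0.02, -1.5, 0.06, 0.04, 0.0]
curve = abbott_firestone(profile)
```

A plateau-honed surface shows up as a long, nearly flat initial segment of this curve followed by a steep tail from the honing valleys.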

  20. Review on innovative techniques in oil sludge bioremediation

    NASA Astrophysics Data System (ADS)

    Mahdi, Abdullah M. El; Aziz, Hamidi Abdul; Eqab, Eqab Sanoosi

    2017-10-01

    Petroleum hydrocarbon waste is produced in significant amounts in refineries worldwide. In Libya, approximately 10,000 tons of oil sludge (hydrocarbon waste mixtures) is generated in oil refineries annually. Insufficient treatment of these wastes can threaten human health and safety as well as the environment. One of the major challenges faced by petroleum refineries is the safe disposal of oil sludge generated during the cleaning and refining stages of crude storage facilities. This paper reviews the characteristics of hydrocarbon sludge and conventional methods for the remediation of oil hydrocarbons from sludge. The study draws intensively on earlier literature to describe recently selected innovative technologies for the bioremediation of oily hydrocarbon sludge. Conventional characterization parameters, or measurable factors, can be grouped into chemical, physical, and biological parameters: (1) chemical parameters are necessary where the topsoil environment is to be utilized, as they relate to the presence of nutrients and toxic compounds; (2) physical parameters provide general data on sludge processing and handleability; (3) biological parameters provide data on microbial activity and the presence of organic matter, which are used to evaluate the safety of the facilities. The objective of this research is to assess the feasibility of bioremediating oil sludge from the Marsa El Hariga Terminal and Refinery (Tobruk).

  1. Theoretical and experimental analysis of injection seeding a Q-switched alexandrite laser

    NASA Technical Reports Server (NTRS)

    Prasad, C. R.; Lee, H. S.; Glesne, T. R.; Monosmith, B.; Schwemmer, G. K.

    1991-01-01

    Injection seeding is a method for achieving linewidths of less than 500 MHz in the output of broadband, tunable, solid state lasers. Dye lasers, CW and pulsed diode lasers, and other solid state lasers have been used as injection seeders. By optimizing the fundamental laser parameters of pump energy, Q-switched pulse build-up time, injection seed power and mode matching, one can achieve significant improvements in the spectral purity of the Q-switched output. These parameters are incorporated into a simple model for analyzing spectral purity and pulse build-up processes in a Q-switched, injection-seeded laser. Experiments to optimize the relevant parameters of an alexandrite laser show good agreement.

  2. A high-throughput 2D-analytical technique to obtain single protein parameters from complex cell lysates for in silico process development of ion exchange chromatography.

    PubMed

    Kröner, Frieder; Elsäßer, Dennis; Hubbuch, Jürgen

    2013-11-29

    The accelerating growth of the market for biopharmaceutical proteins, the market entry of biosimilars and the growing interest in new, more complex molecules constantly pose new challenges for bioseparation process development. In the presented work we demonstrate the application of a multidimensional analytical separation approach to obtain the relevant physicochemical parameters of single proteins in a complex mixture for in silico chromatographic process development. A complete cell lysate containing a low-titre target protein was first fractionated by multiple linear salt gradient anion exchange chromatography (AEC) runs of varying gradient length. The collected fractions were subsequently analysed by high-throughput capillary gel electrophoresis (HT-CGE) after being desalted and concentrated. From the 2D-separation data, the retention volumes and concentrations of the single proteins were determined. The retention volumes of the single proteins were used to calculate the related steric mass action (SMA) model parameters. In a final evaluation experiment, the obtained parameters were successfully applied to predict the retention behaviour of the single proteins in salt gradient AEC. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Automated Array Assembly Task In-depth Study of Silicon Wafer Surface Texturizing

    NASA Technical Reports Server (NTRS)

    Jones, G. T.; Chitre, S.; Rhee, S. S.; Allison, K. L.

    1979-01-01

    A low cost wafer surface texturizing process was studied. An investigation of low cost cleaning operations to clean residual wax and organics from the surface of silicon wafers was made. The feasibility of replacing dry nitrogen with clean dry air for drying silicon wafers was examined. The two stage texturizing process was studied for the purpose of characterizing relevant parameters in large volume applications. The effect of gettering solar cells on photovoltaic energy conversion efficiency is described.

  4. Meteorite-asteroid spectral comparison - The effects of comminution, melting, and recrystallization

    NASA Technical Reports Server (NTRS)

    Clark, Beth E.; Fanale, Fraser P.; Salisbury, John W.

    1992-01-01

    The present laboratory simulation of possible spectral-alteration effects on the optical surfaces of ordinary chondrite parent bodies duplicated regolith processes through comminution of the samples to finer grain sizes. After characterization of their reflectance spectra, the comminuted samples were melted, crystallized, recomminuted, and characterized again. While individual spectral characteristics could be significantly changed by these processes, no combination of the alteration procedures appeared capable of affecting all relevant parameters in a way that improved the match between chondritic meteorites and S-class asteroids.

  5. Coffee husk composting: an investigation of the process using molecular and non-molecular tools.

    PubMed

    Shemekite, Fekadu; Gómez-Brandón, María; Franke-Whittle, Ingrid H; Praehauser, Barbara; Insam, Heribert; Assefa, Fassil

    2014-03-01

    Various parameters were measured during a 90-day composting process of coffee husk with cow dung (Pile 1), with fruit/vegetable wastes (Pile 2) and coffee husk alone (Pile 3). Samples were collected on days 0, 32 and 90 for chemical and microbiological analyses. C/N ratios of Piles 1 and 2 decreased significantly over the 90 days. The highest bacterial counts at the start of the process and highest actinobacterial counts at the end of the process (Piles 1 and 2) indicated microbial succession with concomitant production of compost relevant enzymes. Denaturing gradient gel electrophoresis of rDNA and COMPOCHIP microarray analysis indicated distinctive community shifts during the composting process, with day 0 samples clustering separately from the 32 and 90-day samples. This study, using a multi-parameter approach, has revealed differences in quality and species diversity of the three composts. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.

  6. Optimization and Surface Modification of Al-6351 Alloy Using SiC-Cu Green Compact Electrode by Electro Discharge Coating Process

    NASA Astrophysics Data System (ADS)

    Chakraborty, Sujoy; Kar, Siddhartha; Dey, Vidyut; Ghosh, Subrata Kumar

    2017-06-01

    This paper introduces the surface modification of Al-6351 alloy by a green compact SiC-Cu electrode using the electro-discharge coating (EDC) process. A Taguchi L-16 orthogonal array is employed to investigate the process by varying tool parameters, such as composition and compaction load, and electro-discharge machining (EDM) parameters, such as pulse-on time and peak current. Material deposition rate (MDR), tool wear rate (TWR) and surface roughness (SR) are measured on the coated specimens. An optimum condition is achieved by formulating an overall evaluation criterion (OEC), which combines the multi-objective task into a single index. The signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the effect of the relevant process parameters. A confirmation test is conducted at the optimal process parameters, and experimental results are provided to illustrate the effectiveness of this approach. The modified surface is characterized by optical microscopy and X-ray diffraction (XRD) analysis. XRD analysis of the deposited layer confirmed the transfer of tool material to the work surface and the formation of inter-metallic phases. The micro-hardness of the resulting composite layer, which is 1.5-3 times that of the work material, is also measured, and a maximum layer thickness (LT) of 83.644 μm has been achieved.
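
The Taguchi S/N ratios used to score each run take the standard larger-the-better and smaller-the-better forms. Assigning larger-the-better to MDR and smaller-the-better to TWR and SR follows the usual convention and is an assumption here, as are the replicate values:

```python
import math

def sn_larger_better(ys):
    """Taguchi S/N ratio when higher responses are better (e.g. MDR)."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_better(ys):
    """Taguchi S/N ratio when lower responses are better (e.g. TWR, SR)."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))

sn_mdr = sn_larger_better([2.1, 2.3, 2.2])    # hypothetical MDR replicates
sn_twr = sn_smaller_better([0.4, 0.5, 0.45])  # hypothetical TWR replicates
```

In either form, a higher S/N ratio marks a better (and more robust) run, which is what the per-level averages in a Taguchi analysis compare.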

  7. Text Mining Effectively Scores and Ranks the Literature for Improving Chemical-Gene-Disease Curation at the Comparative Toxicogenomics Database

    PubMed Central

    Johnson, Robin J.; Lay, Jean M.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; Murphy, Cynthia Grondin; Mattingly, Carolyn J.

    2013-01-01

    The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset corpus of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency. PMID:23613709
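
A relevancy score of this general kind can be sketched as weighted keyword scoring followed by ranking. The terms, weights, and corpus below are invented and far simpler than CTD's actual text-mining pipeline:

```python
# Toy document relevancy score: weighted keyword hits, then rank descending.
# Terms, weights, and abstracts are invented for illustration only.
WEIGHTS = {"cadmium": 3.0, "toxicity": 2.0, "gene": 2.0, "exposure": 1.0}

def drs(text):
    """Sum of keyword weights times occurrence counts in the text."""
    words = text.lower().split()
    return sum(w * words.count(term) for term, w in WEIGHTS.items())

corpus = {
    "A": "cadmium exposure alters gene expression",
    "B": "a survey of lakes",
    "C": "gene toxicity after cadmium cadmium exposure",
}
ranked = sorted(corpus, key=lambda k: drs(corpus[k]), reverse=True)
```

Curators then work down the ranked list, which is how a good score raises interaction yield per article reviewed.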

  8. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. 
Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
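    The Latin hypercube scheme used above to build parameter sets can be sketched in a few lines. This is a minimal pure-Python version; the parameter names and ranges are invented placeholders, not RHESSys values.

```python
# Minimal Latin hypercube sampler of the kind used to build parameter sets
# for calibration and uncertainty analysis. Parameter names and bounds are
# illustrative only.
import random

def latin_hypercube(bounds, n, seed=0):
    """Draw n samples; each parameter's range is split into n equal strata,
    each stratum is used exactly once, and columns are shuffled."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds.values():
        strata = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        columns.append(strata)
    return [dict(zip(bounds, row)) for row in zip(*columns)]

bounds = {"psn_optimum_T": (15.0, 35.0), "root_alloc_frac": (0.1, 0.6)}
samples = latin_hypercube(bounds, n=10)
# Every sample stays in bounds, and each stratum of each parameter is
# visited exactly once, which is what distinguishes LHS from plain
# Monte Carlo sampling.
```

    Stratifying each dimension guarantees coverage of the full range of every parameter even with modest sample counts, which matters when each parameter set requires a full model run.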

  9. Registering parameters and granules of wave observations: IMAGE RPI success story

    NASA Astrophysics Data System (ADS)

    Galkin, I. A.; Charisi, A.; Fung, S. F.; Benson, R. F.; Reinisch, B. W.

    2015-12-01

    Modern metadata systems strive to help scientists locate data relevant to their research and then retrieve them quickly. Success of this mission depends on the organization and completeness of metadata. Each relevant data resource has to be registered; each content has to be described; each data file has to be accessible. Ultimately, data discoverability is about the practical ability to describe data content and location. Correspondingly, data registration has a "Parameter" level, at which content is specified by listing available observed properties (parameters), and a "Granule" level, at which download links are given to data records (granules). Until recently, both parameter- and granule-level data registrations were accomplished at NASA Virtual System Observatory easily by listing provided parameters and building Granule documents with URLs to the datafile locations, usually those at NASA CDAWeb data warehouse. With the introduction of the Virtual Wave Observatory (VWO), however, the parameter/granule concept faced a scalability challenge. The wave phenomenon content is rich with descriptors of the wave generation, propagation, interaction with propagation media, and observation processes. Additionally, the wave phenomenon content varies from record to record, reflecting changes in the constituent processes, making it necessary to generate granule documents at sub-minute resolution. We will present the first success story of registering 234,178 records of IMAGE Radio Plasma Imager (RPI) plasmagram data and Level 2 derived data products in ESPAS (near-Earth Space Data Infrastructure for e-Science), using the VWO-inspired wave ontology. The granules are arranged in overlapping display and numerical data collections. 
Display data include (a) auto-prospected plasmagrams of potential interest, (b) interesting plasmagrams annotated by human analysts or software, and (c) spectacular plasmagrams annotated by analysts as publication-quality examples of the RPI science. Numerical data products include plasmagram-derived records containing signatures of local and remote signal propagation, as well as field-aligned profiles of electron density in the plasmasphere. Registered granules of RPI observations are available in ESPAS for their content-targeted search and retrieval.

  10. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.
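    The Gibbs-sampling-then-MAP step can be illustrated on a toy posterior. The bivariate normal below stands in for the geoacoustic posterior over two channel parameters; the correlation value and the analytic density are assumptions made purely for the sketch, not the paper's model.

```python
# Illustrative Gibbs sampler for a bivariate normal "posterior" over two
# channel parameters: draw each parameter from its conditional given the
# other, then take the highest-density sample as a crude MAP estimate.
import math, random

def gibbs_bivariate_normal(rho, n_samples, seed=1):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)    # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)    # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

def map_estimate(samples, rho):
    def logp(x, y):  # log density up to an additive constant
        return -(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho * rho))
    return max(samples, key=lambda s: logp(*s))

samples = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
x_map, y_map = map_estimate(samples, rho=0.8)  # should land near the mode (0, 0)
```

    In the actual inversion the conditionals are not available in closed form and each draw requires a forward acoustic model run, but the alternating-conditional structure is the same.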

  11. The solution of private problems for optimization heat exchangers parameters

    NASA Astrophysics Data System (ADS)

    Melekhin, A.

    2017-11-01

    The relevance of the topic stems from the need to conserve resources in building heating systems. To address this problem, we have developed an integrated research method for optimizing heat exchanger parameters. The method solves a multicriteria optimization problem using nonlinear optimization software, taking as input an array of temperatures obtained by thermography. The author has developed a mathematical model of the heat exchange process on the heat-transfer surfaces of the apparatus, solved the multicriteria optimization problem, and checked the model's adequacy against an experimental stand with visualization of thermal fields. The work identifies an optimal range of controlled parameters influencing the heat exchange process that minimizes metal consumption while maximizing the heat output of the finned heat exchanger, establishes the regularities of the heat exchange process through generalized dependencies for the temperature distribution on the heat-release surface of the heat exchangers, and demonstrates convergence between results calculated from theoretical dependencies and those of the mathematical model.

  12. The muon g - 2 for low-mass pseudoscalar Higgs in the general 2HDM

    NASA Astrophysics Data System (ADS)

    Cherchiglia, Adriano; Stöckinger, Dominik; Stöckinger-Kim, Hyejung

    2018-05-01

    The two-Higgs doublet model is a simple and attractive extension of the Standard Model. It provides a possibility to explain the large deviation between theory and experiment in the muon g - 2 in an interesting parameter region: light pseudoscalar Higgs A, large Yukawa coupling to τ-leptons, and general, non-type II Yukawa couplings are preferred. This parameter region is explored, experimental limits on the relevant Yukawa couplings are obtained, and the maximum possible contributions to the muon g - 2 are discussed. Presented at Workshop on Flavour Changing and Conserving Processes (FCCP2017), September 2017

  13. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.
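    A compound Poisson precipitation model of the kind discussed above can be sketched directly: the daily event count is Poisson, and each event contributes a random depth. The rate, mean depth, and exponential depth distribution below are invented for illustration.

```python
# Toy compound Poisson rain model: the event count per day is Poisson(lam)
# and each event adds an exponentially distributed depth, so daily totals
# have an atom at zero plus a skewed wet-day distribution. Rates invented.
import random

def daily_rainfall(lam, mean_depth, n_days, seed=42):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_days):
        # Draw a Poisson count by accumulating exponential inter-arrival times.
        t, events = rng.expovariate(lam), 0
        while t < 1.0:
            events += 1
            t += rng.expovariate(lam)
        totals.append(sum(rng.expovariate(1.0 / mean_depth)
                          for _ in range(events)))
    return totals

rain = daily_rainfall(lam=0.3, mean_depth=5.0, n_days=1000)
dry_fraction = sum(1 for r in rain if r == 0.0) / len(rain)
# The expected dry fraction is exp(-lam), about 0.74 for lam = 0.3, and the
# mean daily total is lam * mean_depth = 1.5.
```

    The heavy upper tail of such a model, and its sensitivity to how the stochastic terms are discretized, is exactly where the improper numerical treatment mentioned above distorts the frequency of extremes.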

  14. Using a familiar risk comparison within a risk ladder to improve risk understanding by low numerates: a study of visual attention.

    PubMed

    Keller, Carmen

    2011-07-01

    Previous experimental research provides evidence that a familiar risk comparison within a risk ladder is understood by low- and high-numerate individuals. It especially helps low numerates to better evaluate risk. In the present study, an eye tracker was used to capture individuals' visual attention to a familiar risk comparison, such as the risk associated with smoking. Two parameters of information processing, efficiency and level, were derived from visual attention. A random sample of participants from the general population (N = 68) interpreted a given risk level with the help of the risk ladder. Numeracy was negatively correlated with overall visual attention on the risk ladder (r(s) = -0.28, p = 0.01), indicating that the lower the numeracy, the more the time spent looking at the whole risk ladder. Numeracy was positively correlated with the efficiency of processing relevant frequency (r(s) = 0.34, p < 0.001) and relevant textual information (r(s) = 0.34, p < 0.001), but not with the efficiency of processing relevant comparative information and numerical information. There was a significant negative correlation between numeracy and the level of processing of relevant comparative risk information (r(s) = -0.21, p < 0.01), indicating that low numerates processed the comparative risk information more deeply than the high numerates. There was no correlation between numeracy and perceived risk. These results add to previous experimental research, indicating that the smoking risk comparison was crucial for low numerates to evaluate and understand risk. Furthermore, the eye-tracker method is promising for studying information processing and improving risk communication formats. © 2011 Society for Risk Analysis.
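    The r(s) values reported above are Spearman rank correlations: Pearson's correlation applied to ranks. A small pure-Python version (midranks for ties, no further tie correction) is sketched below; the numeracy and dwell-time numbers are made up to demonstrate the computation, not the study's data.

```python
# Spearman's r_s is Pearson's correlation computed on ranks; illustrative
# numeracy/attention data, not the eye-tracking study's measurements.
def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):                  # assign midranks to tied values
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

numeracy = [2, 5, 3, 8, 7, 1, 6, 4]
dwell_time = [40, 18, 35, 9, 12, 44, 15, 25]     # seconds on the ladder
print(round(spearman(numeracy, dwell_time), 2))  # perfectly inverse: -1.0
```

    Rank correlation is the natural choice here because dwell times are strongly skewed, and only the ordering, not the raw magnitudes, is assumed to be meaningful.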

  15. Effect of pilot-scale aseptic processing on tomato soup quality parameters.

    PubMed

    Colle, Ines J P; Andrys, Anna; Grundelius, Andrea; Lemmens, Lien; Löfgren, Anders; Buggenhout, Sandy Van; Loey, Ann; Hendrickx, Marc Van

    2011-01-01

    Tomatoes are often processed into shelf-stable products. However, the different processing steps might have an impact on the product quality. In this study, a model tomato soup was prepared and the impact of pilot-scale aseptic processing, including heat treatment and high-pressure homogenization, on some selected quality parameters was evaluated. The vitamin C content, the lycopene isomer content, and the lycopene bioaccessibility were considered as health-promoting attributes. As a structural characteristic, the viscosity of the tomato soup was investigated. A tomato soup without oil as well as a tomato soup containing 5% olive oil were evaluated. Thermal processing had a negative effect on the vitamin C content, while lycopene degradation was limited. For both compounds, high-pressure homogenization caused additional losses. High-pressure homogenization also resulted in a higher viscosity that was accompanied by a decrease in lycopene bioaccessibility. The presence of lipids clearly enhanced the lycopene isomerization susceptibility and improved the bioaccessibility. The results obtained in this study are of relevance for product formulation and process design of tomato-based food products. © 2011 Institute of Food Technologists®

  16. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to bW^+ \bar{b}W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraints strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.

  17. Process observation in fiber laser-based selective laser melting

    NASA Astrophysics Data System (ADS)

    Thombansen, Ulrich; Gatej, Alexander; Pereira, Milton

    2015-01-01

    The process observation in selective laser melting (SLM) focuses on observing the interaction point where the powder is processed. To provide process relevant information, signals have to be acquired that are resolved in both time and space. Especially in high-power SLM, where more than 1 kW of laser power is used, processing speeds of several meters per second are required for high-quality processing results. Therefore, an implementation of a suitable process observation system has to acquire a large amount of spatially resolved data at low sampling speeds or it has to restrict the acquisition to a predefined area at a high sampling speed. In any case, it is vitally important to synchronously record the laser beam position and the acquired signal. This is a prerequisite that allows the recorded data to become information. Today, most SLM systems employ f-theta lenses to focus the processing laser beam onto the powder bed. This report describes the drawbacks that result for process observation and suggests a variable retro-focus system which solves these issues. The beam quality of fiber lasers delivers the processing laser beam to the powder bed at relevant focus diameters, which is a key prerequisite for this solution to be viable. The optical train we present here couples the processing laser beam and the process observation coaxially, ensuring consistent alignment of interaction zone and observed area. With respect to signal processing, we have developed a solution that synchronously acquires signals from a pyrometer and the position of the laser beam by sampling the data with a field programmable gate array. The relevance of the acquired signals has been validated by the scanning of a sample filament. Experiments with grooved samples show a correlation between different powder thicknesses and the acquired signals at relevant processing parameters. This basic work takes a first step toward self-optimization of the manufacturing process in SLM.
It enables the addition of cognitive functions to the manufacturing system to the extent that the system could track its own process. The results are based on analyzing and redesigning the optical train, in combination with a real-time signal acquisition system which provides a solution to certain technological barriers.

  18. Documentation of a ground hydrology parameterization for use in the GISS atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Lin, J. D.; Aleano, J.; Bock, P.

    1978-01-01

    The moisture transport processes at the earth's surface relevant to the general circulation model (GCM) are presented. The GHM parameterizations considered are: (1) ground wetness and soil parameters; (2) precipitation; (3) evapotranspiration; (4) surface storage of snow and ice; and (5) runoff. The computational aspects of the GHM, including computer programs and flow charts, are described.

  19. QbD for pediatric oral lyophilisates development: risk assessment followed by screening and optimization.

    PubMed

    Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan

    2017-12-01

    This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, by applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process-, formulation- and methodology-related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and a high impact on product performance. To investigate the effect of qualitative and quantitative formulation variables, D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties, selected as CQAs. The optimum formulation selected through the Design Space presented an ultra-fast disintegration time (5 seconds) and a good dissolution rate (above 90%) combined with high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science-based development of a product with respect to the defined quality target profile by providing better insight into the relevant parameters throughout the development process. The utility of risk management tools in pharmaceutical development was demonstrated.
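    The FMEA screening step described above is commonly implemented by ranking failure modes on a risk priority number. The failure modes and 1-10 scores below are invented to illustrate the mechanics, not taken from the study.

```python
# Failure Mode Effect Analysis ranks candidate failure modes by a risk
# priority number, RPN = severity * occurrence * detectability; the modes
# and scores below are hypothetical placeholders.
def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

failure_modes = {
    "API suspension sedimentation": (8, 6, 4),   # RPN = 192
    "incomplete lyophilization":    (9, 3, 3),   # RPN = 81
    "polymer batch variability":    (5, 4, 7),   # RPN = 140
}
ranked = sorted(failure_modes, key=lambda m: rpn(*failure_modes[m]),
                reverse=True)
print(ranked[0])  # 'API suspension sedimentation' (highest RPN)
```

    Modes with RPNs above a chosen threshold are then carried into the DoE as factors or controlled through mitigation, which is the FMEA-to-DoE hand-off the abstract describes.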

  20. All half-lives are wrong, but some half-lives are useful.

    PubMed

    Wright, J G; Boddy, A V

    2001-01-01

    The half-life of a drug, which expresses a change in concentration in units of time, is perhaps the most easily understood pharmacokinetic parameter and provides a succinct description of many concentration-time profiles. The calculation of a half-life implies a linear, first-order, time-invariant process. No drug perfectly obeys such assumptions, although in practice this is often a valid approximation and provides invaluable quantitative information. Nevertheless, the physiological processes underlying half-life should not be forgotten. The concept of clearance facilitates the interpretation of factors affecting drug elimination, such as enzyme inhibition or renal impairment. Relating clearance to the observed concentration-time profile is not as naturally intuitive as is the case with half-life. As such, these 2 approaches to parameterising a linear pharmacokinetic model should be viewed as complementary rather than alternatives. The interpretation of pharmacokinetic parameters when there are multiple disposition phases is more challenging. Indeed, in any pharmacokinetic model, the half-lives are only one component of the parameters required to specify the concentration-time profile. Furthermore, pharmacokinetic parameters are of little use without a dose history. Other factors influencing the relevance of each disposition phase to clinical end-points must also be considered. In summarising the pharmacokinetics of a drug, statistical aspects of the estimation of a half-life are often overlooked. Half-lives are rarely reported with confidence intervals or measures of variability in the population, and some approaches to this problem are suggested. Half-life is an important summary statistic in pharmacokinetics, but care must be taken to employ it appropriately in the context of dose history and clinically relevant pharmacodynamic end-points.
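    For the one-compartment, first-order case the abstract alludes to, half-life, elimination rate constant, clearance, and volume of distribution are tied together by t1/2 = ln(2)/k and CL = k * Vd. A short numerical sketch (toy numbers, not real drug data):

```python
# One-compartment, first-order pharmacokinetics: t1/2 = ln(2)/k, and after
# an IV bolus C(t) = (dose/Vd) * exp(-k t). Values are illustrative only.
import math

def half_life(k):
    """t1/2 (h) from the first-order elimination rate constant k (1/h)."""
    return math.log(2) / k

def concentration(dose, vd, k, t):
    """Concentration at time t after an IV bolus."""
    return dose / vd * math.exp(-k * t)

k = 0.1  # 1/h, so clearance CL = k * Vd = 0.1 * 50 = 5 L/h below
print(round(half_life(k), 2))                 # 6.93 h
c0 = concentration(dose=100.0, vd=50.0, k=k, t=0.0)
c1 = concentration(dose=100.0, vd=50.0, k=k, t=half_life(k))
print(round(c1 / c0, 2))                      # 0.5: one half-life elapsed
```

    The clearance parameterisation and the half-life parameterisation encode the same exponential decline; which one is more interpretable depends on whether the question is physiological (CL) or descriptive (t1/2), which is the complementarity the abstract argues for.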

  1. Assessment of groundwater vulnerability to pollution: a combination of GIS, fuzzy logic and decision making techniques

    NASA Astrophysics Data System (ADS)

    Gemitzi, Alexandra; Petalas, Christos; Tsihrintzis, Vassilios A.; Pisinaras, Vassilios

    2006-03-01

    The assessment of groundwater vulnerability to pollution aims at highlighting areas at high risk of being polluted. This study presents a methodology to estimate the risk of an aquifer being polluted from concentrated and/or dispersed sources, applying an overlay-and-index method involving several parameters. The parameters are categorized into three factor groups: factor group 1 includes parameters relevant to the internal aquifer system’s properties, thus determining the intrinsic aquifer vulnerability to pollution; factor group 2 comprises parameters relevant to the external stresses to the system, such as human activities and rainfall effects; factor group 3 incorporates specific geological settings, such as the presence of geothermal fields or salt intrusion zones, into the computation process. Geographical information systems have been used for data acquisition and processing, coupled with a multicriteria evaluation technique enhanced with fuzzy factor standardization. Moreover, besides assigning weights to factors, a second set of weights, i.e., order weights, has been applied to factors on a pixel by pixel basis, thus allowing control of the level of risk in the vulnerability determination and the enhancement of local site characteristics. Individual analysis of each factor group resulted in three intermediate groundwater vulnerability to pollution maps, which were combined in order to produce the final composite groundwater vulnerability map for the study area. The method has been applied in the region of Eastern Macedonia and Thrace (Northern Greece), an area of approximately 14,000 km2. The methodology has been tested and calibrated against the measured nitrate concentration in wells, in the northwest part of the study area, providing results related to the aggregation and weighting procedure.
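    The combination of factor weights with a second set of order weights is the ordered weighted averaging (OWA) idea, and can be sketched per pixel as below. The factor names, scores, and weight values are invented for illustration, not the study's calibrated values.

```python
# Sketch of overlay-and-index aggregation with ordered weighted averaging
# (OWA): per pixel, factor scores are multiplied by factor weights, sorted,
# and then combined with order weights, which tunes the level of risk in
# the aggregation. All numbers are illustrative placeholders.
def owa_score(scores, factor_weights, order_weights):
    weighted = sorted(s * w for s, w in zip(scores, factor_weights))
    return sum(ow * v for ow, v in zip(order_weights, weighted))

# Three standardized factor scores (0-1) for one pixel:
pixel = [0.9, 0.4, 0.6]          # e.g. permeability, land use, rainfall
f_w = [0.5, 0.3, 0.2]            # factor weights
cautious = [0.6, 0.3, 0.1]       # order weights emphasizing low scores
neutral  = [1/3, 1/3, 1/3]       # plain weighted linear combination
print(round(owa_score(pixel, f_w, neutral), 3))
print(round(owa_score(pixel, f_w, cautious), 3))  # lower: AND-like result
```

    Shifting mass in the order weights toward the smallest (or largest) ranked values moves the aggregation between AND-like and OR-like behavior, which is the "control of the level of risk" the abstract refers to.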

  2. Roughness characterization of the galling of metals

    NASA Astrophysics Data System (ADS)

    Hubert, C.; Marteau, J.; Deltombe, R.; Chen, Y. M.; Bigerelle, M.

    2014-09-01

    Several kinds of tests exist to characterize the galling of metals, such as that specified in ASTM Standard G98. While the testing procedure is accurate and robust, the analysis of the specimens' surfaces (area = 1.2 cm²) for the determination of the critical pressure of galling remains subject to operator judgment. Based on the surface topography analyses, we propose a methodology to express the probability of galling according to the macroscopic pressure load. After performing galling tests on 304L stainless steel, a two-step segmentation of the Sq parameter (root mean square of surface amplitude) computed from local roughness maps (100 μm × 100 μm) enables us to distinguish two tribological processes. The first step represents the abrasive wear (erosion) and the second one the adhesive wear (galling). The total areas of both regions are highly relevant to quantify galling and erosion processes. Then, a one-parameter phenomenological model is proposed to objectively determine the evolution of the non-galled relative area Ae versus the pressure load P, with high accuracy (Ae = 100/(1 + aP²), with a = (0.54 ± 0.07) × 10⁻³ MPa⁻² and R² = 0.98). From this model, the critical pressure of galling is found to be equal to 43 MPa. The S5V roughness parameter (the five deepest valleys in the galled region's surface) is the most relevant roughness parameter for the quantification of damage in the 'galling region'. The significant valley depths increase from 10 μm to 250 μm when the pressure increases from 11 to 350 MPa, according to a power law (S5V = 4.2 P^0.75, with R² = 0.93).
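    The one-parameter model Ae = 100/(1 + aP²) can be evaluated directly. Reading the critical pressure as the load at which the non-galled area drops to 50% (so Pc = 1/sqrt(a)) reproduces the 43 MPa reported; that 50% convention is an assumption of this sketch, since the abstract does not state the threshold explicitly.

```python
# Phenomenological galling model: non-galled relative area Ae (in %) versus
# pressure P (MPa), Ae = 100 / (1 + a P^2). Assuming a 50 % threshold, the
# critical pressure is P_c = 1/sqrt(a).
import math

def non_galled_area(p_mpa, a):
    return 100.0 / (1.0 + a * p_mpa ** 2)

def critical_pressure(a):
    return 1.0 / math.sqrt(a)   # pressure where Ae falls to 50 %

a = 0.54e-3  # MPa^-2, the fitted value quoted in the abstract
print(round(critical_pressure(a)))        # 43 MPa, matching the reported value
print(round(non_galled_area(11.0, a), 1)) # ≈ 93.9 %: mostly intact at low load
```

    Because the model has a single parameter, the whole galling probability curve, and hence the critical pressure, follows from one fitted constant, which is what makes the criterion objective rather than operator-judged.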

  3. Studies on Thorium Adsorption Characteristics upon Activated Titanium Hydroxide Prepared from Rosetta Ilmenite Concentrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gado, M, E-mail: parq28@yahoo.com; Zaki, S

    2016-01-01

    The titanium hydroxide prepared from Rosetta ilmenite concentrate has been applied for Th (IV) adsorption from its acid aqueous solutions. The prepared hydroxide is first characterized by both Fourier transform infrared (FT-IR) spectrum and thermogravimetric analysis. The relevant factors affecting the adsorption process have been studied. The obtained equilibrium data fit the Langmuir isotherm well, rather than the Freundlich isotherm, while the adsorption kinetic data follow the pseudo-second order model. The different thermodynamic parameters have also been calculated and indicate that the adsorption process is spontaneous.
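    Langmuir fitting of equilibrium data is routinely done through the linearized form C/q = C/q_max + 1/(K q_max). A small least-squares sketch on synthetic data (not the paper's measurements; q_max and K below are invented) shows the parameter recovery:

```python
# The Langmuir isotherm q = q_max * K * C / (1 + K * C) fitted through its
# linearized form C/q = C/q_max + 1/(K * q_max). Synthetic data only.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def langmuir_params(c, q):
    slope, intercept = linear_fit(c, [ci / qi for ci, qi in zip(c, q)])
    q_max = 1.0 / slope
    k = slope / intercept
    return q_max, k

# Synthetic equilibrium data generated from q_max = 120 mg/g, K = 0.05 L/mg:
c = [10.0, 50.0, 100.0, 200.0, 400.0]
q = [120 * 0.05 * ci / (1 + 0.05 * ci) for ci in c]
q_max, k = langmuir_params(c, q)
print(round(q_max, 1), round(k, 3))  # recovers 120.0 and 0.05
```

    Comparing the R² of this linearized Langmuir fit against the corresponding log-log Freundlich fit is the usual basis for statements like the one in the abstract.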

  4. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
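    One of the simplest photosynthesis-temperature forms in which Topt and Pmax appear directly as parameters is a Gaussian response. The sketch below uses that form with invented parameter values; it is an illustration of "biologically-meaningful parameters", not one of the twelve models the study evaluated.

```python
# Gaussian photosynthesis-temperature response: P_max and T_opt appear
# explicitly as parameters, so fitted values are directly interpretable.
# Parameter values are illustrative, not fitted seagrass values.
import math

def photosynthesis(t, p_max, t_opt, width):
    """Photosynthetic rate at temperature t (Gaussian response)."""
    return p_max * math.exp(-(((t - t_opt) / width) ** 2))

p_max, t_opt, width = 2.5, 31.0, 7.0
rates = {t: photosynthesis(t, p_max, t_opt, width) for t in (25, 31, 37)}
# The optimum sits at t_opt by construction, and the curve is symmetric
# about it:
print(round(rates[31], 2))                          # 2.5 (the maximum)
print(round(rates[25], 3) == round(rates[37], 3))   # True
```

    In a form like this, the fitted Topt can be compared directly to observed water temperatures to assess thermal vulnerability, which is the kind of transferability the abstract argues model parameters should have.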

  5. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    PubMed

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  6. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123

  7. A study on the role of powertrain system dynamics on vehicle driveability

    NASA Astrophysics Data System (ADS)

    Castellazzi, Luca; Tonoli, Andrea; Amati, Nicola; Galliera, Enrico

    2017-07-01

    Vehicle driveability describes the complex interactions between the driver and the vehicle, mainly related to longitudinal vibrations. Today, a relevant part of the driveability optimisation process is realised by means of track tests, which require a considerable effort due to the number of parameters (such as stiffness and damping components) affecting this behaviour. The drawback of this approach is that it is carried out at a stage when a design iteration becomes very expensive in terms of time and cost. The objective of this work is to propose a light and accurate tool to represent the relevant quantities involved in the driveability analysis, and to understand which vehicle parameters most influence the torsional vibrations transmitted to the driver. Particular attention is devoted to the role of the tyre, the engine mount, the dual mass flywheel and their possible interactions. The presented nonlinear dynamic model has been validated in the time and frequency domains and, through linearisation of its nonlinear components, makes it possible to exploit modal and energy analysis. Objective indexes regarding driving comfort are additionally considered in order to evaluate possible driveability improvements related to the sensitivity of powertrain parameters.

  8. Patterns of Carbon Nanotubes by Flow-Directed Deposition on Substrates with Architectured Topographies.

    PubMed

    Jawed, M. K.; Hadjiconstantinou, N. G.; Parks, D. M.; Reis, P. M.

    2018-03-14

    We develop and perform continuum mechanics simulations of carbon nanotube (CNT) deployment directed by a combination of surface topography and rarefied gas flow. We employ the discrete elastic rods method to model the deposited CNT as a slender elastic rod that evolves in time under two external forces, namely van der Waals (vdW) attraction and aerodynamic drag. Our results confirm that this self-assembly process is analogous to a previously studied macroscopic system, the "elastic sewing machine", in which an elastic rod deployed onto a moving substrate forms nonlinear patterns. In the case of CNTs, the complex patterns observed on the substrate, such as coils and serpentines, result from an intricate interplay between van der Waals attraction, rarefied aerodynamics, and elastic bending. We systematically sweep through the multidimensional parameter space to quantify the pattern morphology as a function of the relevant material, flow, and geometric parameters. Our findings are in good agreement with available experimental data, and a scaling analysis of the relevant forces helps rationalize our observations.

  9. A hybrid model for river water temperature as a function of air temperature and discharge

    NASA Astrophysics Data System (ADS)

    Toffolon, Marco; Piccolroaz, Sebastiano

    2015-11-01

    Water temperature controls many biochemical and ecological processes in rivers and in principle depends on multiple factors. Here we formulate a model to predict daily averaged river water temperature as a function of air temperature and discharge, with the latter variable being more relevant in some specific cases (e.g., snowmelt-fed rivers, rivers affected by hydropower production). The model uses a hybrid formulation: a physically based structure combined with a stochastic calibration of the parameters. Interpreting the parameter values allows for a better understanding of river thermal dynamics and for identification of the most relevant factors affecting them. The satisfactory agreement of different versions of the model with measurements in three different rivers (root mean square error smaller than 1 °C at a daily timescale) suggests that the proposed model can serve as a useful tool for synthetically describing medium- and long-term behavior and for capturing the changes induced by varying external conditions.
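A minimal sketch of the hybrid idea follows, assuming a deliberately simplified linear water-temperature update (not the authors' actual formulation) whose coefficients are calibrated stochastically by random search against synthetic data:

```python
import math
import random

def simulate(a1, a2, a3, Ta, Tw0=5.0):
    # simplified physically based structure: daily water-temperature update
    # driven by air temperature Ta with linear relaxation (illustrative only)
    Tw = [Tw0]
    for t in range(len(Ta) - 1):
        Tw.append(Tw[-1] + a1 + a2 * Ta[t] - a3 * Tw[-1])
    return Tw

def rmse(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# synthetic "observations" generated from known parameters
Ta = [10.0 + 8.0 * math.sin(2 * math.pi * t / 365.0) for t in range(365)]
obs = simulate(0.5, 0.2, 0.3, Ta)

# stochastic calibration: random search over plausible parameter ranges
random.seed(0)
best, best_rmse = None, float("inf")
for _ in range(5000):
    cand = (random.uniform(0, 1), random.uniform(0, 0.5), random.uniform(0, 0.5))
    err = rmse(simulate(*cand, Ta), obs)
    if err < best_rmse:
        best, best_rmse = cand, err

print(best, best_rmse)
```

On this toy data the calibrated RMSE falls well below 1 °C, echoing the level of agreement the abstract reports for real rivers.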

  10. The influence of petroleum products on the methane fermentation process.

    PubMed

    Choromański, Paweł; Karwowska, Ewa; Łebkowska, Maria

    2016-01-15

    In this study the influence of two petroleum products, diesel fuel and spent engine oil, on the sewage sludge digestion process and on biogas production efficiency was investigated. Microbiological, chemical and enzymatic analyses were applied in the survey. It was revealed that the influence of petroleum derivatives on the effectiveness of the methane fermentation of sewage sludge depends on the type of petroleum product: diesel fuel did not limit the biogas production or the methane concentration in the biogas, whereas spent engine oil significantly reduced the process efficacy. The changes in physicochemical parameters, excluding COD, did not reflect the effect of the tested substances. A negative influence of the petroleum products on individual bacterial groups was observed after 7 days of the process, while after 14 days some adaptive mechanisms probably appeared. The dehydrogenase activity assessment was the most relevant parameter for evaluating the effect of petroleum product contamination. Diesel fuel was probably used as a source of carbon and energy in the process, whereas a toxic influence was observed in the case of spent engine oil. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. cm-scale variations of crystal orientation fabric in cold Alpine ice core from Colle Gnifetti

    NASA Astrophysics Data System (ADS)

    Kerch, Johanna; Weikusat, Ilka; Eisen, Olaf; Wagenbach, Dietmar; Erhardt, Tobias

    2015-04-01

    Analysis of the microstructural parameters of ice has long been an important part of ice core studies, so far mainly in polar cores, in order to obtain information about physical processes (e.g. deformation, recrystallisation) on the micro- and macro-scale within an ice body. More recently, the influence of impurities and of climatic conditions during snow accumulation on these processes has come into focus. A deeper understanding of how palaeoclimate proxies interact with physical properties of the ice matrix bears relevance for palaeoclimatic interpretations, improved geophysical measurement techniques and the furthering of ice-dynamical modelling. Variations in microstructural parameters, e.g. crystal orientation fabric or grain size, can be observed on scales of hundreds and tens of metres, but also on a centimetre scale. The underlying processes are not necessarily the same on all scales, and for the short-scale variations in particular many questions remain unanswered. We present results from a study that investigates the following hypotheses: 1. Variations in grain size and fabric, i.e. strong changes in the orientation of ice crystals with respect to the vertical, occur on a centimetre scale and can be observed at all depths of an ice core. 2. Palaeoclimate proxies such as dust and impurities affect the microstructural processes and thus induce the observed short-scale variations in grain size and fabric. 3. The interaction of proxies with the ice matrix leads to depth intervals that show correlating behaviour, as well as ranges with anticorrelation between microstructural parameters and palaeoclimatic proxies; the respective processes need to be identified. Fabric Analyser measurements were conducted on more than 80 samples (a total of 8 m) from different depth ranges of a cold Alpine ice core (72 m long) drilled in 2013 at Colle Gnifetti, Switzerland/Italy.
Results were obtained by automatic image processing, providing estimates of grain size distributions and crystal orientation fabric, and were compared with data from continuous flow analysis of chemical impurities. A microstructural characterisation of the analysed core is presented, with emphasis on the observed variations in crystal orientation fabric. The relevance of these results for palaeoclimate reconstruction and for geophysical applications in ice is discussed.

  12. Photofragmentation, state interaction, and energetics of Rydberg and ion-pair states: Resonance enhanced multiphoton ionization of HI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hróðmarsson, Helgi Rafn; Wang, Huasheng; Kvaran, Ágúst, E-mail: agust@hi.is

    2014-06-28

    Mass-resolved resonance enhanced multiphoton ionization data for hydrogen iodide (HI), for two-photon resonance excitation to Rydberg and ion-pair states in the 69 600–72 400 cm⁻¹ region, were recorded and analyzed. The analysis focused on spectral perturbations due to homogeneous and heterogeneous interactions between Rydberg and ion-pair states, which appear as deformations in line positions, line intensities, and line widths. Parameters relevant to photodissociation processes, state-interaction strengths, and spectroscopic parameters for the deperturbed states were derived. Overall interaction and dynamical schemes to describe the observations are proposed.

  13. Analysis of Size Correlations for Microdroplets Produced by Ultrasonic Atomization

    PubMed Central

    Barba, Anna Angela; d'Amore, Matteo

    2013-01-01

    Microencapsulation techniques are widely applied in pharmaceutical production to control drug release over time and in physiological environments. Ultrasonic-assisted atomization is a new technique for producing microencapsulated systems by a mechanical approach. Interest in this technique stems from its advantages over more conventional techniques: a low level of mechanical stress on the materials, reduced energy requirements, and smaller apparatus size. In this paper, the groundwork of atomization is introduced, the role of the relevant parameters in the ultrasonic atomization mechanism is discussed, and correlations for predicting droplet size from process parameters and material properties are presented and tested. PMID:24501580
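One classical correlation of this kind is Lang's relation, which predicts the number-median droplet diameter as a fixed fraction (about 0.34) of the capillary wavelength on the vibrating surface. The sketch below applies it to water atomized at 40 kHz; the property values are illustrative round numbers:

```python
import math

def lang_diameter(sigma, rho, freq):
    """Median droplet diameter from Lang's correlation.

    sigma: surface tension (N/m), rho: liquid density (kg/m^3),
    freq: ultrasonic excitation frequency (Hz).
    """
    capillary_wavelength = (8.0 * math.pi * sigma / (rho * freq ** 2)) ** (1.0 / 3.0)
    return 0.34 * capillary_wavelength

# water atomized at 40 kHz: sigma ~ 0.072 N/m, rho ~ 1000 kg/m^3
d = lang_diameter(0.072, 1000.0, 40e3)
print(f"{d * 1e6:.1f} um")  # ~35 um
```

The inverse two-thirds dependence on frequency is the practically important feature: raising the excitation frequency is the main lever for producing smaller droplets, hence smaller microcapsules.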

  14. Wavelets for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-10-01

    Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.
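The edge-detection idea can be seen already in a single level of the Haar wavelet transform (shown in 1-D for brevity; the 2-D case applies the same averaging and differencing along rows and then columns). Detail coefficients are near zero in smooth regions and large at intensity steps such as a hand silhouette boundary:

```python
def haar_step(signal):
    # one level of the Haar wavelet transform: pairwise averages
    # (approximation, "compression") and differences (detail, "edges")
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

row = [1, 1, 1, 9, 9, 9, 9, 9]        # an intensity step, e.g. background|hand
approx, detail = haar_step(row)
print(approx)   # [1.0, 5.0, 9.0, 9.0]  -> coarse version of the row
print(detail)   # [0.0, -4.0, 0.0, 0.0] -> large magnitude marks the edge
```

Quadrature mirror filterbanks generalize exactly this averaging/differencing pair to longer filters, which is why they implement the wavelet transforms used in the project.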

  15. Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli.

    PubMed

    Keil, Andreas

    2006-01-01

    Emotions can be viewed as action dispositions, preparing an individual to act efficiently and successfully in situations of behavioral relevance. To initiate optimized behavior, it is essential to accurately process the perceptual elements indicative of emotional relevance. The present chapter discusses effects of affective content on neural and behavioral parameters of perception, across different information channels. Electrocortical data are presented from studies examining affective perception with pictures and words in different task contexts. As a main result, these data suggest that sensory facilitation has an important role in affective processing. Affective pictures appear to facilitate perception as a function of emotional arousal at multiple levels of visual analysis. If the discrimination between affectively arousing vs. nonarousing content relies on fine-grained differences, amplification of the cortical representation may occur as early as 60-90 ms after stimulus onset. Affectively arousing information as conveyed via visual verbal channels was not subject to such very early enhancement. However, electrocortical indices of lexical access and/or activation of semantic networks showed that affectively arousing content may enhance the formation of semantic representations during word encoding. It can be concluded that affective arousal is associated with activation of widespread networks, which act to optimize sensory processing. On the basis of prioritized sensory analysis for affectively relevant stimuli, subsequent steps such as working memory, motor preparation, and action may be adjusted to meet the adaptive requirements of the situation perceived.

  16. Breath biomarkers for lung cancer detection and assessment of smoking related effects--confounding variables, influence of normalization and statistical algorithms.

    PubMed

    Kischkel, Sabine; Miekisch, Wolfram; Sawacki, Annika; Straker, Eva M; Trefz, Phillip; Amann, Anton; Schubert, Jochen K

    2010-11-11

    Up to now, none of the breath biomarkers or marker sets proposed for cancer recognition has reached clinical relevance. Possible reasons are the lack of standardized methods of sampling, analysis and data processing, and the effects of environmental contaminants. Concentration profiles of endogenous and exogenous breath markers were determined in the exhaled breath of 31 lung cancer patients, 31 smokers and 31 healthy controls by means of SPME-GC-MS. Different correction and normalization algorithms and a principal component analysis were applied to the data. Differences in exhalation profiles between cancer and non-cancer patients did not persist when physiology and confounding variables were taken into account. Smoking history, inspired substance concentrations, age and gender were recognized as the most important confounding variables. Normalization onto PCO2 or BSA, or correction for inspired concentrations, only partially solved the problem. In contrast, previous smoking behaviour could be recognized unequivocally. Exhaled substance concentrations may depend on a variety of parameters other than the disease under investigation. Normalization and correction parameters have to be chosen with care, as compensating effects may differ from one substance to another. Only well-founded biomarker identification, normalization and data processing will provide clinically relevant information from breath analysis. 2010 Elsevier B.V. All rights reserved.
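The effect of correcting for inspired (ambient) concentrations can be illustrated with toy numbers: a hypothetical marker that appears elevated in one group purely because of higher ambient levels no longer differs once the inspired concentration is subtracted from the exhaled one, the kind of correction the abstract alludes to. All values below are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

# hypothetical exhaled and inspired (room-air) concentrations, arbitrary units
group_a_exhaled  = [12.0, 13.0, 11.5]
group_a_inspired = [8.0, 9.0, 7.5]     # group A sampled in a contaminated room
group_b_exhaled  = [5.0, 4.5, 5.5]
group_b_inspired = [1.0, 0.5, 1.5]

raw_diff = mean(group_a_exhaled) - mean(group_b_exhaled)
corr_a = [e - i for e, i in zip(group_a_exhaled, group_a_inspired)]
corr_b = [e - i for e, i in zip(group_b_exhaled, group_b_inspired)]
corrected_diff = mean(corr_a) - mean(corr_b)

print(raw_diff)        # ~7.2: the raw group difference looks large...
print(corrected_diff)  # 0.0: ...but vanishes after correction
```

This is why the abstract stresses that an apparent cancer/non-cancer difference must survive such corrections before it can count as a biomarker.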

  17. Real-time control data wrangling for development of mathematical control models of technological processes

    NASA Astrophysics Data System (ADS)

    Vasilyeva, N. V.; Koteleva, N. I.; Fedorova, E. R.

    2018-05-01

    The relevance of this research stems from the need to stabilize the composition of the melting products of copper-nickel sulfide raw materials in the Vanyukov furnace. The goal of this research is to identify the most suitable methods for aggregating real-time data for the development of a mathematical model for control of the technological process of melting copper-nickel sulfide raw materials in the Vanyukov furnace. Statistical methods for analyzing the historical data of a real technological object, together with correlation analysis of the process parameters, are described. Factors that exert the greatest influence on the main output parameter (the copper content in matte) and ensure the physical-chemical transformations are revealed. An approach to processing the real-time data for the development of a mathematical control model of the melting process is proposed, and the stages of processing the real-time information are considered. The adopted methodology for aggregating data suitable for the development of a control model for the technological process of melting copper-nickel sulfide raw materials in the Vanyukov furnace allows us to interpret the obtained results for their further practical application.
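The correlation-analysis step can be sketched as follows: compute the Pearson correlation between each candidate input parameter and the output (copper content in matte), then rank inputs by the magnitude of the correlation. The data and parameter names here are invented for illustration:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# invented historical records: process inputs vs copper content in matte
copper_in_matte = [52.0, 55.0, 58.0, 61.0, 64.0]
inputs = {
    "oxygen_flow":   [100.0, 110.0, 121.0, 130.0, 139.0],  # strongly related
    "feed_moisture": [3.1, 2.9, 3.2, 3.0, 3.1],            # essentially noise
}

ranked = sorted(inputs, key=lambda k: abs(pearson(inputs[k], copper_in_matte)),
                reverse=True)
print(ranked)  # ['oxygen_flow', 'feed_moisture']
```

In practice such a ranking is only a screening step; the retained parameters then feed the mathematical control model described in the abstract.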

  18. Chemometrics-based process analytical technology (PAT) tools: applications and adaptation in pharmaceutical and biopharmaceutical industries.

    PubMed

    Challa, Shruthi; Potumarthi, Ravichandra

    2013-01-01

    Process analytical technology (PAT) is used to monitor and control critical process parameters in raw materials and in-process products in order to maintain critical quality attributes and build quality into the product. Process analytical technology can be successfully implemented in the pharmaceutical and biopharmaceutical industries not only to impart quality to the products but also to prevent out-of-specification results and improve productivity. PAT implementation eliminates the drawbacks of traditional methods, which involve excessive sampling, and facilitates rapid testing through direct sampling without destruction of the sample. However, to successfully adapt PAT tools to pharmaceutical and biopharmaceutical environments, a thorough understanding of the process is needed, along with mathematical and statistical tools to analyze the large multidimensional spectral data generated by PAT tools. Chemometrics is a chemical discipline that incorporates both statistical and mathematical methods to obtain and analyze relevant information from PAT spectral tools. Applications of commonly used PAT tools in combination with appropriate chemometric methods, along with their advantages and working principles, are discussed. Finally, the systematic application of PAT tools in a biopharmaceutical environment to control critical process parameters for achieving product quality is represented diagrammatically.

  19. 3D-liquid chromatography as a complex mixture characterization tool for knowledge-based downstream process development.

    PubMed

    Hanke, Alexander T; Tsintavi, Eleni; Ramirez Vazquez, Maria Del Pilar; van der Wielen, Luuk A M; Verhaert, Peter D E M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel

    2016-09-01

    Knowledge-based development of chromatographic separation processes requires efficient techniques to determine the physicochemical properties of the product and of the impurities to be removed. These characterization techniques are usually divided into approaches that determine molecular properties, such as charge, hydrophobicity and size, and approaches that determine molecular interactions with auxiliary materials, commonly in the form of adsorption isotherms. In this study we demonstrate the application of a three-dimensional liquid chromatography approach to a clarified cell homogenate containing a therapeutic enzyme. Each separation dimension determines a molecular property relevant to the chromatographic behavior of each component. Matching the peaks across the different separation dimensions and against a high-resolution reference chromatogram allows the determined parameters to be assigned to pseudo-components, making it possible to identify the most promising technique for the removal of each impurity. More detailed process design using mechanistic models requires isotherm parameters. For this purpose, the second dimension consists of multiple linear gradient separations on columns in a high-throughput-screening-compatible format, which allow regression of isotherm parameters with an average standard error of 8%. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1283-1291, 2016. © 2016 American Institute of Chemical Engineers.

  20. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space, and brute-force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but also is more efficient than brute-force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
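The entropy criterion itself is easy to state: for each candidate experiment, predict the outcome under each plausible model and score the experiment by the Shannon entropy of the resulting outcome distribution; the most relevant experiment is the one the models disagree about most. A minimal sketch (the models, experiments, and outcome binning are invented):

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    # entropy of the empirical distribution of a list of outcomes
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# three plausible models of a linear system, differing only in slope (invented)
models = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0)]
candidate_experiments = [0.0, 0.3, 2.0]   # candidate input settings

def relevance(x):
    # entropy of the (coarsely binned) outcomes predicted by the model set
    predictions = [round(m(x)) for m in models]
    return shannon_entropy(predictions)

best = max(candidate_experiments, key=relevance)
print(best)  # 2.0 -- the setting about which the models disagree most
```

Nested entropy sampling replaces this exhaustive `max` over all candidates with a rising entropy threshold over a maintained sample set, which is what makes it cheaper than brute-force search in high-dimensional experiment spaces.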

  1. Clustervision: Visual Supervision of Unsupervised Clustering.

    PubMed

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice it is quite difficult for data scientists to choose and parameterize algorithms so as to get clustering results relevant to their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks the clustering results using five quality metrics. In addition, users can guide the system toward more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high-quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach in a case study with a team of researchers in the medical domain and show that our system empowers users to choose an effective representation of their complex data.
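The core loop of such a tool, clustering under several parameterizations and then ranking the results by a quality metric, can be sketched with a tiny pure-Python k-means ranked by mean silhouette score. This is a deliberately minimal stand-in: Clustervision itself uses multiple techniques and five metrics, and the toy 1-D data below is invented:

```python
def kmeans(points, k, iters=20):
    # Lloyd's algorithm on 1-D data with deterministic seeding (a sketch)
    centers = sorted(set(points))[:k]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(p - centers[j])) for p in points]
        centers = [sum(p for p, l in zip(points, labels) if l == j) /
                   max(1, sum(1 for l in labels if l == j)) for j in range(k)]
    return labels

def mean_silhouette(points, labels):
    # average silhouette score: near 1 means tight, well-separated clusters
    scores = []
    for i, p in enumerate(points):
        own = [q for q, l in zip(points, labels) if l == labels[i]]
        if len(own) == 1:
            scores.append(0.0)  # singleton-cluster convention
            continue
        a = sum(abs(p - q) for q in own) / (len(own) - 1)
        b = min(sum(abs(p - q) for q in grp) / len(grp)
                for grp in ([q for q, l in zip(points, labels) if l == m]
                            for m in set(labels) - {labels[i]}))
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2]       # toy data with two clear groups
results = {k: kmeans(points, k) for k in (2, 3, 4)}
ranked = sorted(results, key=lambda k: mean_silhouette(points, results[k]),
                reverse=True)
print(ranked[0])  # 2
```

The visual-analytics layer then lets the user inspect the ranked candidates rather than accepting the top score blindly, which matters because no single quality metric is reliable for every task.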

  2. Spatio-Temporal Regression Based Clustering of Precipitation Extremes in a Presence of Systematically Missing Covariates

    NASA Astrophysics Data System (ADS)

    Kaiser, Olga; Martius, Olivia; Horenko, Illia

    2017-04-01

    Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all of the relevant covariates. However, in real applications the complete set of relevant covariates might not be available. In this context, it has been shown that, under weak assumptions, the influence of systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric and adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of the threshold excesses is described by a set of locally stationary GPD models, whose parameters are expressed as regression models, and by a non-parametric spatio-temporal hidden switching process. By exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) to resolve the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent-class models such as Mixture Models and Hidden Markov Models. Additionally, the presented FEM-BV-GPD approach provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series from 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.

  3. Numerical study on injection parameters optimization of thin wall and biodegradable polymers parts

    NASA Astrophysics Data System (ADS)

    Santos, C.; Mendes, A.; Carreira, P.; Mateus, A.; Malça, C.

    2017-07-01

    Nowadays, the mold industry is searching for new markets with diversified, added-value products. The production of thin-walled, biodegradable parts, mostly manufactured by the injection process, has assumed particular importance due to environmental and economic factors. The growth of a global consciousness about the harmful effects of conventional polymers on our quality of life, together with the legislation imposed, has become a key factor in the choice of a particular product by the consumer. The aim of this work is to provide an integrated solution for the injection of parts with thin walls manufactured from biodegradable materials. This integrated solution includes the design and manufacture of the mold as well as finding the optimum values of the injection parameters in order to make the process effective and competitive. For this, the Moldflow software was used. It was demonstrated that this computational tool provides effective responsiveness and can constitute an important tool for supporting the injection molding of thin-walled and biodegradable parts.

  4. Dynamic quantitative photothermal monitoring of cell death of individual human red blood cells upon glucose depletion

    NASA Astrophysics Data System (ADS)

    Vasudevan, Srivathsan; Chen, George Chung Kit; Andika, Marta; Agarwal, Shuchi; Chen, Peng; Olivo, Malini

    2010-09-01

    Red blood cells (RBCs) have been found to undergo "programmed cell death," or eryptosis, and understanding this process can provide more information about the apoptosis of nucleated cells. Photothermal (PT) response, a label-free and noninvasive technique, is proposed as a tool to monitor the cell death process of living human RBCs upon glucose depletion. Since the physiological status of dying cells is highly sensitive to photothermal parameters (e.g., thermal diffusivity, absorption, etc.), we applied the linear PT response to continuously monitor the death mechanism of RBCs deprived of glucose. The kinetics of the assay, in which the cell's PT response transforms from the linear to the nonlinear regime, is reported. In addition, quantitative monitoring was performed by extracting the relevant photothermal parameters from the PT response. A twofold increase in thermal diffusivity and a reduction in size were found in the linear PT response during cell death. Our results reveal that the photothermal parameters change earlier than phosphatidylserine externalization (used in fluorescence studies), allowing us to detect the initial stage of eryptosis in a quantitative manner. Hence, the proposed tool, in addition to detecting eryptosis earlier than fluorescence, could also reveal the physiological status of the cells through quantitative photothermal parameter extraction.

  5. Collisional excitation of CO by H2O - An astrophysicist's guide to obtaining rate constants from coherent anti-Stokes Raman line shape data

    NASA Technical Reports Server (NTRS)

    Green, Sheldon

    1993-01-01

    Rate constants for the excitation of CO by collisions with H2O are needed to understand recent observations of comet spectra. These collision rates are closely related to spectral line shape parameters, especially those for Raman Q-branch spectra. Because such spectra have become quite important for thermometry applications, much effort has been invested in understanding this process. Although it is not generally possible to extract state-to-state rate constants directly from the data, as there are too many unknowns, if the matrix of state-to-state rates can be expressed in terms of a rate-law model which depends only on rotational quantum numbers plus a few parameters, the parameters can be determined from the data; this has been done with some success for many systems, especially those relevant to combustion processes. Although such an analysis has not yet been done for CO-H2O, this system is expected to behave similarly to N2-H2O, which has been well studied; modifications of the parameters for the latter system are suggested which should provide a reasonable description of rate constants for the former.

  6. A deterministic (non-stochastic) low frequency method for geoacoustic inversion.

    PubMed

    Tolstoy, A

    2010-06-01

    It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
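The shrinking-set logic is straightforward to sketch: exhaustively score a parameter grid at the lowest frequency, keep only combinations whose match exceeds a threshold, and re-score the survivors as frequency increases. The toy "field" and matched-field score below are invented one-parameter stand-ins for the propagation model and the MFP processor (the real method searches five parameters and multiple ranges):

```python
import math

def field(p, freq):
    # invented surrogate for the modeled acoustic field at one phone
    return math.cos(p * freq / 100.0)

def mfp_match(p, freq, measured):
    # surrogate matched-field score: 1 at a perfect match, decaying with mismatch
    return 1.0 / (1.0 + abs(field(p, freq) - measured))

true_p = 1.5                                         # e.g. a scaled sediment property
grid = [round(i * 0.05, 2) for i in range(20, 41)]   # candidate parameter values
frequencies = [25.0, 50.0, 100.0, 200.0]             # low frequencies first
threshold = 1.0 / 1.01                               # keep near-perfect matches only

candidates = grid
history = []
for f in frequencies:
    measured = field(true_p, f)                      # the "data" at this frequency
    candidates = [p for p in candidates if mfp_match(p, f, measured) >= threshold]
    history.append(len(candidates))

print(history, candidates)  # the candidate set shrinks; 1.5 survives
```

Because each frequency only intersects the surviving set rather than re-searching the full grid, ambiguous low-frequency sidelobes are progressively eliminated, mirroring the ever-shrinking set described in the abstract.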

  7. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.

  8. EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space

    NASA Astrophysics Data System (ADS)

    Koepke, Mark

    2008-07-01

    The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between both basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between both laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory with intuition provided by the laboratory experiments. Ideas on parallel electric field, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the accelerations of electrons and ions in the geospace plasma. The processes range from meso-scale of several thousands of kilometers to micro-scale of a few meters to kilometers. 
Papers included in this special issue serve to synthesise our current understanding of processes related to the coupling and feedback at disparate scales. Categories of topics included here are (1) ionospheric physics and (2) Alfvén-wave physics, both of which are related to the particle acceleration responsible for auroral displays, (3) whistler-mode triggering mechanism, which is relevant to radiation-belt dynamics, (4) plasmoid encountering a barrier, which has applications throughout the realm of space and astrophysical plasmas, and (5) laboratory investigations of the entire magnetosphere or the plasma surrounding the magnetosphere. The papers are ordered from processes that take place nearest the Earth to processes that take place at increasing distances from Earth. Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modeling and/or laboratory experiments. Observations from space-borne instruments are typically interpreted using theoretical models developed to predict the properties and dynamics of space and astrophysical plasmas. The usefulness of customized laboratory experiments for providing confirmation of theory by identifying, isolating, and studying physical phenomena efficiently, quickly, and economically has been demonstrated in the past. The benefits of laboratory experiments to investigating space-plasma physics are their reproducibility, controllability, diagnosability, reconfigurability, and affordability compared to a satellite mission or rocket campaign. Certainly, the plasma being investigated in a laboratory device is quite different from that being measured by a spaceborne instrument; nevertheless, laboratory experiments discover unexpected phenomena, benchmark theoretical models, develop physical insight, establish observational signatures, and pioneer diagnostic techniques. 
Explicit reference to such beneficial laboratory contributions is occasionally left out of the citations in the space-physics literature in favor of theory-paper counterparts and, thus, the scientific support that laboratory results can provide to the development of space-relevant theoretical models is often under-recognized. It is unrealistic to expect the dimensional parameters corresponding to space plasma to be matchable in the laboratory. However, a laboratory experiment is considered well designed if the subset of parameters relevant to a specific process shares the same phenomenological regime as the subset of analogous space parameters, even if less important parameters are mismatched. Regime boundaries are assigned by normalizing a dimensional parameter to an appropriate reference or scale value to make it dimensionless and noting the values at which transitions occur in the physical behavior or approximations. An example of matching regimes for cold-plasma waves is finding a 45° diagonal line on the log-log CMA diagram along which lie both a laboratory-observed wave and a space-observed wave. In such a circumstance, a space plasma and a lab plasma will support the same kind of modes if the dimensionless parameters are scaled properly (Bellan 2006 Fundamentals of Plasma Physics (Cambridge: Cambridge University Press) p 227). The plasma source, configuration geometry, and boundary conditions associated with a specific laboratory experiment are characteristic elements that affect the plasma and plasma processes that are being investigated. Space plasma is not exempt from an analogous set of constraining factors that likewise influence the phenomena that occur. Typically, each morphologically distinct region of space has associated with it plasma that is unique by virtue of the various mechanisms responsible for the plasma's presence there, as if the plasma were produced by a unique source. 
Boundary effects that typically constrain the possible parameter values to lie within one or more restricted ranges are inescapable in laboratory plasma. The goal of a laboratory experiment is to examine the relevant physics within these ranges and extrapolate the results to space conditions that may or may not be subject to any restrictions on the values of the plasma parameters. The interrelationship between laboratory and space plasma experiments has been cultivated at a low level and the potential scientific benefit in this area has yet to be realized. The few but excellent examples of joint papers, joint experiments, and directly relevant cross-disciplinary citations are a direct result of the emphasis placed on this interrelationship two decades ago. Building on this special issue Plasma Physics and Controlled Fusion plans to create a dedicated webpage to highlight papers directly relevant to this field published either in the recent past or in the future. It is hoped that this resource will appeal to the readership in the laboratory-experiment and space-plasma communities and improve the cross-fertilization between them.

  9. The state of the art of the impact of sampling uncertainty on measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Leite, V. J.; Oliveira, E. C.

    2018-03-01

    Measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not hold for sampling uncertainty, which has been neglected because it faces several practical obstacles and there is little clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims to describe the state of the art of sampling uncertainty and to assess its relevance to measurement uncertainty.
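The split into sampling and analytical contributions is commonly formalized by combining independent standard uncertainties in quadrature (as in the GUM and the Eurachem sampling guide). A minimal Python sketch of that combination, with purely illustrative values:

```python
import math

def combined_uncertainty(u_sampling, u_analytical):
    """Combine sampling and analytical standard uncertainties by
    root-sum-of-squares (assumes the two contributions are independent)."""
    return math.sqrt(u_sampling**2 + u_analytical**2)

# Hypothetical values, in the same units as the measured result.
# Sampling often dominates in field measurements.
u_s, u_a = 0.12, 0.05
u_c = combined_uncertainty(u_s, u_a)
U = 2.0 * u_c  # expanded uncertainty with coverage factor k = 2 (~95 %)
```

Because the contributions add in quadrature, the smaller term is quickly swamped: here the analytical term raises the combined uncertainty only slightly above the sampling term alone.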

  10. Simulation of plasma loading of high-pressure RF cavities

    NASA Astrophysics Data System (ADS)

    Yu, K.; Samulyak, R.; Yonehara, K.; Freemire, B.

    2018-01-01

    Muon-beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry-air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have been performed in the range of parameters typical of practical muon cooling channels.

  11. Laboratory Modelling of Volcano Plumbing Systems: a review

    NASA Astrophysics Data System (ADS)

    Galland, Olivier; Holohan, Eoghan P.; van Wyk de Vries, Benjamin; Burchardt, Steffi

    2015-04-01

    Earth scientists have, since the 19th century, tried to replicate or model geological processes in controlled laboratory experiments. In particular, laboratory modelling has been used to study the development of volcanic plumbing systems, which sets the stage for volcanic eruptions. Volcanic plumbing systems involve complex processes that act at length scales of microns to thousands of kilometres and at time scales from milliseconds to billions of years, and laboratory models appear very suitable to address them. This contribution reviews laboratory models dedicated to studying the dynamics of volcano plumbing systems (Galland et al., Accepted). The foundation of laboratory models is the choice of relevant model materials, both for rock and magma. We outline a broad range of suitable model materials used in the literature. These materials exhibit very diverse rheological behaviours, so their careful choice is a crucial first step in proper experiment design. The second step is model scaling, which successively calls upon: (1) the principle of dimensional analysis, and (2) the principle of similarity. The dimensional analysis aims to identify the dimensionless physical parameters that govern the underlying processes. The principle of similarity states that "a laboratory model is equivalent to its geological analogue if the dimensionless parameters identified in the dimensional analysis are identical, even if the values of the governing dimensional parameters differ greatly" (Barenblatt, 2003). The application of these two steps ensures a solid understanding and geological relevance of the laboratory models. In addition, this procedure shows that laboratory models are not designed to exactly mimic a given geological system, but to understand underlying generic processes, either individually or in combination, and to identify or demonstrate physical laws that govern these processes. 
From this perspective, we review the numerous applications of laboratory models to understand the distinct key features of volcanic plumbing systems: dykes, cone sheets, sills, laccoliths, caldera-related structures, ground deformation, magma/fault interactions, and explosive vents. Barenblatt, G.I., 2003. Scaling. Cambridge University Press, Cambridge. Galland, O., Holohan, E.P., van Wyk de Vries, B., Burchardt, S., Accepted. Laboratory modelling of volcanic plumbing systems: A review, in: Breitkreuz, C., Rocchi, S. (Eds.), Laccoliths, sills and dykes: Physical geology of shallow level magmatic systems. Springer.
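The two scaling steps described above can be sketched numerically: compute a governing dimensionless number for both the natural system and the laboratory model, then check whether the two fall in the same regime. The sketch below uses the Reynolds number with entirely hypothetical magma/analogue values; the one-order-of-magnitude tolerance in `same_regime` is an illustrative assumption, not a rule from the review:

```python
import math

def reynolds(rho, v, L, mu):
    """Reynolds number Re = rho*v*L/mu: dimensionless ratio of
    inertial to viscous forces."""
    return rho * v * L / mu

# Hypothetical values for a basaltic dyke and a syrup-injection analogue.
Re_nature = reynolds(rho=2600.0, v=1e-2, L=1.0, mu=1e4)   # magma intrusion
Re_lab = reynolds(rho=1400.0, v=1e-3, L=0.01, mu=5.0)     # lab analogue

def same_regime(a, b, tol_decades=1.0):
    """Loose similarity check: are two dimensionless numbers within
    roughly one order of magnitude of each other?"""
    return abs(math.log10(a) - math.log10(b)) <= tol_decades
```

Note that both illustrative Reynolds numbers come out far below 1, i.e. both flows are viscosity-dominated, which is the kind of regime matching (rather than exact value matching) that the similarity principle requires.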

  12. Design of a Data Catalogue for Perdigão-2017 Field Experiment: Establishing the Relevant Parameters, Post-Processing Techniques and Users Access

    NASA Astrophysics Data System (ADS)

    Palma, J. L.; Belo-Pereira, M.; Leo, L. S.; Fernando, J.; Wildmann, N.; Gerz, T.; Rodrigues, C. V.; Lopes, A. S.; Lopes, J. C.

    2017-12-01

    Perdigão is the largest of a series of wind-mapping studies embedded in the on-going NEWA (New European Wind Atlas) Project. The intensive observational period of the Perdigão field experiment resulted in an unprecedented volume of data, covering several wind conditions through 46 consecutive days between May and June 2017. For researchers looking into specific events, it is time consuming to scrutinise the datasets looking for appropriate conditions. Such a task becomes harder if the parameters of interest were not measured directly, instead requiring their computation from the raw datasets. This work will present the e-Science platform developed by the University of Porto for the Perdigão dataset. The platform will assist scientists of Perdigão and the larger scientific community in extracting the datasets associated with specific flow regimes of interest, as well as automatically performing post-processing/filtering operations internally in the platform. We will illustrate the flow regime categories identified in Perdigão based on several parameters such as weather type classification, cloud characteristics, as well as stability regime indicators (Brunt-Väisälä frequency, Scorer parameter, potential temperature inversion heights, dimensionless Richardson and Froude numbers) and wind regime indicators. Examples of some of the post-processing techniques available in the e-Science platform, such as the Savitzky-Golay low-pass filtering technique, will also be presented.
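As an illustration of one of the post-processing techniques named above, the snippet below applies a Savitzky-Golay low-pass filter (a sliding local least-squares polynomial fit) to a synthetic signal using SciPy; the signal and the filter settings are illustrative, not taken from the Perdigão platform:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy time series standing in for a raw measurement channel.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 501)
raw = np.sin(t) + 0.1 * rng.standard_normal(t.size)

# Savitzky-Golay filter: fit a cubic by least squares in each sliding
# window. window_length must be odd and greater than polyorder.
smooth = savgol_filter(raw, window_length=51, polyorder=3)
```

A useful property of this filter is that it passes polynomials of degree up to `polyorder` through unchanged, so slow trends and local extrema are preserved far better than with a plain moving average.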

  13. Mesoscale, Radiometrically Referenced, Multi-Temporal Hyperspectral Data for Co2 Leak Detection by Locating Spatial Variation of Biophysically Relevant Parameters

    NASA Astrophysics Data System (ADS)

    McCann, Cooper Patrick

    Low-cost flight-based hyperspectral imaging systems have the potential to provide valuable information for ecosystem and environmental studies as well as aid in land management and land health monitoring. This thesis describes (1) a bootstrap method of producing mesoscale, radiometrically-referenced hyperspectral data using the Landsat surface reflectance (LaSRC) data product as a reference target, (2) biophysically relevant basis functions to model the reflectance spectra, (3) an unsupervised classification technique based on natural histogram splitting of these biophysically relevant parameters, and (4) local and multi-temporal anomaly detection. The bootstrap method extends standard processing techniques to remove uneven illumination conditions between flight passes, allowing the creation of radiometrically self-consistent data. Through selective spectral and spatial resampling, LaSRC data are used as a radiometric reference target. Advantages of the bootstrap method include minimal site access, no ancillary instrumentation, and automated data processing. Data from a flight on 06/02/2016 were compared with concurrently collected ground-based reflectance spectra as a means of validation, achieving an average error of 2.74%. Fitting reflectance spectra using basis functions based on biophysically relevant spectral features allows both noise and data reduction while shifting information from spectral bands to biophysical features. Histogram splitting is used to determine a clustering based on natural splittings of these fit parameters. The Indian Pines reference data enabled comparisons of the efficacy of this technique to established techniques. The splitting technique is shown to be an improvement over the ISODATA clustering technique with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. 
This improvement is also reflected in the kappa values before/after merging: 24.8/30.5 for the histogram-splitting technique compared to 15.8/28.5 for ISODATA. Three hyperspectral flights over the Kevin Dome area, covering 1843 ha, acquired 06/21/2014, 06/24/2015 and 06/26/2016, are examined with different methods of anomaly detection. Detection of anomalies within a single data set is examined to determine, on a local scale, areas that are significantly different from the surrounding area. Additionally, the detection and identification of persistent and non-persistent anomalies was investigated across multiple data sets.

  14. Dependence of cerebral-cortex activation in women on environmental factors

    NASA Astrophysics Data System (ADS)

    Pavlov, K. I.; Mukhin, V. N.; Kamenskaya, V. G.; Klimenko, V. M.

    2016-12-01

    The investigation of female physiological reactions to different meteorological conditions and space weather is relevant, since there are few experimental findings in this field. The purpose of this work is to determine how the level of cerebral-cortex activity in women depends on the meteorological and cosmophysical parameters of weather and space processes. We studied electroencephalograms (EEGs) recorded at rest in the sitting position with eyes closed. We performed four series of measurements of brain bioelectrical activity from February to June 2013. We found that the level of cortical activity recorded by EEG changed significantly during these 6 months. Significant relationships were detected between cortical activity and the parameters of weather and space processes; namely, an increase in the air temperature and a decrease in the wind speed and cosmic-ray energy result in a decrease in the activity rate of the right occipital lobe.

  15. Amplification of a high-frequency electromagnetic wave by a relativistic plasma

    NASA Technical Reports Server (NTRS)

    Yoon, Peter H.

    1990-01-01

    The amplification of a high-frequency transverse electromagnetic wave by a relativistic plasma component, via the synchrotron maser process, is studied. The background plasma that supports the transverse wave is considered to be cold, and the energetic component whose density is much smaller than that of the background component has a loss-cone feature in the perpendicular momentum space and a finite field-aligned drift speed. The ratio of the background plasma frequency squared to the electron gyrofrequency squared is taken to be sufficiently larger than unity. Such a parameter regime is relevant to many space and astrophysical situations. A detailed study of the amplification process is carried out over a wide range of physical parameters including the loss-cone index, the ratio of the electron mass energy to the temperature of the energetic component, the field-aligned drift speed, the normalized density, and the wave propagation angle.

  16. Foraging for brain stimulation: toward a neurobiology of computation.

    PubMed

    Gallistel, C R

    1994-01-01

    The self-stimulating rat performs foraging tasks mediated by simple computations that use interreward intervals and subjective reward magnitudes to determine stay durations. This is a simplified preparation in which to study the neurobiology of the elementary computational operations that make cognition possible, because the neural signal specifying the value of a computationally relevant variable is produced by direct electrical stimulation of a neural pathway. Newly developed measurement methods yield functions relating the subjective reward magnitude to the parameters of the neural signal. These measurements also show that the decision process that governs foraging behavior divides the subjective reward magnitude by the most recent interreward interval to determine the preferability of an option (a foraging patch). The decision process sets the parameters that determine stay durations (durations of visits to foraging patches) so that the ratios of the stay durations match the ratios of the preferabilities.

  17. Clinical Parameters and Tools for Home-Based Assessment of Parkinson's Disease: Results from a Delphi study.

    PubMed

    Ferreira, Joaquim J; Santos, Ana T; Domingos, Josefa; Matthews, Helen; Isaacs, Tom; Duffen, Joy; Al-Jawad, Ahmed; Larsen, Frank; Artur Serrano, J; Weber, Peter; Thoms, Andrea; Sollinger, Stefan; Graessner, Holm; Maetzler, Walter

    2015-01-01

    Parkinson's disease (PD) is a neurodegenerative disorder with fluctuating symptoms. To aid the development of a system to evaluate people with PD (PwP) at home (the SENSE-PARK system), there was a need to define parameters and tools to be applied in the assessment of 6 domains: gait, bradykinesia/hypokinesia, tremor, sleep, balance and cognition. To identify relevant parameters and assessment tools for the 6 domains, from the perspective of PwP, caregivers and movement disorders specialists, a 2-round Delphi study was conducted to select a core set of parameters and assessment tools to be applied. This process included PwP, caregivers and movement disorders specialists. Two hundred and thirty-three PwP, caregivers and physicians completed the first-round questionnaire, and 50 the second. The results allowed the identification of parameters and assessment tools to be added to the SENSE-PARK system. The most consensual parameters were: Falls and Near Falls; Capability to Perform Activities of Daily Living; Interference with Activities of Daily Living; Capability to Process Tasks; and Capability to Recall and Retrieve Information. The most cited assessment strategies included Walkers; the Evaluation of Performance Doing Fine Motor Movements; Capability to Eat; Assessment of Sleep Quality; Identification of Circumstances and Triggers for Loss of Balance; and Memory Assessment. An agreed set of measuring parameters, tests, tools and devices was achieved to be part of a system to evaluate PwP at home. A pattern of different perspectives was identified for each stakeholder.

  18. Process monitoring and visualization solutions for hot-melt extrusion: a review.

    PubMed

    Saerens, Lien; Vervaet, Chris; Remon, Jean Paul; De Beer, Thomas

    2014-02-01

    Hot-melt extrusion (HME) is applied as a continuous pharmaceutical manufacturing process for the production of a variety of dosage forms and formulations. To ensure the continuity of this process, the quality of the extrudates must be assessed continuously during manufacturing. The objective of this review is to provide an overview and evaluation of the available process analytical techniques which can be applied in hot-melt extrusion. Pharmaceutical extruders are equipped with traditional (univariate) process monitoring tools, observing barrel and die temperatures, throughput, screw speed, torque, drive amperage, melt pressure and melt temperature. The relevance of several spectroscopic process analytical techniques for monitoring and control of pharmaceutical HME has been explored recently. Nevertheless, many other sensors visualizing HME and measuring diverse critical product and process parameters with potential use in pharmaceutical extrusion are available, and have been thoroughly studied in polymer extrusion. The implementation of process analytical tools in HME serves two purposes: (1) improving process understanding by monitoring and visualizing the material behaviour, and (2) monitoring and analysing critical product and process parameters for process control, allowing a desired process state to be maintained and guaranteeing the quality of the end product. This review is the first to provide an evaluation of the process analytical tools applied for pharmaceutical HME monitoring and control, and discusses techniques that have been used in polymer extrusion with potential for monitoring and control of pharmaceutical HME. © 2013 Royal Pharmaceutical Society.

  19. Why the impact of mechanical stimuli on stem cells remains a challenge.

    PubMed

    Goetzke, Roman; Sechi, Antonio; De Laporte, Laura; Neuss, Sabine; Wagner, Wolfgang

    2018-05-04

    Mechanical stimulation affects the growth and differentiation of stem cells. This may be used to guide lineage-specific cell fate decisions and therefore opens fascinating opportunities for stem cell biology and regenerative medicine. Several studies have demonstrated functional and molecular effects of mechanical stimulation, but at first sight these results often appear to be inconsistent. Comparison of such studies is hampered by a multitude of relevant parameters that act in concert. There are notorious differences between species, cell types, and culture conditions. Furthermore, the utilized culture substrates have complex features, such as surface chemistry, elasticity, and topography. Cell culture substrates can vary from simple, flat materials to complex 3D scaffolds. Last but not least, mechanical forces can be applied with different frequency, amplitude, and strength. It is therefore a prerequisite to take all these parameters into consideration when ascribing their specific functional relevance, and to modulate only one parameter at a time when its relevance is addressed. Such research questions can only be investigated by interdisciplinary cooperation. In this review, we focus particularly on mesenchymal stem cells and pluripotent stem cells to discuss relevant parameters that contribute to the kaleidoscope of mechanical stimulation of stem cells.

  20. Forming of complex-shaped composite tubes using optimized bladder-assisted resin transfer molding

    NASA Astrophysics Data System (ADS)

    Schillfahrt, Christian; Fauster, Ewald; Schledjewski, Ralf

    2018-05-01

    This work addresses the manufacturing of tubular composite structures by means of bladder-assisted resin transfer molding using elastomeric bladders. In order to achieve successful processing of such parts, knowledge of the compaction and impregnation behavior of the textile preform is vital. Hence, efficient analytical models that describe the influencing parameters of the preform compaction and filling stage were developed and verified through practical experiments. A process window describing optimal and critical operating conditions during the injection stage was created by evaluating the impact of the relevant process pressures on filling time. Finally, a cascaded injection procedure was investigated that particularly facilitates the manufacturing of long composite tubes.

  1. Principles of quantitation of viral loads using nucleic acid sequence-based amplification in combination with homogeneous detection using molecular beacons.

    PubMed

    Weusten, Jos J A M; Carpay, Wim M; Oosterlaken, Tom A M; van Zuijlen, Martien C A; van de Wiel, Paul A

    2002-03-15

    For quantitative NASBA-based viral load assays using homogeneous detection with molecular beacons, such as the NucliSens EasyQ HIV-1 assay, a quantitation algorithm is required. During the amplification process there is a constant growth in the concentration of amplicons to which the beacon can bind while generating a fluorescence signal. The overall fluorescence curve contains kinetic information on both amplicon formation and beacon binding, but only the former is relevant for quantitation. In the current paper, mathematical modeling of the relevant processes is used to develop an equation describing the fluorescence curve as a function of the amplification time and the relevant kinetic parameters. This equation allows reconstruction of RNA formation, which is characterized by an exponential increase in concentrations as long as the primer concentrations are not rate limiting and by linear growth over time after the primer pool is depleted. During the linear growth phase, the actual quantitation is based on assessing the amplicon formation rate from the viral RNA relative to that from a fixed amount of calibrator RNA. The quantitation procedure has been successfully applied in the NucliSens EasyQ HIV-1 assay.
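The two-phase amplicon growth described above (exponential while primers are in excess, linear once the primer pool is depleted) can be sketched as a toy model. The functional form, rate constants, and shared switch time below are illustrative assumptions, not the published NucliSens EasyQ equation; the sketch only demonstrates that, in the linear phase, the target-to-calibrator ratio of formation rates recovers the ratio of input amounts:

```python
import numpy as np

def amplicon_curve(t, r0, k, t_switch):
    """Illustrative two-phase amplicon growth: exponential while primers
    are in excess (t < t_switch), then linear at the rate reached at the
    moment the primer pool is depleted (t >= t_switch)."""
    a_switch = r0 * np.exp(k * t_switch)
    slope = k * a_switch  # keeps the first derivative continuous
    return np.where(t < t_switch,
                    r0 * np.exp(k * t),
                    a_switch + slope * (t - t_switch))

# Hypothetical target with twice the input RNA of the calibrator.
t = np.linspace(0.0, 60.0, 601)
target = amplicon_curve(t, r0=2.0, k=0.2, t_switch=20.0)
calib = amplicon_curve(t, r0=1.0, k=0.2, t_switch=20.0)
```

In this simplification the linear-phase slopes are proportional to the initial amounts, so their ratio (here 2) is the quantitation readout; the real assay estimates these rates by fitting the full fluorescence curve, which also folds in beacon-binding kinetics.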

  2. Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.

    PubMed

    Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders

    2013-03-12

    In this paper a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two-step process where, initially, polymer parameters are obtained from a previously published pure-polymer dissolution model. The results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of an HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly due to the sparse amount of usable pure-polymer dissolution data. In addition to the case study, a sensitivity analysis of model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    USGS Publications Warehouse

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.

  4. Effective ergodicity breaking in an exclusion process with varying system length

    NASA Astrophysics Data System (ADS)

    Schultens, Christoph; Schadschneider, Andreas; Arita, Chikashi

    2015-09-01

    Stochastic processes of interacting particles in systems with varying length are relevant, e.g., for several biological applications. We explore what kinds of new physical effects can be expected in such systems. As an example, we extend the exclusive queueing process, which can be viewed as a one-dimensional exclusion process with varying length, by introducing Langmuir kinetics. This process can be interpreted as an effective model for a queue that interacts with other queues by allowing customers to enter and leave in the bulk. We find surprising indications of ergodicity breaking in a certain parameter regime, where the asymptotic growth behavior depends on the initial length. We show that a random walk with site-dependent hopping probabilities exhibits qualitatively the same behavior.
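A Monte Carlo sketch of the kind of process described above: a one-dimensional exclusion process whose length grows with arrivals at the back and shrinks via service at the front, with Langmuir-type attachment/detachment in the bulk. The update scheme and the rate values are illustrative simplifications, not the paper's exact model:

```python
import random

def simulate_queue(steps, alpha, beta, omega_a, omega_d, seed=0):
    """Sketch of an exclusion process with varying length: customers
    arrive at the back (prob. alpha per sweep), hop toward the front if
    the next site is empty, and are served at the front (prob. beta).
    Langmuir kinetics lets customers enter (omega_a) or leave (omega_d)
    at bulk sites. Site values are 1 (occupied) or 0 (empty)."""
    rng = random.Random(seed)
    lattice = []  # index 0 = front (service end)
    for _ in range(steps):
        if rng.random() < alpha:
            lattice.append(1)  # arrival extends the system
        if lattice and lattice[0] == 1 and rng.random() < beta:
            lattice[0] = 0  # service removes the front customer
        for i in range(len(lattice) - 1, 0, -1):  # exclusion hops
            if lattice[i] == 1 and lattice[i - 1] == 0 and rng.random() < 0.5:
                lattice[i], lattice[i - 1] = 0, 1
        for i in range(1, len(lattice) - 1):  # bulk Langmuir kinetics
            if lattice[i] == 0 and rng.random() < omega_a:
                lattice[i] = 1
            elif lattice[i] == 1 and rng.random() < omega_d:
                lattice[i] = 0
        while lattice and lattice[-1] == 0:
            lattice.pop()  # the system ends at the last customer
    return lattice
```

Sweeping `alpha` and `beta` in such a simulation is one way to see the transition between a queue whose length stays finite and one that grows without bound.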

  5. Interference effect between neutron direct and resonance capture reactions for neutron-rich nuclei

    NASA Astrophysics Data System (ADS)

    Minato, Futoshi; Fukui, Tokuro

    2017-11-01

    The interference effect between the compound and direct processes in the neutron capture cross section is investigated. The compound process is calculated from resonance parameters and the direct process with the potential model. The interference effect is tested for the neutron-rich nuclei 82Ge and 134Sn, which are relevant to the r-process, and for the light nucleus 13C, which is a neutron poison in the s-process and produces the long-lived radioactive nucleus 14C (T1/2 = 5700 y). The interference effects in those nuclei are significant around resonances, and in the low-energy region if s-wave direct neutron capture is possible. Maxwellian-averaged cross sections at kT = 30 and 300 keV are also calculated; the interference effect changes the Maxwellian-averaged capture cross section considerably, depending on the resonance position.

  6. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoxu

    2018-04-01

    In view of the various orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thereby optimizing warehouse logistics and improving order processing speed. In addition, relevant intelligent algorithms for optimizing the stocking-route problem are analyzed. The ant colony algorithm and the genetic algorithm, both of which are well suited to this problem, are studied in detail. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path-optimization problem model is taken as an example to demonstrate the effectiveness of the parameter optimization.
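The hybrid scheme described above can be sketched as a genetic algorithm searching over the ant colony algorithm's parameters (pheromone weight alpha, heuristic weight beta, evaporation rate rho), with a short ACO run on a toy TSP instance serving as the fitness function. The instance, parameter bounds, and GA operators below are illustrative assumptions, not the paper's setup:

```python
import math
import random

CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 1), (7, 8), (3, 7)]
N = len(CITIES)
DIST = [[math.dist(a, b) for b in CITIES] for a in CITIES]

def aco_tour_length(alpha, beta, rho, n_ants=8, n_iters=30, seed=0):
    """Short ant-colony run; returns the best tour length found. alpha
    weights pheromone, beta weights the heuristic (1/distance), rho is
    the evaporation rate."""
    rng = random.Random(seed)
    tau = [[1.0] * N for _ in range(N)]
    best = float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(N)]
            while len(tour) < N:
                i = tour[-1]
                cand = [j for j in range(N) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / DIST[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(DIST[tour[k]][tour[(k + 1) % N]] for k in range(N))
            tours.append((length, tour))
            best = min(best, length)
        for row in tau:  # evaporation
            for j in range(N):
                row[j] *= 1.0 - rho
        length, tour = min(tours)  # best ant deposits pheromone
        for k in range(N):
            i, j = tour[k], tour[(k + 1) % N]
            tau[i][j] += 1.0 / length
            tau[j][i] += 1.0 / length
    return best

def ga_tune(pop_size=6, generations=5, seed=1):
    """Genetic algorithm over ACO parameters (alpha, beta, rho):
    truncation selection plus blend crossover with Gaussian mutation."""
    rng = random.Random(seed)
    bounds = [(0.1, 4.0), (0.1, 6.0), (0.05, 0.95)]
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: aco_tour_length(*p))
        parents = scored[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append(tuple(
                min(hi, max(lo, (x + y) / 2 + rng.gauss(0.0, 0.1)))
                for (x, y), (lo, hi) in zip(zip(a, b), bounds)))
        pop = parents + children
    return min(pop, key=lambda p: aco_tour_length(*p))
```

The key design point is the nesting: each GA fitness evaluation is a complete (if short) ACO run, so the GA population sizes and generation counts are kept deliberately small.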

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhman, Jonathan; Kozii, Vladyslav; Fu, Liang

    In this work, we study how an inversion-breaking quantum critical point affects the ground state of a one-dimensional electronic liquid with repulsive interaction and spin-orbit coupling. We find that regardless of the interaction strength, the critical fluctuations always lead to a gap in the electronic spin sector. The origin of the gap is a two-particle backscattering process, which becomes relevant due to renormalization of the Luttinger parameter near the critical point. The resulting spin-gapped state is topological and can be considered as a one-dimensional version of a spin-triplet superconductor. Interestingly, in the case of a ferromagnetic critical point, the Luttinger parameter is renormalized in the opposite manner, such that the system remains nonsuperconducting.

  8. Local operators in kinetic wealth distribution

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    2016-05-01

    The statistical mechanics approach to wealth distribution is based on the conservative kinetic multi-agent model for money exchange, where the local interaction rule between the agents is analogous to the elastic particle scattering process. Here, we discuss the role of a class of conservative local operators, and we show that, depending on the values of their parameters, they can be used to generate all the relevant distributions. We also show numerically that in order to generate the power-law tail, a heterogeneous risk-aversion model is required. By changing the parameters of these operators, one can also fine-tune the resulting distributions in order to provide support for the emergence of a more egalitarian wealth distribution.
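The elastic-scattering analogy behind the kinetic exchange model can be sketched with a standard conservative money-exchange simulation. The uniform saving propensity `lam` and all parameter values are illustrative; the paper's local operators and its heterogeneous risk-aversion model are more general:

```python
import random

def simulate_wealth(n_agents=1000, n_exchanges=200_000, lam=0.0, seed=0):
    """Conservative kinetic exchange: a random pair pools its non-saved
    money and splits it at a random fraction, in analogy with elastic
    two-particle scattering. Total wealth is conserved exactly. lam is
    the saving propensity (lam = 0 gives the Boltzmann-Gibbs exponential
    distribution in the standard model)."""
    rng = random.Random(seed)
    w = [1.0] * n_agents  # everyone starts with unit wealth
    for _ in range(n_exchanges):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam) * (w[i] + w[j])
        w[i], w[j] = lam * w[i] + eps * pool, lam * w[j] + (1.0 - eps) * pool
    return w
```

Each update redistributes `w[i] + w[j]` exactly, which is the conservation property the abstract's "conservative local operators" generalize; making `lam` differ across agents is what the paper links to power-law tails.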

  9. Quantum control of isomerization by robust navigation in the energy spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murgida, G. E., E-mail: murgida@tandar.cnea.gov.ar; Arranz, F. J., E-mail: fj.arranz@upm.es; Borondo, F., E-mail: f.borondo@uam.es

    2015-12-07

    In this paper, we present a detailed study on the application of the quantum control technique of navigation in the energy spectrum to chemical isomerization processes, namely, CN–Li⇆ Li–CN. This technique is based on the controlled time variation of a Hamiltonian parameter, an external uniform electric field in our case. The main result of our work establishes that the navigation involved in the method is robust, in the sense that quite sizable deviations from a pre-established control parameter time profile can be introduced and still yield good final results. This is especially relevant when considering an experimental implementation of the method.

  10. Modeling Single Well Injection-Withdrawal (SWIW) Tests for Characterization of Complex Fracture-Matrix Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotte, F.P.; Doughty, C.; Birkholzer, J.

    2010-11-01

    The ability to reliably predict flow and transport in fractured porous rock is an essential condition for performance evaluation of geologic (underground) nuclear waste repositories. In this report, a suite of programs (TRIPOLY code) for calculating and analyzing flow and transport in two-dimensional fracture-matrix systems is used to model single-well injection-withdrawal (SWIW) tracer tests. The SWIW test, a tracer test using one well, is proposed as a useful means of collecting data for site characterization, as well as estimating parameters relevant to tracer diffusion and sorption. After some specific code adaptations, we numerically generated a complex fracture-matrix system for computation of steady-state flow and tracer advection and dispersion in the fracture network, along with solute exchange processes between the fractures and the porous matrix. We then conducted simulations for a hypothetical but workable SWIW test design and completed parameter sensitivity studies on three physical parameters of the rock matrix - namely porosity, diffusion coefficient, and retardation coefficient - in order to investigate their impact on the fracture-matrix solute exchange process. Hydraulic fracturing, or hydrofracking, is also modeled in this study, in two different ways: (1) by increasing the hydraulic aperture for flow in existing fractures and (2) by adding a new set of fractures to the field. The results of all these different tests are analyzed by studying the population of matrix blocks, the tracer spatial distribution, and the breakthrough curves (BTCs) obtained, while performing mass-balance checks and being careful to avoid some numerical mistakes that could occur. 
This study clearly demonstrates the importance of matrix effects in the solute transport process, with the sensitivity studies illustrating the increased importance of the matrix in providing a retardation mechanism for radionuclides as matrix porosity, diffusion coefficient, or retardation coefficient increase. Interestingly, model results before and after hydrofracking are insensitive to adding more fractures, while slightly more sensitive to aperture increase, making SWIW tests a possible means of discriminating between these two potential hydrofracking effects. Finally, we investigate the possibility of inferring relevant information regarding the fracture-matrix system physical parameters from the BTCs obtained during SWIW testing.

  11. Selection, calibration, and validation of models of tumor growth.

    PubMed

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth. First, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether a model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments.
Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.

  12. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, was also a determining factor in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.

  13. Deoxynivalenol & Deoxynivalenol-3-Glucoside Mitigation through Bakery Production Strategies: Effective Experimental Design within Industrial Rusk-Making Technology

    PubMed Central

    Generotti, Silvia; Cirlini, Martina; Malachova, Alexandra; Sulyok, Michael; Berthiller, Franz; Dall’Asta, Chiara; Suman, Michele

    2015-01-01

    In the scientific field, there is growing awareness of the potential implications of food processing for mycotoxins, especially concerning thermal treatments, since high temperatures may cause transformation or degradation of these compounds. This work aims to study the fate of mycotoxins during bakery processing, focusing on deoxynivalenol (DON) and deoxynivalenol-3-glucoside (DON3Glc), along the chain of industrial rusk production. Starting from naturally contaminated bran, we studied how concentrations of DON and DON3Glc are influenced by modifying ingredients and operative conditions. The experiments were performed using statistical Design of Experiments (DoE) schemes to synergistically explore the relationship between mycotoxin reduction and the indicated processing parameters. All samples collected during pilot plant experiments were analyzed with an LC-MS/MS multimycotoxin method. The obtained model shows a good fit, giving back relevant information for optimization of the industrial production process, in particular suggesting that time and temperature in the baking and toasting steps are highly relevant for minimizing mycotoxin levels in rusks. A reduction of up to 30% in DON and DON3Glc content in the finished product was observed within an acceptable technological range. PMID:26213969

  14. Deoxynivalenol & Deoxynivalenol-3-Glucoside Mitigation through Bakery Production Strategies: Effective Experimental Design within Industrial Rusk-Making Technology.

    PubMed

    Generotti, Silvia; Cirlini, Martina; Malachova, Alexandra; Sulyok, Michael; Berthiller, Franz; Dall'Asta, Chiara; Suman, Michele

    2015-07-24

    In the scientific field, there is growing awareness of the potential implications of food processing for mycotoxins, especially concerning thermal treatments, since high temperatures may cause transformation or degradation of these compounds. This work aims to study the fate of mycotoxins during bakery processing, focusing on deoxynivalenol (DON) and deoxynivalenol-3-glucoside (DON3Glc), along the chain of industrial rusk production. Starting from naturally contaminated bran, we studied how concentrations of DON and DON3Glc are influenced by modifying ingredients and operative conditions. The experiments were performed using statistical Design of Experiments (DoE) schemes to synergistically explore the relationship between mycotoxin reduction and the indicated processing parameters. All samples collected during pilot plant experiments were analyzed with an LC-MS/MS multimycotoxin method. The obtained model shows a good fit, giving back relevant information for optimization of the industrial production process, in particular suggesting that time and temperature in the baking and toasting steps are highly relevant for minimizing mycotoxin levels in rusks. A reduction of up to 30% in DON and DON3Glc content in the finished product was observed within an acceptable technological range.

  15. Translational Rodent Paradigms to Investigate Neuromechanisms Underlying Behaviors Relevant to Amotivation and Altered Reward Processing in Schizophrenia

    PubMed Central

    Young, Jared W.; Markou, Athina

    2015-01-01

    Amotivation and reward-processing deficits have long been described in patients with schizophrenia and are considered major contributors to patients’ inability to integrate well in society. No effective treatments exist for these symptoms, partly because the neuromechanisms mediating them are poorly understood. Here, we propose a translational neuroscientific approach that can be used to assess reward/motivational deficits related to the negative symptoms of schizophrenia using behavioral paradigms that can also be conducted in experimental animals. By designing and using objective laboratory behavioral tools whose parameters are parallel in rodents and humans, the neuromechanisms underlying behaviors with relevance to these symptoms of schizophrenia can be investigated. We describe tasks that measure the motivation of rodents to expend physical and cognitive effort to gain rewards, as well as probabilistic learning tasks that assess both reward learning and feedback-based decision making. The latter tasks are relevant because performance deficits on them have been shown to correlate with negative symptoms in patients with schizophrenia. These tasks use operant techniques to investigate neural circuits targeting a specific domain across species, and therefore enable the development of insights into altered mechanisms leading to negative symptom-relevant behaviors in patients with schizophrenia. Such findings will in turn enable the development of treatments targeted at these altered neuromechanisms and behaviors. PMID:26194891

  16. The Design of PSB-VVER Experiments Relevant to Accident Management

    NASA Astrophysics Data System (ADS)

    Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander

    Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing the measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix up to the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of thermal losses to the environment. This issue particularly affects small-scale facilities and bears on the scaling approach related to the power and volume of the facility.

  17. Evolving Spiking Neural Networks for Recognition of Aged Voices.

    PubMed

    Silva, Marco; Vellasco, Marley M B R; Cataldo, Edson

    2017-01-01

    The aging of the voice, known as presbyphonia, is a natural process that can cause great change in the vocal quality of the individual. This is a relevant problem for people who use their voices professionally, and its early identification can help determine a suitable treatment to avoid its progress or even to eliminate the problem. This work focuses on the development of a new model for the identification of aged voices (independently of their chronological age), using as input attributes parameters extracted from the voice and glottal signals. The proposed model, named Quantum binary-real evolving Spiking Neural Network (QbrSNN), is based on spiking neural networks (SNNs) with an unsupervised training algorithm, and a Quantum-Inspired Evolutionary Algorithm that automatically determines the most relevant attributes and the optimal parameters that configure the SNN. The QbrSNN model was evaluated on a database of 120 records containing samples from three groups of speakers. The results obtained indicate that the proposed model provides better accuracy than other approaches, with fewer input attributes. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. A Range Finding Protocol to Support Design for Transcriptomics Experimentation: Examples of In-Vitro and In-Vivo Murine UV Exposure

    PubMed Central

    van Oostrom, Conny T.; Jonker, Martijs J.; de Jong, Mark; Dekker, Rob J.; Rauwerda, Han; Ensink, Wim A.; de Vries, Annemieke; Breit, Timo M.

    2014-01-01

    In transcriptomics research, design for experimentation by carefully considering biological, technological, practical and statistical aspects is very important, because the experimental design space is essentially limitless. Usually, the ranges of variable biological parameters of the design space are based on common practices and in turn on phenotypic endpoints. However, specific sub-cellular processes might only be partially reflected by phenotypic endpoints or outside the associated parameter range. Here, we provide a generic protocol for range finding in design for transcriptomics experimentation based on small-scale gene-expression experiments to help in the search for the right location in the design space by analyzing the activity of already known genes of relevant molecular mechanisms. Two examples illustrate the applicability: in-vitro UV-C exposure of mouse embryonic fibroblasts and in-vivo UV-B exposure of mouse skin. Our pragmatic approach is based on: framing a specific biological question and associated gene-set, performing a wide-ranged experiment without replication, eliminating potentially non-relevant genes, and determining the experimental ‘sweet spot’ by gene-set enrichment plus dose-response correlation analysis. Examination of many cellular processes that are related to UV response, such as DNA repair and cell-cycle arrest, revealed that basically each cellular (sub-) process is active at its own specific spot(s) in the experimental design space. Hence, the use of range finding, based on an affordable protocol like this, enables researchers to conveniently identify the ‘sweet spot’ for their cellular process of interest in an experimental design space and might have far-reaching implications for experimental standardization. PMID:24823911

  19. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying the measurement uncertainty of results obtained by k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result—the mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable so that it can be incorporated into other applications (e.g., a DLL or a WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
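
    As a sketch of the kind of combination such a tool automates (illustrative only, not ERON's actual formulae or values): independent relative standard uncertainties, weighted by dimensionless propagation factors, are combined in quadrature.

```python
import math

# Hypothetical input quantities: (relative standard uncertainty, propagation factor).
# Names and numbers are illustrative placeholders, not k0-NAA reference data.
inputs = {
    "k0":        (0.012, 1.00),
    "f":         (0.020, 0.35),
    "alpha":     (0.050, 0.10),
    "peak_area": (0.015, 1.00),
}

# u_rel(result)^2 = sum_i (c_i * u_rel(x_i))^2
u_combined = math.sqrt(sum((c * u) ** 2 for u, c in inputs.values()))
print(f"combined relative uncertainty: {u_combined:.4f}")
```

    The propagation factors play the role of the uncertainty propagation factors the program derives from the measurement equations.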

  20. Human factors and safety in emergency medicine

    NASA Technical Reports Server (NTRS)

    Schaefer, H. G.; Helmreich, R. L.; Scheidegger, D.

    1994-01-01

    A model based on an input process and outcome conceptualisation is suggested to address safety-relevant factors in emergency medicine. As shown in other dynamic and demanding environments, human factors play a decisive role in attaining high quality service. Attitudes held by health-care providers, organisational shells and work-cultural parameters determine communication, conflict resolution and workload distribution within and between teams. These factors should be taken into account to improve outcomes such as operational integrity, job satisfaction and morale.

  1. Simulation of plasma loading of high-pressure RF cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, K.; Samulyak, R.; Yonehara, K.

    2018-01-11

    Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have also been performed in the range of parameters typical for practical muon cooling channels.

  2. Using Poisson-gamma model to evaluate the duration of recruitment process when historical trials are available.

    PubMed

    Minois, Nathan; Lauwers-Cances, Valérie; Savy, Stéphanie; Attal, Michel; Andrieu, Sandrine; Anisimov, Vladimir; Savy, Nicolas

    2017-10-15

    At the design stage of a clinical trial, a question of paramount interest is how long it will take to recruit a given number of patients. Modelling the recruitment dynamics is the necessary step to answer this question. The Poisson-gamma model provides a convenient, flexible and realistic approach: it allows the trial duration to be predicted with very good accuracy using data collected at an interim time. A natural question arises: how can the parameters of the recruitment model be evaluated before the trial begins? The question is harder to handle as no recruitment data are available for the trial itself. However, if similar completed trials exist, it is appealing to use their data to investigate the feasibility of the recruitment process. In this paper, the authors explore the recruitment data of two similar clinical trials (Intergroupe Francais du Myélome 2005 and 2009). It is shown that the natural idea of plugging the historical rates estimated from the completed trial into the same centres of the new trial for predicting recruitment is not a relevant strategy. In contrast, using the parameters of a gamma distribution of the rates estimated from the completed trial in the recruitment dynamics model of the new trial provides reasonable predictive properties with relevant confidence intervals. Copyright © 2017 John Wiley & Sons, Ltd.
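
    The prediction scheme described above (gamma-distributed centre rates feeding a Poisson recruitment process) can be sketched in a few lines; all numerical values below are hypothetical, not taken from the IFM 2005/2009 trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gamma parameters for centre recruitment rates, as if
# estimated from a completed historical trial (shape, rate parametrization)
shape, rate = 2.0, 20.0      # mean rate = shape/rate = 0.1 patients/day/centre
n_centres = 50
target = 300                 # number of patients to recruit
n_sim = 2000

durations = []
for _ in range(n_sim):
    # Draw one recruitment rate per centre from the fitted gamma distribution
    lam = rng.gamma(shape, 1.0 / rate, size=n_centres)
    total_rate = lam.sum()
    # With constant rates the pooled process is Poisson(total_rate * t),
    # so the time to the target-th arrival is Gamma(target, scale=1/total_rate)
    durations.append(rng.gamma(target, 1.0 / total_rate))

print(f"median predicted duration: {np.median(durations):.0f} days")
```

    Plugging historical point estimates of the rates directly into the new trial's centres, instead of re-drawing them from the fitted gamma, would reproduce the naive strategy the paper finds unreliable.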

  3. Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.

    2009-01-01

    Within this paper, control-relevant vehicle design concepts are examined using a widely used 3 DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet-powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g. level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g. instability and right-half-plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined, addressing both static and dynamic considerations. The gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.

  4. Integrating retention soil filters into urban hydrologic models - Relevant processes and important parameters

    NASA Astrophysics Data System (ADS)

    Bachmann-Machnik, Anna; Meyer, Daniel; Waldhoff, Axel; Fuchs, Stephan; Dittmer, Ulrich

    2018-04-01

    Retention Soil Filters (RSFs), a form of vertical flow constructed wetlands specifically designed for combined sewer overflow (CSO) treatment, have proven to be an effective tool to mitigate negative impacts of CSOs on receiving water bodies. Long-term hydrologic simulations are used to predict the emissions from urban drainage systems during planning of stormwater management measures. So far no universally accepted model for RSF simulation exists. When simulating hydraulics and water quality in RSFs, an appropriate level of detail must be chosen for reasonable balancing between model complexity and model handling, considering the model input's level of uncertainty. The most crucial parameters determining the resultant uncertainties of the integrated sewer system and filter bed model were identified by evaluating a virtual drainage system with a Retention Soil Filter for CSO treatment. To determine reasonable parameter ranges for RSF simulations, data of 207 events from six full-scale RSF plants in Germany were analyzed. Data evaluation shows that even though different plants with varying loading and operation modes were examined, a simple model is sufficient to assess relevant suspended solids (SS), chemical oxygen demand (COD) and NH4 emissions from RSFs. Two conceptual RSF models with different degrees of complexity were assessed. These models were developed based on evaluation of data from full scale RSF plants and column experiments. Incorporated model processes are ammonium adsorption in the filter layer and degradation during subsequent dry weather period, filtration of SS and particulate COD (XCOD) to a constant background concentration and removal of solute COD (SCOD) by a constant removal rate during filter passage as well as sedimentation of SS and XCOD in the filter overflow. XCOD, SS and ammonium loads as well as ammonium concentration peaks are discharged primarily via RSF overflow not passing through the filter bed. 
Uncertainties of the integrated simulation of the sewer system and RSF model mainly originate from the model parameters of the hydrologic sewer system model.
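
    The simple conceptual filter-passage model described above can be sketched as follows; the background concentration and removal rate are illustrative placeholders, not values fitted to the six monitored plants:

```python
# Assumed illustrative parameters (mg/L and dimensionless fraction)
SS_BACKGROUND = 5.0     # constant background concentration after filtration
SCOD_REMOVAL = 0.4      # constant solute-COD removal rate per filter passage

def filter_passage(ss_in, xcod_in, scod_in):
    """Effluent concentrations (mg/L) after one pass through the filter bed."""
    ss_out = min(ss_in, SS_BACKGROUND)          # SS filtered to background
    xcod_out = min(xcod_in, SS_BACKGROUND)      # particulate COD treated like SS
    scod_out = scod_in * (1.0 - SCOD_REMOVAL)   # constant-rate SCOD removal
    return ss_out, xcod_out, scod_out

print(filter_passage(120.0, 80.0, 50.0))
```

    A full model would add ammonium adsorption with regeneration during dry-weather periods, sedimentation in the filter overflow, and the overflow pathway that bypasses the filter bed entirely.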

  5. Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.

    PubMed

    MacMartin, Douglas G; Tziperman, Eli

    2014-09-08

    Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
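
    A minimal sketch of the non-parametric estimate underlying this approach (the standard H1 estimator: cross-spectrum divided by input power spectrum) on synthetic data; the first-order lag system and all numbers are illustrative, not an ENSO model:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
n = 4096
x = rng.standard_normal(n)            # synthetic "input" time series

# Known first-order lag relating input to output, plus observation noise:
# y[k] = 0.9*y[k-1] + x[k], i.e. H(z) = 1 / (1 - 0.9 z^-1)
y = signal.lfilter([1.0], [1.0, -0.9], x) + 0.1 * rng.standard_normal(n)

# Transfer function estimate: H(f) = P_xy(f) / P_xx(f)
f, p_xx = signal.welch(x, nperseg=256)
_, p_xy = signal.csd(x, y, nperseg=256)
h_est = p_xy / p_xx

# The low-frequency gain should approach the DC gain 1/(1 - 0.9) = 10
print(abs(h_est[1]))
```

    Fitting a parametric model (e.g. delayed-oscillator equations) to such an estimated H(f) is then what allows process parameters, and compensating model errors, to be diagnosed.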

  6. The relationship between poor sleep and inhibitory functions indicated by event-related potentials.

    PubMed

    Breimhorst, Markus; Falkenstein, Michael; Marks, Anke; Griefahn, Barbara

    2008-06-01

    The present study focused on the relationship between normal variations in sleep and inhibitory functions as reflected in event-related potentials. To this end, one night of sleep from 21 healthy participants was analysed. After waking up, all participants completed a visual Go/Nogo task. On the basis of a sleep disturbance index (SDI), the participants were separated into 8 SDI-good and 13 SDI-poor sleepers using a cluster analysis. The results showed that the Nogo-N2 amplitude was smaller and the Nogo-P3 latency longer in SDI-poor sleepers. Moreover, the Go-P3 amplitude was smaller in SDI-poor sleepers. Performance parameters were not influenced by poor sleep. We conclude that poor sleep specifically affects the intensity of pre-motor inhibitory processes (Nogo-N2 amplitude), the speed of inhibiting a motor response (Nogo-P3 latency) and the intensity of task-relevant information processing (Go-P3 amplitude). Further studies should explore under which conditions such subliminal deficits also become relevant for overt behaviour.

  7. Charge transport in organic semiconductors.

    PubMed

    Bässler, Heinz; Köhler, Anna

    2012-01-01

    Modern optoelectronic devices, such as light-emitting diodes, field-effect transistors and organic solar cells require well controlled motion of charges for their efficient operation. The understanding of the processes that determine charge transport is therefore of paramount importance for designing materials with improved structure-property relationships. Before discussing different regimes of charge transport in organic semiconductors, we present a brief introduction into the conceptual framework in which we interpret the relevant photophysical processes. That is, we compare a molecular picture of electronic excitations against the Su-Schrieffer-Heeger semiconductor band model. After a brief description of experimental techniques needed to measure charge mobilities, we then elaborate on the parameters controlling charge transport in technologically relevant materials. Thus, we consider the influences of electronic coupling between molecular units, disorder, polaronic effects and space charge. A particular focus is given to the recent progress made in understanding charge transport on short time scales and short length scales. The mechanism for charge injection is briefly addressed towards the end of this chapter.

  8. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
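
    The second step (ranking calibrated candidate models by AIC) can be illustrated on a toy least-squares problem; the data and polynomial candidates below are illustrative, not the oilseed rape FSPM:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data: a quadratic response plus noise
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.05 * rng.standard_normal(x.size)

def aic(y_obs, y_fit, k):
    """AIC for least-squares fits: n*log(RSS/n) + 2k (additive constants dropped)."""
    n = y_obs.size
    rss = np.sum((y_obs - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

# Candidate models of increasing complexity (k = degree + 1 parameters)
scores = {}
for degree in (1, 2, 3):
    coef = np.polyfit(x, y, degree)
    scores[degree] = aic(y, np.polyval(coef, x), degree + 1)

best = min(scores, key=scores.get)   # lowest AIC wins
print(best)
```

    The underfitted linear candidate is penalised through its residuals, while the extra cubic parameter must buy enough fit improvement to justify its +2 penalty, which is the trade-off the paper exploits to keep recalibration small.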

  9. Dietary supplementation of tiger nut alters biochemical parameters relevant to erectile function in l-NAME treated rats.

    PubMed

    Olabiyi, Ayodeji A; Carvalho, Fabiano B; Bottari, Nathieli B; Lopes, Thauan F; da Costa, Pauline; Stefanelo, Naiara; Morsch, Vera M; Akindahunsi, Afolabi A; Oboh, Ganiyu; Schetinger, Maria Rosa

    2018-07-01

    Tiger nut tubers have reportedly been used for the treatment of erectile dysfunction (ED) in folk medicine without scientific basis. Hence, this study evaluated the effect of tiger nut on erectile dysfunction by assessing biochemical parameters relevant to ED in male rats treated with the nitric oxide synthase (NOS) inhibitor Nω-nitro-l-arginine methyl ester hydrochloride (l-NAME). Rats were divided into five groups (n = 10 each): control group; l-NAME plus basal diet; l-NAME plus sildenafil citrate; diet supplemented with processed tiger nut (20%) plus l-NAME; and diet supplemented with raw tiger nut (20%) plus l-NAME. l-NAME pre-treatment (40 mg/kg/day) lasted for 14 days. Arginase, acetylcholinesterase (AChE) and adenosine deaminase (ADA) activities, as well as nitric oxide (NO) levels, in serum, brain and penile tissue were measured. l-NAME increased the activities of arginase, AChE and ADA and reduced NO levels. However, dietary supplementation with tiger nut reduced the activities of these enzymes and up-regulated nitric oxide levels when compared to the control group. Supplementation with tiger nut thus appears to prevent alterations in the activities of enzymes relevant to erectile function. Quercetin was revealed to be the most active component of tiger nut tuber by HPLC fingerprinting. Copyright © 2018. Published by Elsevier Ltd.

  10. Determination of Littlest Higgs Model Parameters at the ILC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conley, John A.; Hewett, JoAnne; Le, My Phuong

    2005-07-27

    We examine the effects of the extended gauge sector of the Littlest Higgs model in high-energy e+e- collisions. We find that the search reach in e+e- → f f̄ at a √s = 500 GeV International Linear Collider covers essentially the entire parameter region where the Littlest Higgs model is relevant to the gauge hierarchy problem. In addition, we show that this channel provides an accurate determination of the fundamental model parameters, to a precision of a few percent, provided that the LHC measures the mass of the heavy neutral gauge field. Additionally, we show that the couplings of the extra gauge bosons to the light Higgs can be observed from the process e+e- → Zh for a significant region of the parameter space. This allows for confirmation of the structure of the cancellation of the Higgs mass quadratic divergence and would verify the little Higgs mechanism.

  11. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). 
In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  12. Development of numerical processing in children with typical and dyscalculic arithmetic skills—a longitudinal study

    PubMed Central

    Landerl, Karin

    2013-01-01

    Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge of the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period, from the beginning of Grade 2 (when children were 7;6 years old) to the beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing, while within-task effects remained largely constant and showed low long-term stability before the middle of Grade 3. Children with dyscalculia showed less efficient numerical processing, reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an untypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for identification of children who struggle in their numerical development. PMID:23898310

  13. FAST TRACK COMMUNICATION: Attosecond correlation dynamics during electron tunnelling from molecules

    NASA Astrophysics Data System (ADS)

    Walters, Zachary B.; Smirnova, Olga

    2010-08-01

    In this communication, we present an analytical theory of strong-field ionization of molecules, which takes into account the rearrangement of multiple interacting electrons during the ionization process. We show that such rearrangement offers an alternative pathway to the ionization of orbitals more deeply bound than the highest occupied molecular orbital. This pathway is not subject to the full exponential suppression characteristic of direct tunnel ionization from the deeper orbitals. The departing electron produces an 'attosecond correlation pulse' which controls the rearrangement during the tunnelling process. The shape and duration of this pulse are determined by the electronic structure of the relevant states, molecular orientation and laser parameters.

  14. Nanoshells for photothermal therapy: a Monte-Carlo based numerical study of their design tolerance

    PubMed Central

    Grosges, Thomas; Barchiesi, Dominique; Kessentini, Sameh; Gréhan, Gérard; de la Chapelle, Marc Lamy

    2011-01-01

    The optimization of coated metallic nanoparticles and nanoshells is a current challenge for biological applications, especially cancer photothermal therapy, given both the continuous improvement of their fabrication and the increasing requirements on efficiency. The efficiency of the coupling between the illumination and such nanostructures for burning purposes depends unevenly on their geometrical parameters (radius, thickness of the shell) and material parameters (permittivities, which depend on the illumination wavelength). Through a Monte-Carlo method, we propose a numerical study of such nanodevices to evaluate tolerances (or uncertainties) on these parameters, given a threshold of efficiency, to facilitate the design of nanoparticles. The results could help to focus on the relevant parameters of the engineering process on which the absorbed energy is most dependent. The Monte-Carlo method confirms that the best burning efficiency is obtained for hollow nanospheres and exhibits the sensitivity of the absorbed electromagnetic energy as a function of each parameter. The proposed method is general and could be applied to the design and development of new embedded coated nanomaterials used in biomedical applications. PMID:21698021
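    The tolerance evaluation described above can be sketched with a small Monte-Carlo experiment. The Gaussian efficiency surface, the nominal 50 nm radius and 5 nm shell thickness, and the fabrication spreads below are illustrative assumptions, not values from the paper (which computes the absorbed energy from the electromagnetic model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the absorbed-energy efficiency of a hollow
    # nanoshell; the real quantity comes from Mie-type calculations. Peak
    # efficiency at a nominal radius of 50 nm and shell thickness of 5 nm.
    def efficiency(radius_nm, shell_nm):
        return np.exp(-((radius_nm - 50.0) / 8.0) ** 2
                      - ((shell_nm - 5.0) / 1.5) ** 2)

    def tolerance(threshold, sigma_r, sigma_t, n=100_000):
        """Fraction of fabricated particles whose efficiency stays above
        `threshold`, given Gaussian fabrication spread on each parameter."""
        r = rng.normal(50.0, sigma_r, n)   # radius spread (nm)
        t = rng.normal(5.0, sigma_t, n)    # shell-thickness spread (nm)
        return np.mean(efficiency(r, t) >= threshold)

    # Tighter thickness control matters more than radius control here,
    # illustrating the uneven parameter sensitivity noted in the abstract.
    print(tolerance(0.8, sigma_r=2.0, sigma_t=0.2))
    print(tolerance(0.8, sigma_r=2.0, sigma_t=1.0))
    ```

    Re-running with different per-parameter spreads maps out which fabrication tolerances the target efficiency can absorb.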

  15. Classifying the Basic Parameters of Ultraviolet Copper Bromide Laser

    NASA Astrophysics Data System (ADS)

    Gocheva-Ilieva, S. G.; Iliev, I. P.; Temelkov, K. A.; Vuchkov, N. K.; Sabotinov, N. V.

    2009-10-01

    The performance of deep ultraviolet copper bromide lasers is of great importance because of their applications in medicine, microbiology, high-precision processing of new materials, high-resolution laser lithography in microelectronics, high-density optical recording of information, laser-induced fluorescence in plasma and wide-gap semiconductors, and more. In this paper we present a statistical study on the classification of 12 basic lasing parameters, using different agglomerative methods of cluster analysis. The results are based on a large body of experimental data for the UV Cu+ Ne-CuBr laser with wavelengths of 248.6 nm, 252.9 nm, 260.0 nm and 270.3 nm, obtained at the Georgi Nadjakov Institute of Solid State Physics, Bulgarian Academy of Sciences. The relative influence of the parameters on laser generation is also evaluated. The results are applicable to computer modeling, planning of experiments, and further laser development with improved output characteristics.
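    The agglomerative classification of lasing parameters can be illustrated as follows. The data matrix is synthetic (the experimental records are not public), and the correlation-based dissimilarity and average linkage are assumptions standing in for the several agglomerative methods the paper compares:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(1)

    # Hypothetical records: rows are laser runs, columns are 12 lasing
    # parameters. The first four columns get a shared shift in half the runs,
    # creating a built-in group of co-varying parameters for illustration.
    X = rng.normal(size=(40, 12))
    X[:20, :4] += 3.0

    # Cluster the 12 parameters themselves (columns), using 1 - |correlation|
    # as the dissimilarity, so strongly co-varying parameters group together.
    D = 1.0 - np.abs(np.corrcoef(X.T))
    np.fill_diagonal(D, 0.0)

    link = linkage(squareform(D, checks=False), method='average')
    labels = fcluster(link, t=3, criterion='maxclust')
    print(labels)   # parameters 0-3 land in one cluster
    ```

    Swapping `method='average'` for `'ward'`, `'complete'`, etc. reproduces the comparison of agglomerative schemes.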

  16. Accuracy in planar cutting of bones: an ISO-based evaluation.

    PubMed

    Cartiaux, Olivier; Paul, Laurent; Docquier, Pierre-Louis; Francq, Bernard G; Raucent, Benoît; Dombre, Etienne; Banse, Xavier

    2009-03-01

    Computer- and robot-assisted technologies are capable of improving the accuracy of planar cutting in orthopaedic surgery. This study is a first step toward formulating and validating a new evaluation methodology for planar bone cutting, based on the standards from the International Organization for Standardization. Our experimental test bed consisted of a purely geometrical model of the cutting process around a simulated bone. Cuts were performed at three levels of surgical assistance: unassisted, computer-assisted and robot-assisted. We measured three parameters of the standard ISO1101:2004: flatness, parallelism and location of the cut plane. The location was the most relevant parameter for assessing cutting errors. The three levels of assistance were easily distinguished using the location parameter. Our ISO methodology employs the location to obtain all information about translational and rotational cutting errors. Location may be used on any osseous structure to compare the performance of existing assistance technologies.
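    A simplified reading of the ISO 1101:2004 parameters can be computed from a measured point cloud of the cut surface. The tilt, offset, and noise values below are hypothetical, and the location definition used here (twice the largest deviation from the planned plane, i.e. a symmetric tolerance zone) is one common interpretation, not necessarily the paper's exact formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def plane_fit(points):
        """Least-squares plane through 3-D points: unit normal and centroid."""
        c = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - c)
        return vt[-1], c

    def flatness(points):
        """Width of the slab about the least-squares plane containing all
        points: captures surface roughness only."""
        n, c = plane_fit(points)
        d = (points - c) @ n
        return d.max() - d.min()

    def location(points, target_normal, target_point):
        """Twice the largest distance of any point from the planned cut
        plane: captures translational and angular error together."""
        d = (points - target_point) @ target_normal
        return 2.0 * np.max(np.abs(d))

    # Hypothetical measured points on a cut that is tilted ~1 degree and
    # offset 0.5 mm from the planned plane z = 0 (units: mm).
    xy = rng.uniform(-20, 20, size=(200, 2))
    z = 0.5 + np.tan(np.radians(1.0)) * xy[:, 0] + rng.normal(0, 0.05, 200)
    pts = np.column_stack([xy, z])

    print(flatness(pts))                                      # roughness only
    print(location(pts, np.array([0., 0., 1.]), np.zeros(3))) # offset + tilt
    ```

    The location value dwarfs the flatness value here, mirroring the paper's finding that location is the discriminating parameter for cutting errors.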

  17. Automatic detection of slight parameter changes associated to complex biomedical signals using multiresolution q-entropy.

    PubMed

    Torres, M E; Añino, M M; Schlotthauer, G

    2003-12-01

    It is well known that, from a dynamical point of view, sudden variations in the physiological parameters that govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. The tool exploits the ability of multiresolution entropies to reveal such changes as statistical variations at each scale; these variations are captured in the corresponding principal component. Appropriately combining these techniques with a statistical change detector, a complexity-change detection algorithm is obtained. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations, and the automatic detector is applied to real and simulated biological signals.
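    The idea of tracking entropy across scales to localize a parameter change can be sketched as follows. This is a plain coarse-graining stand-in for the paper's multiresolution entropies, and the logistic-map signal and the size of the parameter step are illustrative (a change large enough to see directly, rather than the "slight" changes the paper targets):

    ```python
    import numpy as np

    def window_entropy(x, bins=16):
        """Shannon entropy of the value histogram of one window."""
        p, _ = np.histogram(x, bins=bins)
        p = p[p > 0] / len(x)
        return -np.sum(p * np.log(p))

    def multiresolution_entropy(signal, window=256, scales=(1, 2, 4)):
        """Entropy profile of successive windows at several coarse-graining
        scales; rows are scales, columns are time windows."""
        rows = []
        for s in scales:
            coarse = signal[:len(signal) // s * s].reshape(-1, s).mean(axis=1)
            w = window // s
            rows.append([window_entropy(coarse[i:i + w])
                         for i in range(0, len(coarse) - w + 1, w)])
        return np.array(rows)

    # Logistic map whose parameter r changes halfway through: a change in
    # the law governing the nonlinear dynamics of the signal.
    def logistic(r, n, x0=0.3):
        x = np.empty(n)
        x[0] = x0
        for i in range(1, n):
            x[i] = r * x[i - 1] * (1 - x[i - 1])
        return x

    sig = np.concatenate([logistic(3.5, 4096),   # periodic regime
                          logistic(3.9, 4096)])  # chaotic regime
    E = multiresolution_entropy(sig)
    print(E[0])   # finest-scale entropy jumps at the change point (window 16)
    ```

    In the paper the per-scale profiles feed a principal-component step and a statistical change detector; here the jump is visible in the raw entropy profile.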

  18. Sleep mechanisms: Sleep deprivation and detection of changing levels of consciousness

    NASA Technical Reports Server (NTRS)

    Dement, W. C.; Barchas, J. D.

    1972-01-01

    An attempt was made to obtain information relevant to assessing the need to sleep and to make up for lost sleep. Physiological and behavioral parameters were used as measures. Sleep deprivation in a restricted environment, derivation of data relevant to determining sleepiness from the EEG, and the development of the Stanford Sleepiness Scale were discussed.

  19. Permafrost on Mars: distribution, formation, and geological role

    NASA Technical Reports Server (NTRS)

    Nummedal, D.

    1984-01-01

    The morphology of channels, valleys, chaotic and fretted terrains and many smaller features on Mars is consistent with the hypothesis that localized deterioration of thick layers of ice-rich permafrost was a dominant geologic process on the Martian surface. Such ground ice deterioration gave rise to large-scale mass movement, including sliding, slumping and sediment gravity flowage, and perhaps also catastrophic floods. In contrast to Earth, such mass movement processes on Mars lack effective competition from erosion by surface runoff. Therefore, Martian features due to mass movement grew to immense size without being greatly modified by secondary erosional processes. The Viking Mission to Mars in 1976 provided adequate measurements of the relevant physical parameters to constrain models for Martian permafrost.

  20. Numerical Simulation of Aerogasdynamics Processes in A Longwall Panel for Estimation of Spontaneous Combustion Hazards

    NASA Astrophysics Data System (ADS)

    Meshkov, Sergey; Sidorenko, Andrey

    2017-11-01

    The relevance of solving the problem of endogenous fire safety in coal seams liable to self-ignition is shown. The possibilities of numerical methods for studying gas-dynamic processes are considered. Methodical approaches to creating models and carrying out numerical studies of aerogasdynamic processes in longwall panels of gassy mines are analysed. Parameters of the gob for longwall mining are considered. The significant influence of geological and mining conditions on the distribution of air streams in longwall panels and on effective management of gas emission is shown. An aerogasdynamic model of longwall panels for further research into the influence of ventilation parameters and gob properties is presented. Results of numerical studies are given, including the distribution of air streams and the concentration fields of methane and oxygen under various ventilation schemes, for the conditions of prospective mines of the Pechora basin and Kuzbass. Recommendations for increasing the efficiency of mining coal seams liable to self-ignition are made. Directions for further research are defined.

  1. The application of a multi-parameter analysis in choosing the location of a new solid waste landfill in Serbia.

    PubMed

    Milosevic, Igor; Naunovic, Zorana

    2013-10-01

    This article presents a process of evaluation and selection of the most favourable location for a sanitary landfill facility from three alternative locations, by applying a multi-criteria decision-making (MCDM) method. An incorrect choice of location for a landfill facility can have a significant negative economic and environmental impact, such as the pollution of air, ground and surface waters. The aim of this article is to present several improvements in the practical process of landfill site selection using the VIKOR MCDM compromise ranking method integrated with a fuzzy analytic hierarchy process approach for determining the evaluation criteria weighting coefficients. The VIKOR method focuses on ranking and selecting from a set of alternatives in the presence of conflicting and non-commensurable (different units) criteria, and on proposing a compromise solution that is closest to the ideal solution. The work shows that valuable site ranking lists can be obtained using the VIKOR method, which is a suitable choice when there is a large number of relevant input parameters.
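    The VIKOR ranking step can be sketched directly from its defining quantities: group utility S, individual regret R, and the compromise index Q. The three candidate sites, four criteria, and weights below are hypothetical stand-ins, not the Serbian case-study data:

    ```python
    import numpy as np

    def vikor(F, weights, benefit, v=0.5):
        """Rank alternatives with the VIKOR compromise method.

        F       : (m alternatives x n criteria) performance matrix
        weights : criteria weights summing to 1 (e.g. from fuzzy AHP)
        benefit : boolean mask, True where larger values are better
        v       : weight of the 'majority' strategy (0.5 = consensus)
        Returns S, R, Q; the best alternative minimises Q.
        """
        F = np.asarray(F, float)
        f_best = np.where(benefit, F.max(axis=0), F.min(axis=0))
        f_worst = np.where(benefit, F.min(axis=0), F.max(axis=0))
        d = weights * (f_best - F) / (f_best - f_worst)  # normalised distances
        S = d.sum(axis=1)   # group utility
        R = d.max(axis=1)   # individual regret
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))
        return S, R, Q

    # Three hypothetical candidate sites scored on four criteria: distance to
    # groundwater [benefit], haul cost [cost], capacity [benefit], distance
    # to settlements [benefit] -- illustrative numbers only.
    F = [[2.0, 120.0, 8.0, 1.5],
         [3.5,  90.0, 6.0, 2.0],
         [1.0, 150.0, 9.0, 3.0]]
    w = np.array([0.3, 0.3, 0.2, 0.2])
    benefit = np.array([True, False, True, True])

    S, R, Q = vikor(F, w, benefit)
    print(np.argsort(Q))   # ranking, best (lowest Q) first
    ```

    Note the normalisation `(f_best - F) / (f_best - f_worst)` handles both benefit and cost criteria, since the denominator's sign flips with the criterion direction.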

  2. Improving Image Drizzling in the HST Archive: Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Hoffmann, Samantha L.; Avila, Roberto J.

    2017-06-01

    The Mikulski Archive for Space Telescopes (MAST) pipeline performs geometric distortion corrections, associated image combinations, and cosmic ray rejections with AstroDrizzle on Hubble Space Telescope (HST) data. The MDRIZTAB reference table contains a list of relevant parameters that controls this program. This document details our photometric analysis of Advanced Camera for Surveys Wide Field Channel (ACS/WFC) data processed by AstroDrizzle. Based on this analysis, we update the MDRIZTAB table to improve the quality of the drizzled products delivered by MAST.

  3. Langmuir turbulence driven by beams in solar wind plasmas with long wavelength density fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krafft, C., E-mail: catherine.krafft@u-psud.fr; Université Paris-Sud, 91405 Orsay Cedex; Volokitin, A., E-mail: a.volokitin@mail.ru

    2016-03-25

    The self-consistent evolution of Langmuir turbulence generated by electron beams in solar wind plasmas with density inhomogeneities is calculated by numerical simulations based on a 1D Hamiltonian model. It is shown, owing to numerical simulations performed with parameters relevant to type III solar bursts’ conditions at 1 AU, that the presence of long-wavelength random density fluctuations of sufficiently large average level crucially modifies the well-known process of beam interaction with Langmuir waves in homogeneous plasmas.

  4. Quantum Speed Limit of a Photon under Non-Markovian Dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Zhen-Yu; Zhu, Shi-Qun

    2014-02-01

    Quantum speed limit (QSL) time under noise has drawn considerable attention in real quantum computational processes. Though non-Markovian noise has been found to accelerate quantum evolution for a damped Jaynes-Cummings model, in this work we show that non-Markovianity will slow down the quantum evolution of an experimentally controllable photon system. As an application, the QSL time of a photon can be controlled by properly regulating the relevant environment parameter, which is within reach of currently available photonic experimental technology.

  5. Microeconomics of 300-mm process module control

    NASA Astrophysics Data System (ADS)

    Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.

    2001-08-01

    Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130 nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.
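    The kind of direct metrology-yield-profit link described above can be illustrated with a one-line break-even calculation. All numbers below are assumed for illustration and are not from the paper:

    ```python
    def breakeven_yield_gain(metrology_cost_per_month, wafer_starts_per_month,
                             dice_per_wafer, price_per_die):
        """Smallest yield improvement (in percentage points) at which a
        metrology step pays for itself each month."""
        revenue_per_point = (wafer_starts_per_month * dice_per_wafer
                             * price_per_die / 100.0)
        return metrology_cost_per_month / revenue_per_point

    # Illustrative 300 mm numbers (assumed): 20k wafer starts/month,
    # 600 dice/wafer, $20 per die, $500k/month total metrology cost.
    print(breakeven_yield_gain(500_000, 20_000, 600, 20.0))
    ```

    At these assumed volumes, a yield gain of roughly a fifth of a percentage point already justifies the metrology spend, which is why sampling rates scale with wafer value in such models.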

  6. Improving Information Exchange in the Chicken Processing Sector Using Standardised Data Lists

    NASA Astrophysics Data System (ADS)

    Donnelly, Kathryn Anne-Marie; van der Roest, Joop; Höskuldsson, Stefán Torfi; Olsen, Petter; Karlsen, Kine Mari

    Research has shown that to improve electronic communication between companies, universal standardised data lists are necessary. In food supply chains in particular there is an increased need to exchange data in the wake of food safety incidents. Food supply chain companies already record numerous measurements, properties and parameters. These records are necessary for legal reasons, labelling, traceability, profiling desirable characteristics, showing compliance and for meeting customer requirements. Universal standards for name and content of each of these data elements would improve information exchange between buyers, sellers, authorities, consumers and other interested parties. A case study, carried out for the chicken sector, attempted to identify the most relevant parameters including which of these were already communicated to external bodies.

  7. Edge instability in incompressible planar active fluids

    NASA Astrophysics Data System (ADS)

    Nesbitt, David; Pruessner, Gunnar; Lee, Chiu Fan

    2017-12-01

    Interfacial instability is highly relevant to many important biological processes. A key example arises in wound healing experiments, which observe that an epithelial layer with an initially straight edge does not heal uniformly. We consider the phenomenon in the context of active fluids. Improving upon the approximation used by Zimmermann, Basan, and Levine [Eur. Phys. J.: Spec. Top. 223, 1259 (2014), 10.1140/epjst/e2014-02189-7], we perform a linear stability analysis on a two-dimensional incompressible hydrodynamic model of an active fluid with an open interface. We categorize the stability of the model and find that for experimentally relevant parameters, fingering instability is always absent in this minimal model. Our results point to the crucial role of density variation in the fingering instability in tissue regeneration.

  8. Optimized model tuning in medical systems.

    PubMed

    Kléma, Jirí; Kubalík, Jirí; Lhotská, Lenka

    2005-12-01

    In medical systems it is often advantageous to utilize specific problem situations (cases) in addition to or instead of a general model. Decisions are then based on relevant past cases retrieved from a case memory. The reliability of such decisions depends directly on the ability to identify cases of practical relevance to the current situation. This paper discusses issues of automated tuning in order to obtain a proper definition of mutual case similarity in a specific medical domain. The main focus is on a reasonably time-consuming optimization of the parameters that determine case retrieval and further utilization in decision making/prediction. The two case studies - mortality prediction after cardiological intervention, and resource allocation at a spa - document that the optimization process is influenced by various characteristics of the problem domain.

  9. Kinetic modeling in PET imaging of hypoxia

    PubMed Central

    Li, Fan; Joergensen, Jesper T; Hansen, Anders E; Kjaer, Andreas

    2014-01-01

    Tumor hypoxia is associated with increased therapeutic resistance leading to poor treatment outcome. Therefore the ability to detect and quantify intratumoral oxygenation could play an important role in future individual personalized treatment strategies. Positron Emission Tomography (PET) can be used for non-invasive mapping of tissue oxygenation in vivo, and several hypoxia-specific PET tracers have been developed. Evaluation of PET data in the clinic is commonly based on visual assessment together with semiquantitative measurements, e.g. the standard uptake value (SUV). However, dynamic PET contains additional valuable information on the temporal changes in tracer distribution. Kinetic modeling can be used to extract relevant pharmacokinetic parameters of tracer behavior in vivo that reflect relevant physiological processes. In this paper, we review the potential contribution of kinetic analysis to PET imaging of hypoxia. PMID:25250200
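    As a minimal sketch of what kinetic modeling adds over a static SUV read-out, the following fits a one-tissue compartment model to a simulated time-activity curve. The plasma input function, rate constants, and grid-search fit are illustrative assumptions; hypoxia tracers are typically analysed with richer (e.g. two-tissue) models:

    ```python
    import numpy as np

    # One-tissue compartment model: dC_t/dt = K1*C_p(t) - k2*C_t(t).
    # K1 (influx) and k2 (efflux) are the kinetic parameters that a single
    # static SUV measurement cannot separate.

    def tissue_curve(K1, k2, Cp, dt):
        """Forward-Euler solution of the compartment ODE on a fixed grid."""
        Ct = np.zeros_like(Cp)
        for i in range(1, len(Cp)):
            Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])
        return Ct

    t = np.arange(0, 60, 0.1)          # minutes
    Cp = 10 * t * np.exp(-t / 4)       # assumed plasma input function
    Ct = tissue_curve(K1=0.3, k2=0.1, Cp=Cp, dt=0.1)

    # Recover (K1, k2) from noisy "measured" data by grid search.
    rng = np.random.default_rng(4)
    meas = Ct + rng.normal(0, 0.2, len(Ct))
    best = min(((K1, k2) for K1 in np.arange(0.1, 0.6, 0.05)
                         for k2 in np.arange(0.02, 0.3, 0.02)),
               key=lambda p: np.sum((tissue_curve(*p, Cp, 0.1) - meas) ** 2))
    print(best)   # close to the true (0.3, 0.1)
    ```

    In practice the fit would use nonlinear least squares and a measured arterial or image-derived input function; the grid search here only keeps the sketch dependency-free.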

  10. Process characterization and Design Space definition.

    PubMed

    Hakemeyer, Christian; McKnight, Nathan; St John, Rick; Meier, Steven; Trexler-Schmidt, Melody; Kelley, Brian; Zettl, Frank; Puskeiler, Robert; Kleinjans, Annika; Lim, Fred; Wurth, Christine

    2016-09-01

    Quality by design (QbD) is a global regulatory initiative with the goal of enhancing pharmaceutical development through the proactive design of pharmaceutical manufacturing processes and controls to consistently deliver the intended performance of the product. The principles of pharmaceutical development relevant to QbD are described in the ICH guidance documents (ICH Q8-11). An integrated set of risk assessments and their related elements developed at Roche/Genentech were designed to provide an overview of product and process knowledge for the production of a recombinant monoclonal antibody (MAb). This chapter describes the tools used for the characterization and validation of a MAb manufacturing process under the QbD paradigm. This comprises risk assessments for the identification of potential Critical Process Parameters (pCPPs), statistically designed experimental studies, as well as studies assessing the linkage of the unit operations. The outcome of the studies is the classification of process parameters according to their criticality and the definition of appropriate acceptable ranges of operation. The process and product knowledge gained in these studies can lead to the approval of a Design Space. Additionally, the information gained in these studies is used to define the 'impact' which the manufacturing process can have on the variability of the CQAs, which in turn defines the testing and monitoring strategy. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  11. Magneto-thermal reconnection of significance to space and astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coppi, B., E-mail: coppi@psfc.mit.edu

    Magnetic reconnection processes that can be excited in collisionless plasma regimes are of interest to space and astrophysics to the extent that the layers in which reconnection takes place are not rendered unrealistically small by their unfavorable dependence on relevant macroscopic distances. The equations describing new modes producing magnetic reconnection over relatively small but significant distances, unlike tearing types of mode, even when dealing with large macroscopic scale lengths, are given. The considered modes are associated with a finite electron temperature gradient and have a phase velocity in the direction of the electron diamagnetic velocity that can reverse to the opposite direction as relevant parameters are varied over a relatively wide range. The electron temperature perturbation has a primary role in the relevant theory. In particular, when referring to regimes in which the longitudinal (to the magnetic field) electron thermal conductivity is relatively large, the electron temperature perturbation becomes singular if the ratio of the transverse to the longitudinal electron thermal conductivity becomes negligible.

  12. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanic compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. 
We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type and location of the data.
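    The PCE surrogate of step (ii) and the coefficient-based Sobol indices can be sketched for a toy two-parameter model. The "compaction" function, polynomial degree, and sample count below are invented stand-ins for the expensive forward simulator; the point is that, for uniform inputs, the main-effect Sobol indices fall directly out of the squared PCE coefficients:

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(5)

    # Toy stand-in for the forward simulator: an output (e.g. porosity) as a
    # function of two uncertain inputs scaled to [-1, 1].
    def model(x1, x2):
        return 0.40 * np.exp(-(1.0 + 0.8 * x1)) + 0.05 * x2 + 0.02 * x1 * x2

    deg = 3
    x = rng.uniform(-1, 1, size=(200, 2))
    y = model(x[:, 0], x[:, 1])

    # Tensor basis of Legendre polynomials P_i(x1)P_j(x2), total degree <= deg.
    idx = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
    def basis(pts):
        return np.column_stack([
            legendre.legval(pts[:, 0], np.eye(deg + 1)[i])
            * legendre.legval(pts[:, 1], np.eye(deg + 1)[j])
            for i, j in idx])

    coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)  # surrogate by regression

    # For inputs uniform on [-1, 1], E[(P_i P_j)^2] = 1/((2i+1)(2j+1)), so the
    # surrogate variance splits over the multi-indices: Sobol indices for free.
    norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in idx])
    var_terms = coef ** 2 * norms
    total_var = var_terms[1:].sum()          # drop the (0,0) mean term
    S1 = sum(v for (i, j), v in zip(idx, var_terms) if i > 0 and j == 0) / total_var
    S2 = sum(v for (i, j), v in zip(idx, var_terms) if j > 0 and i == 0) / total_var
    print(S1, S2)   # the first input dominates the output variance
    ```

    Once fitted, the surrogate replaces the simulator inside the maximum-likelihood inversion loop at negligible cost per evaluation.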

  13. Phosphatidylcholine Membrane Fusion Is pH-Dependent.

    PubMed

    Akimov, Sergey A; Polynkin, Michael A; Jiménez-Munguía, Irene; Pavlov, Konstantin V; Batishchev, Oleg V

    2018-05-03

    Membrane fusion mediates multiple vital processes in cell life. Specialized proteins mediate the fusion process, and a substantial part of their energy is used for topological rearrangement of the membrane lipid matrix. Therefore, the elastic parameters of lipid bilayers are of crucial importance for fusion processes and for determination of the energy barriers that have to be crossed for the process to take place. In the case of fusion of enveloped viruses (e.g., influenza) with the endosomal membrane, the interacting membranes are in an acidic environment, which can affect the membrane's mechanical properties. This factor is often neglected in the analysis of virus-induced membrane fusion. In the present work, we demonstrate that even for membranes composed of zwitterionic lipids, changes of the environmental pH in the physiologically relevant range of 4.0 to 7.5 can notably affect the rate of membrane fusion. Using a continuum model, we demonstrated that the key factor defining the height of the energy barrier is the spontaneous curvature of the lipid monolayer. Changes of this parameter are likely to be caused by rearrangements of the polar part of lipid molecules in response to changes of the pH of the aqueous solution bathing the membrane.

  14. SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    2014-06-15

    Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data is a single snapshot of energy specific and overall systems parameters. A total of 36 system parameters were monitored, which include RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil, and positive 24V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems.
Research is supported by Varian Medical Systems, Inc.
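    The I/MR evaluation can be sketched as follows. The parameter stream, fault size, and plain 3σ limits (without the paper's specification-based hybrid limits) are illustrative assumptions; the SPC constants d2 = 1.128 and D4 = 3.267 for moving ranges of size 2 are standard:

    ```python
    import numpy as np

    def imr_limits(x):
        """Control limits for Individual (I) and Moving-Range (MR) charts,
        using the standard n=2 constants d2 = 1.128 and D4 = 3.267."""
        mr = np.abs(np.diff(x))          # moving range of successive points
        mr_bar = mr.mean()
        center = x.mean()
        sigma_hat = mr_bar / 1.128       # short-term sigma estimate
        i_lcl, i_ucl = center - 3 * sigma_hat, center + 3 * sigma_hat
        mr_ucl = 3.267 * mr_bar
        return (i_lcl, i_ucl), mr_ucl, mr

    rng = np.random.default_rng(6)
    # Hypothetical daily log value for one accelerator parameter (e.g. a gun
    # current reading); a synthetic step fault is injected at sample 60.
    x = rng.normal(100.0, 0.5, 80)
    x[60:] += 2.5

    (lcl, ucl), mr_ucl, mr = imr_limits(x[:50])   # limits from in-control data
    alarms = np.where((x > ucl) | (x < lcl))[0]
    print(alarms[:5])   # alarms cluster at/after the injected fault
    ```

    The I chart catches the sustained shift; the MR chart (not alarmed here after the first transition point) is sensitive to short-term variability changes instead, which is why the paper evaluates both.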

  15. Geostatistical characterisation of geothermal parameters for a thermal aquifer storage site in Germany

    NASA Astrophysics Data System (ADS)

    Rodrigo-Ilarri, J.; Li, T.; Grathwohl, P.; Blum, P.; Bayer, P.

    2009-04-01

    The design of geothermal systems such as aquifer thermal energy storage (ATES) systems must build on a comprehensive characterisation of all parameters relevant to the numerical design model. Hydraulic and thermal conductivities are the most relevant parameters, and their distribution determines not only the technical design but also the economic viability of such systems. Hence, knowledge of the spatial distribution of these parameters is essential for the successful design and operation of such systems. This work shows the first results obtained when applying geostatistical techniques to the characterisation of the Esseling Site in Germany, where a long-term thermal tracer test (> 1 year) was performed. In this open system, the spatial temperature distribution inside the aquifer was observed over time in order to obtain as much information as possible and thus a detailed characterisation of both the relevant hydraulic and thermal parameters. This poster shows the preliminary results obtained for the Esseling Site. It has been observed that the common homogeneous approach is not sufficient to explain the observations obtained from the TRT and that parameter heterogeneity must be taken into account.
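The abstract does not specify which geostatistical estimators were used; a typical first step in such a spatial characterisation is the empirical semivariogram of the measured parameter field. A minimal sketch (the transect, lag bins, and values below are hypothetical):

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Empirical semivariogram gamma(h): half the mean squared
    difference of values over all point pairs whose separation
    distance falls in each lag bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(values), k=1)          # all unordered pairs
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    half_sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(lag_edges) - 1, np.nan)
    for k in range(len(lag_edges) - 1):
        in_bin = (dist >= lag_edges[k]) & (dist < lag_edges[k + 1])
        if in_bin.any():
            gamma[k] = half_sq[in_bin].mean()
    return gamma

# Hypothetical 1-D transect: a smooth spatial trend gives semivariance
# that grows with lag, the signature of spatial correlation.
coords = [(x, 0.0) for x in range(10)]
values = [0.1 * x for x in range(10)]
gamma = empirical_semivariogram(coords, values, lag_edges=[0.5, 1.5, 2.5])
```

A flat semivariogram would support the homogeneous approach; a growing one, as observed here, indicates the parameter heterogeneity the abstract concludes must be modelled.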

  16. Modeling of Mitochondria Bioenergetics Using a Composable Chemiosmotic Energy Transduction Rate Law: Theory and Experimental Validation

    PubMed Central

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

    Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. 
In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html. PMID:21931590

  17. Modeling of mitochondria bioenergetics using a composable chemiosmotic energy transduction rate law: theory and experimental validation.

    PubMed

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

    Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. 
In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html.
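The exact chemiosmotic rate law is given in the paper; as an illustrative stand-in for the structure described here (saturation kinetics of donor and acceptor substrates multiplied by a thermodynamic force-to-flux factor), a generic reversible rate expression can be sketched. This is not the authors' equation, and all parameter values below are hypothetical:

```python
def thermo_kinetic_flux(vmax, s, p, ks, kp, keq):
    """Generic reversible flux: substrate/product saturation kinetics
    multiplied by a thermodynamic drive (1 - Gamma/Keq), where
    Gamma = p/s is the mass-action ratio.  The flux vanishes at
    equilibrium (Gamma = Keq) and reverses sign beyond it."""
    gamma = p / s
    saturation = (s / ks) / (1.0 + s / ks + p / kp)
    return vmax * saturation * (1.0 - gamma / keq)
```

The same structural idea underlies composability: rate and equilibrium constants can be re-parameterised per complex without changing the functional form.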

  18. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content.
All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
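The statistics discussed here all operate on the sensitivity (Jacobian) matrix; a compact sketch of CSS, PCC, and the SVD step, under the simplifying assumption that the Jacobian columns are already dimensionless-scaled (the matrix below is invented to mimic an intercorrelated pair like WFC1/BD1):

```python
import numpy as np

def css(J):
    """Composite scaled sensitivity: RMS of each parameter's scaled
    sensitivities over all observations."""
    J = np.asarray(J, dtype=float)
    return np.sqrt((J ** 2).mean(axis=0))

def pcc(J):
    """Parameter correlation coefficients from the least-squares
    covariance approximation (J^T J)^-1."""
    J = np.asarray(J, dtype=float)
    cov = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

def svd_parameters(J, k):
    """First k right singular vectors define SVD 'super-parameters'
    as linear combinations of the process-model parameters."""
    _, s, vt = np.linalg.svd(np.asarray(J, dtype=float), full_matrices=False)
    return s[:k], vt[:k]

# Invented Jacobian: columns 0 and 1 are nearly collinear, so their
# PCC approaches 1 in absolute value, as described in the abstract.
J = np.array([[1.0, 1.0001, 0.10],
              [2.0, 2.0002, 0.30],
              [1.0, 0.9999, 0.50],
              [3.0, 3.0001, 0.20]])
```

The collinear pair illustrates the abstract's point: a parameter can look dominant through correlation with another parameter rather than through its individual sensitivity.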

  19. Adaptive Memory: Evaluating Alternative Forms of Fitness-Relevant Processing in the Survival Processing Paradigm

    PubMed Central

    Sandry, Joshua; Trafimow, David; Marks, Michael J.; Rice, Stephen

    2013-01-01

    Memory may have evolved to preserve information processed in terms of its fitness-relevance. Based on the assumption that the human mind comprises different fitness-relevant adaptive mechanisms contributing to survival and reproductive success, we compared alternative fitness-relevant processing scenarios with survival processing. Participants rated words for relevancy to fitness-relevant and control conditions followed by a delay and surprise recall test (Experiment 1a). Participants recalled more words processed for their relevance to a survival situation. We replicated these findings in an online study (Experiment 2) and a study using revised fitness-relevant scenarios (Experiment 3). Across all experiments, we did not find a mnemonic benefit for alternative fitness-relevant processing scenarios, questioning assumptions associated with an evolutionary account of remembering. Based on these results, fitness-relevance seems to be too wide-ranging of a construct to account for the memory findings associated with survival processing. We propose that memory may be hierarchically sensitive to fitness-relevant processing instructions. We encourage future researchers to investigate the underlying mechanisms responsible for survival processing effects and work toward developing a taxonomy of adaptive memory. PMID:23585858

  20. Structural properties of templated Ge quantum dot arrays: impact of growth and pre-pattern parameters

    NASA Astrophysics Data System (ADS)

    Tempeler, J.; Danylyuk, S.; Brose, S.; Loosen, P.; Juschkin, L.

    2018-07-01

    In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid shaped QDs with dot densities up to 7.2 × 10¹⁰ cm⁻².

  1. Ultrasensitive investigations of biological systems by fluorescence correlation spectroscopy.

    PubMed

    Haustein, Elke; Schwille, Petra

    2003-02-01

    Fluorescence correlation spectroscopy (FCS) extracts information about molecular dynamics from the tiny fluctuations that can be observed in the emission of small ensembles of fluorescent molecules in thermodynamic equilibrium. Employing a confocal setup in conjunction with highly dilute samples, the average number of fluorescent particles simultaneously within the measurement volume (approximately 1 fl) is minimized. Among the multitude of chemical and physical parameters accessible by FCS are local concentrations, mobility coefficients, rate constants for association and dissociation processes, and even enzyme kinetics. As any reaction causing an alteration of the primary measurement parameters such as fluorescence brightness or mobility can be monitored, the application of this noninvasive method to unravel processes in living cells is straightforward. Due to the high spatial resolution of less than 0.5 microm, selective measurements in cellular compartments, e.g., to probe receptor-ligand interactions on cell membranes, are feasible. Moreover, the observation of local molecular dynamics provides access to environmental parameters such as local oxygen concentrations, pH, or viscosity. Thus, this versatile technique is of particular attractiveness for researchers striving for quantitative assessment of interactions and dynamics of small molecular quantities in biologically relevant systems.
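The fluctuation analysis described here is commonly fitted with the standard analytical autocorrelation model for free 3D diffusion through a Gaussian confocal volume; a minimal sketch (the parameter values in the test are illustrative):

```python
import math

def fcs_g(tau, n_mean, tau_d, s=5.0):
    """FCS autocorrelation for free 3D diffusion:
        G(tau) = (1/N) * (1 + tau/tau_D)^-1 * (1 + tau/(S^2 tau_D))^-1/2
    N:     mean particle number in the focal volume (the amplitude,
           hence the local concentration),
    tau_D: diffusion time (yields the mobility coefficient),
    S:     axial-to-lateral extension ratio of the detection volume."""
    return (1.0 / n_mean) / ((1.0 + tau / tau_d)
                             * math.sqrt(1.0 + tau / (s * s * tau_d)))
```

The amplitude G(0) = 1/N is what makes the method so sensitive at high dilution: fewer molecules in the ~1 fl volume mean larger relative fluctuations.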

  2. Laser tailored nanoparticle arrays to detect molecules at dilute concentration

    NASA Astrophysics Data System (ADS)

    Zanchi, Chiara; Lucotti, Andrea; Tommasini, Matteo; Trusso, Sebastiano; de Grazia, Ugo; Ciusani, Emilio; Ossi, Paolo M.

    2017-02-01

    Gold nanoparticles (NPs) produced by nanosecond pulsed laser ablation in an ambient gas self-assemble on a substrate, resulting in increasingly elaborate architectures of growing thickness, from isolated NP arrays up to percolated films. NPs nucleate and grow in the plasma plume propagating through the gas. Process parameters including laser wavelength, laser energy density, target to substrate distance, and the nature and pressure of the gas affect plasma expansion, and thus asymptotic NP size and kinetic energy. NP size, energy and mobility at landing determine film growth and morphology, which affect the physico-chemical properties of the film. Keeping the other process parameters fixed, we discuss the sensitive dependence of film surface nanostructure on Ar pressure and on laser pulse number. The initial plume velocity and average ablated mass per pulse allow predicting the asymptotic NP size. The control of growth parameters favors fine-tuning of NP aggregation, relevant to plasmonics for obtaining optimized substrates for surface-enhanced Raman spectroscopy (SERS). Their behavior is discussed for testing conditions of interest for clinical application. Both in aqueous and in biological solutions we obtained good sensitivity and reproducibility of the SERS signals for the anti-Parkinson drug apomorphine and for the anti-epilepsy drug carbamazepine.

  3. Structural properties of templated Ge quantum dot arrays: impact of growth and pre-pattern parameters.

    PubMed

    Tempeler, J; Danylyuk, S; Brose, S; Loosen, P; Juschkin, L

    2018-07-06

    In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid shaped QDs with dot densities up to 7.2 × 10¹⁰ cm⁻².

  4. Charge relaxation and dynamics in organic semiconductors

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2006-08-01

    Charge relaxation in dispersive materials is often described in terms of the stretched exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al. and extracted values (or ranges of values) of the materials parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: i) the transport site density appeared to be of the same order of magnitude as the carrier density; ii) it was possible to extract lower-bound values on the materials parameters linked to the transport process; and iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the materials parameters. The data also allowed us to simulate the stretched exponential decay. Our observations suggested that the stretching index and the carrier mobility were related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant coulomb interactions.

  5. Response terminated displays unload selective attention

    PubMed Central

    Roper, Zachary J. J.; Vecera, Shaun P.

    2013-01-01

    Perceptual load theory successfully replaced the early vs. late selection debate by appealing to adaptive control over the efficiency of selective attention. Early selection is observed unless perceptual load (p-Load) is sufficiently low to grant attentional “spill-over” to task-irrelevant stimuli. Many studies exploring load theory have used limited display durations that perhaps impose artificial limits on encoding processes. We extended the exposure duration in a classic p-Load task to alleviate temporal encoding demands that may otherwise tax mnemonic consolidation processes. If the load effect arises from perceptual demands alone, then freeing-up available mnemonic resources by extending the exposure duration should have little effect. The results of Experiment 1 falsify this prediction. We observed a reliable flanker effect under high p-Load, response-terminated displays. Next, we orthogonally manipulated exposure duration and task-relevance. Counter-intuitively, we found that the likelihood of observing the flanker effect under high p-Load resides with the duration of the task-relevant array, not the flanker itself. We propose that stimulus and encoding demands interact to produce the load effect. Our account clarifies how task parameters differentially impinge upon cognitive processes to produce attentional “spill-over” by appealing to visual short-term memory as an additional processing bottleneck when stimuli are briefly presented. PMID:24399983

  6. Response terminated displays unload selective attention.

    PubMed

    Roper, Zachary J J; Vecera, Shaun P

    2013-01-01

    Perceptual load theory successfully replaced the early vs. late selection debate by appealing to adaptive control over the efficiency of selective attention. Early selection is observed unless perceptual load (p-Load) is sufficiently low to grant attentional "spill-over" to task-irrelevant stimuli. Many studies exploring load theory have used limited display durations that perhaps impose artificial limits on encoding processes. We extended the exposure duration in a classic p-Load task to alleviate temporal encoding demands that may otherwise tax mnemonic consolidation processes. If the load effect arises from perceptual demands alone, then freeing-up available mnemonic resources by extending the exposure duration should have little effect. The results of Experiment 1 falsify this prediction. We observed a reliable flanker effect under high p-Load, response-terminated displays. Next, we orthogonally manipulated exposure duration and task-relevance. Counter-intuitively, we found that the likelihood of observing the flanker effect under high p-Load resides with the duration of the task-relevant array, not the flanker itself. We propose that stimulus and encoding demands interact to produce the load effect. Our account clarifies how task parameters differentially impinge upon cognitive processes to produce attentional "spill-over" by appealing to visual short-term memory as an additional processing bottleneck when stimuli are briefly presented.

  7. Setting Mechanical Properties of High Strength Steels for Rapid Hot Forming Processes

    PubMed Central

    Löbbe, Christian; Hering, Oliver; Hiegemann, Lars; Tekkaya, A. Erman

    2016-01-01

    Hot stamping of sheet metal is an established method for the manufacturing of lightweight products with tailored properties. However, the generally-applied continuous roller furnace manifests two crucial disadvantages: the overall process time is long and a local setting of mechanical properties is only feasible through special cooling techniques. Hot forming with rapid heating directly before shaping is a new approach, which not only reduces the thermal intervention in the zones of critical formability and requested properties, but also allows the processing of an advantageous microstructure characterized by less grain growth, additional fractions (e.g., retained austenite), and undissolved carbides. Since the austenitization and homogenization process is strongly dependent on the microstructure constitution, the general applicability of the process-relevant parameters is unknown. Thus, different austenitization parameters are analyzed for the conventional high strength steels 22MnB5, Docol 1400M, and DP1000 with respect to the mechanical properties. To characterize the resulting microstructure, light optical and scanning electron microscopy, micro- and macro-hardness measurements, and X-ray diffraction were conducted subsequent to tensile tests. The investigation not only proves the feasibility of flexibly adjusting strength and ductility; unique microstructures are also observed and the governing mechanisms are clarified. PMID:28773354

  8. Improvement of water treatment pilot plant with Moringa oleifera extract as flocculant agent.

    PubMed

    Beltrán-Heredia, J; Sánchez-Martín, J

    2009-05-01

    Moringa oleifera extract is a high-capacity flocculant agent for turbidity removal in surface water treatment. A complete study of a pilot-plant installation has been carried out. Because of the flocculent sedimentability of treated water, a residual turbidity occurred in the pilot plant (around 30 NTU), which could not be reduced just by a coagulation-flocculation-sedimentation process. Because of this limitation, the pilot plant (filtration excluded) achieved a turbidity removal of up to 70%. A slow sand filter was added as a complement to the installation. A clogging process was characterized according to Carman-Kozeny's hydraulic hypothesis. Kozeny's k parameter was found to be 4.18. Through fouling stages, this k parameter was found to rise up to 6.36. The obtained data are relevant for the design of a real filter in a continuous-feeding pilot plant. Slow sand filtration is highly recommended owing to its low cost, easy handling and low maintenance, so it is a very good complement to Moringa water treatment in developing countries.
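The clogging characterisation can be illustrated with the Carman-Kozeny relation. The form below and the velocity, viscosity, porosity, and grain-size values are assumptions for illustration; only the k values (4.18 clean, up to 6.36 fouled) come from the study.

```python
def carman_kozeny_gradient(velocity, viscosity, porosity, grain_diameter, k=4.18):
    """Pressure gradient across a granular bed (Pa/m):
        dP/L = k * S0^2 * mu * v * (1 - e)^2 / e^3
    with S0 = 6/d the specific surface of spherical grains of
    diameter d, e the bed porosity, mu the dynamic viscosity, and
    v the superficial filtration velocity."""
    s0 = 6.0 / grain_diameter
    return k * s0 ** 2 * viscosity * velocity * (1.0 - porosity) ** 2 / porosity ** 3

# Hypothetical slow-sand-filter values; the head-loss gradient scales
# linearly with k, so fouling (k: 4.18 -> 6.36) raises it by ~52%.
clean = carman_kozeny_gradient(1e-4, 1e-3, 0.4, 5e-4, k=4.18)
fouled = carman_kozeny_gradient(1e-4, 1e-3, 0.4, 5e-4, k=6.36)
```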

  9. Multifaceted Schwinger effect in de Sitter space

    NASA Astrophysics Data System (ADS)

    Sharma, Ramkishor; Singh, Suprit

    2017-07-01

    We investigate particle production à la the Schwinger mechanism in an expanding, flat de Sitter patch as is relevant for the inflationary epoch of our Universe. Defining states and particle content in curved spacetime is certainly not a unique process. There being different prescriptions on how that can be done, we have used the Schrödinger formalism to define instantaneous particle content of the state, etc. This allows us to go past the adiabatic regime to which the effect has been restricted in the previous studies and bring out its multifaceted nature in different settings. Each of these settings gives rise to contrasting features and behavior as per the effect of the electric field and expansion rate on the instantaneous mean particle number. We also quantify the degree of classicality of the process during its evolution using a "classicality parameter" constructed out of parameters of the Wigner function to obtain information about the quantum to classical transition in this case.

  10. Scaling and Systems Considerations in Pulsed Inductive Thrusters

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.

    2007-01-01

    Performance scaling in pulsed inductive thrusters is discussed in the context of previous experimental studies and modeling results. Two processes, propellant ionization and acceleration, are interconnected where overall thruster performance and operation are concerned, but they are separated here to gain physical insight into each process and arrive at quantitative criteria that should be met to address or mitigate inherent inductive thruster difficulties. The effects of preionization in lowering the discharge energy requirements relative to a case where no preionization is employed, and in influencing the location of the initial current sheet, are described. The relevant performance scaling parameters for the acceleration stage are reviewed, emphasizing their physical importance and the numerical values required for efficient acceleration. The scaling parameters are then related to the design of the pulsed power train providing current to the acceleration stage. The impact of various choices in pulsed power train and circuit topology selection are reviewed, paying special attention to how these choices mitigate or exacerbate switching, lifetime, and power consumption issues.

  11. Bioprocess development workflow: Transferable physiological knowledge instead of technological correlations.

    PubMed

    Reichelt, Wieland N; Haas, Florian; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    Microbial bioprocesses need to be designed to be transferable from lab scale to production scale as well as between setups. Although substantial effort is invested to control technological parameters, usually the only true constant parameter is the actual producer of the product: the cell. Hence, instead of solely controlling technological process parameters, the focus should be increasingly laid on physiological parameters. This contribution aims at illustrating a workflow of data life cycle management with special focus on physiology. Information processing condenses the data into physiological variables, while information mining condenses the variables further into physiological descriptors. This basis facilitates data analysis for a physiological explanation for observed phenomena in productivity. Targeting transferability, we demonstrate this workflow using an industrially relevant Escherichia coli process for recombinant protein production and substantiate the following three points: (1) The postinduction phase is independent in terms of productivity and physiology from the preinduction variables specific growth rate and biomass at induction. (2) The specific substrate uptake rate during induction phase was found to significantly impact the maximum specific product titer. (3) The time point of maximum specific titer can be predicted by an easy accessible physiological variable: while the maximum specific titers were reached at different time points (19.8 ± 7.6 h), those maxima were reached all within a very narrow window of cumulatively consumed substrate dSn (3.1 ± 0.3 g/g). Concluding, this contribution provides a workflow on how to gain a physiological view on the process and illustrates potential benefits. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:261-270, 2017. © 2016 American Institute of Chemical Engineers.
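The cumulatively consumed substrate dSn used above can be tracked online. A minimal sketch, assuming dSn is the trapezoidal time-integral of the biomass-specific substrate uptake rate qs (the abstract does not give this definition explicitly, and the time course below is invented):

```python
import numpy as np

def cumulative_specific_substrate(t, feed_rate, biomass):
    """dSn (g substrate per g biomass): cumulative trapezoidal
    integral of qs = feed_rate / biomass over process time.
    t in h, feed_rate in g/h, biomass in g."""
    t = np.asarray(t, dtype=float)
    qs = np.asarray(feed_rate, dtype=float) / np.asarray(biomass, dtype=float)
    increments = 0.5 * (qs[1:] + qs[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(increments)))

# Invented induction phase: a constant qs of 0.2 g/g/h reaches
# dSn = 3.0 g/g after 15 h, near the reported 3.1 +/- 0.3 g/g window.
t = np.linspace(0.0, 15.0, 16)
dsn = cumulative_specific_substrate(t, feed_rate=[2.0] * 16, biomass=[10.0] * 16)
```

Because dSn is monotone in time, crossing the reported window gives a simple physiological trigger for harvesting at maximum specific titer.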

  12. Statistical Modeling of Single Target Cell Encapsulation

    PubMed Central

    Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan

    2011-01-01

    High throughput drop-on-demand systems for separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types is an emerging method in biotechnology that has broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
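The abstract does not state the model's equations; random encapsulation of dilute cells in droplets is commonly treated as Poisson-distributed, which gives one plausible sketch of the estimated quantities (the loading values in the test are illustrative):

```python
import math

def p_k_cells(lam, k):
    """Poisson probability that a droplet contains exactly k cells;
    lam is the mean cell number per droplet (cell concentration
    times droplet volume)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def p_single_target(lam, target_fraction):
    """Probability that a droplet contains exactly one cell AND that
    the cell is of the target type, for a heterogeneous mixture."""
    return p_k_cells(lam, 1) * target_fraction
```

Under this model the single-cell yield lam * exp(-lam) peaks at lam = 1, which is why droplet loading is typically kept near or below one cell per droplet.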

  13. Study of Cold Coiling Spring Steel on Microstructure and Cold Forming Performance

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Liang, Y. L.; Ming, Y.; Zhao, F.

    2017-09-01

    Medium-carbon cold-coiling locomotive spring steels were treated by a novel Q-P-T (quenching-partitioning-tempering) process. Scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray diffraction (XRD) were used to characterize the relevant parameters of the steel. Results show that the microstructure of the tested steel treated by the Q-P-T process is a complex microstructure composed of martensite, bainite and retained austenite. The volume fraction of retained austenite (wt.%) is up to 31%. After pre-deforming and tempering again at 310°C, the plasticity of samples treated by the Q-P-T process remains good. Fracture images show that the Q-P-T samples fail by ductile fracture. This is attributed to the higher volume fraction of retained austenite and the interactions between the multiple phases in the Q-P-T processed sample.

  14. Experimental analysis of Nd-YAG laser cutting of sheet materials - A review

    NASA Astrophysics Data System (ADS)

    Sharma, Amit; Yadava, Vinod

    2018-01-01

    Cutting of sheet material is considered an important process due to its relevance to products of everyday life such as aircraft, ships, cars and furniture. Among the various sheet cutting processes (ASCPs), laser beam cutting is one of the most capable, able to create complex geometries with stringent design requirements in difficult-to-cut sheet materials. Recent research in the area of sheet cutting shows that the Nd-YAG laser is used for cutting sheet material in general and reflective sheet material in particular. This paper reviews experimental analyses of the Nd-YAG laser cutting process carried out to study the influence of laser cutting parameters on process performance indices. The significance of experimental modeling and the different optimization approaches employed by various researchers are also discussed.

  15. Recycling of Exhaust Batteries in Lead-Foam Electrodes

    NASA Astrophysics Data System (ADS)

    Costanza, Girolamo; Tata, Maria Elisa

    Lead and lead-alloy foams have been investigated in this research. In particular, low-cost techniques for the direct production of lead-based electrodes have been analyzed and discussed. The relevance of the main process parameters (powder compacting pressure, granulometry, base metal composition, sintering temperature and time) is examined, and their effect on foam morphology is discussed. The "Sintering and Dissolution Process" (SDP) and the "Replication Process" (RP) have been employed and suitably modified: both spherical urea and NaCl were adopted in the SDP method, while in the replication process the viscosity of the melt proved fundamental. Furthermore, the research examines lead recovery and the recycling of exhaust batteries into foam-based electrodes. A novel method for the direct conversion of Pb scrap into lead foam is also discussed.

  16. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Linking laboratory experiments, which are limited by practical considerations to a few stressors at a few levels, to real-world conditions is therefore constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of those interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
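
    The first step of the two-step process, random sampling of the stressor hyperspace, can be sketched as follows (the stressor names, ranges and units are hypothetical, purely for illustration):

```python
import random

def sample_hyperspace(ranges, n_samples, seed=0):
    """Draw uniform random experimental conditions from an
    n-dimensional hyperspace of stressor levels.

    ranges: dict mapping stressor name -> (low, high)
    Returns a list of dicts, one per experimental condition."""
    rng = random.Random(seed)
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
        for _ in range(n_samples)
    ]

# Hypothetical stressor ranges (names and units are illustrative only)
stressors = {"toxin_a": (0.0, 10.0), "toxin_b": (0.0, 5.0), "temperature": (15.0, 35.0)}
design = sample_hyperspace(stressors, n_samples=50)
```

    Each sampled dict is one experimental condition; the biological responses measured at these conditions then become the training data for the neural network models described in the abstract.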

  17. Essential Annotation Schema for Ecology (EASE)-A framework supporting the efficient data annotation and faceted navigation in ecology.

    PubMed

    Pfaff, Claas-Thido; Eichenberg, David; Liebergesell, Mario; König-Ries, Birgitta; Wirth, Christian

    2017-01-01

    Ecology has become a data-intensive science over the last decades and often relies on the reuse of data in cross-experimental analyses. However, finding data that qualifies for reuse in a specific context can be challenging. It requires good-quality metadata and annotations as well as efficient search strategies. To date, full-text search (often on the metadata only) is the most widely used search strategy, although it is known to be inaccurate. Faceted navigation provides a filter mechanism based on fine-granular metadata, categorizing search objects along numeric and categorical parameters relevant to their discovery. Selecting from these parameters during a full-text search creates a system of filters that allows users to refine and improve the results towards greater relevance. We developed a framework for efficient annotation and faceted navigation in ecology. It consists of an XML schema for storing the annotation of search objects and is accompanied by a vocabulary focused on ecology to support the annotation process. The framework consolidates ideas originating from widely accepted metadata standards, textbooks, scientific literature, and vocabularies, as well as expert knowledge contributed by researchers from ecology and adjacent disciplines.
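
    The faceted-filter mechanism itself is simple to sketch: each facet selection narrows the result set along one categorical or numeric parameter. The record fields below are hypothetical, not taken from the EASE schema:

```python
def faceted_filter(records, facets):
    """Filter search results by facet selections.

    records: list of dicts (metadata annotations per dataset)
    facets:  dict of facet name -> either a set of allowed categorical
             values or a (min, max) tuple for numeric parameters."""
    def matches(rec):
        for name, selection in facets.items():
            value = rec.get(name)
            if value is None:
                return False
            if isinstance(selection, tuple):
                lo, hi = selection
                if not (lo <= value <= hi):
                    return False
            elif value not in selection:
                return False
        return True
    return [r for r in records if matches(r)]

# Hypothetical annotated datasets (field names are illustrative only)
datasets = [
    {"habitat": "forest", "year": 2009, "taxon": "plant"},
    {"habitat": "grassland", "year": 2015, "taxon": "insect"},
    {"habitat": "forest", "year": 2016, "taxon": "insect"},
]
hits = faceted_filter(datasets, {"habitat": {"forest"}, "year": (2010, 2020)})
```

    Combining several facet selections in this way is what lets a researcher narrow a full-text hit list down to datasets that actually qualify for reuse.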

  18. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them in the uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters is decreased and the probability of the observed data improves as well.
Keywords: Monte Carlo simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
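
    The Monte Carlo step can be sketched generically: sample the uncertain inputs from their assumed distributions, re-run the deterministic model, and summarize the output spread. The toy linear "model" and the input distributions below are stand-ins for Delft3D and the Duck94 inputs, purely for illustration:

```python
import random
import statistics

def toy_wave_model(boundary_level, offshore_height):
    """Stand-in for the deterministic model (Delft3D in the study);
    returns a nearshore wave height. Purely illustrative."""
    return 0.6 * offshore_height + 0.1 * boundary_level

def monte_carlo(n_runs, seed=0):
    """Propagate input uncertainty by sampling the inputs from their
    assumed distributions and re-running the model."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_runs):
        boundary = rng.gauss(0.0, 0.05)   # lateral boundary condition [m]
        offshore = rng.gauss(1.5, 0.2)    # offshore wave height [m]
        outputs.append(toy_wave_model(boundary, offshore))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean_h, std_h = monte_carlo(5000)
```

    In practice convergence is checked by increasing `n_runs` until the output statistics stabilize; with a full model each "run" is of course an expensive simulation rather than one arithmetic expression.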

  19. [Transcranial magnetic stimulation (TMS), inhibition processes and attention deficit/hyperactivity disorder (ADHD) - an overview].

    PubMed

    Hoegl, Thomas; Bender, Stephan; Buchmann, Johannes; Kratz, Oliver; Moll, Gunther H; Heinrich, Hartmut

    2014-11-01

    Motor system excitability can be tested by transcranial magnetic stimulation (TMS). In this article, an overview of recent methodological developments and research findings related to attention deficit/hyperactivity disorder (ADHD) is provided. Different TMS parameters that reflect the function of interneurons in the motor cortex may represent neurophysiological markers of inhibition in ADHD, particularly the so-called intracortical inhibition. In children with a high level of hyperactivity and impulsivity, intracortical inhibition was similarly low at rest and shortly before the execution of a movement. TMS-evoked potentials can also be measured in the EEG, so that in future studies the investigation of excitability processes will not be restricted to motor areas. The effects of methylphenidate on motor system excitability may be interpreted as a 'fine-tuning', with these mainly dopaminergic effects also depending on genetic parameters (DAT1 transporter). A differentiated view of the organization of motor control can be achieved by a combined analysis of TMS parameters and event-related potentials. Applying this bimodal approach, strong evidence was obtained for a deviant implementation of motor control in children with ADHD and probably compensatory mechanisms (with involvement of the prefrontal cortex). These findings, which contribute to a better understanding of hyperactivity/impulsivity, inhibitory processes and motor control in ADHD as well as the mechanisms of medication, underline the relevance of TMS as a neurophysiological method in ADHD research.

  20. Small Angle Neutron Scattering Observation of Chain Retraction after a Large Step Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, A.; Heinrich, M.; Pyckhout-Hintzen, W.

    The process of retraction in entangled linear chains after a fast nonlinear stretch was detected from time-resolved but quenched small angle neutron scattering (SANS) experiments on long, well-entangled polyisoprene chains. The statically obtained SANS data cover the relevant time regime for retraction, and they provide a direct, microscopic verification of this nonlinear process as predicted by the tube model. Clear, quantitative agreement is found with recent theories of contour length fluctuations and convective constraint release, using parameters obtained mainly from linear rheology. The theory captures the full range of scattering vectors once the crossover to fluctuations on length scales below the tube diameter is accounted for.

  1. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
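
    The paper designs its control laws via LMIs in a 2D systems setting; as a much simpler illustration of the trial-to-trial error convergence idea underlying iterative learning control, here is a textbook proportional-type ILC law applied to a toy static plant (the plant, the learning gain `gamma`, and the trial count are all illustrative, not the paper's design):

```python
def run_trial(u, plant_gain=0.8):
    """One trial of a trivial static SISO plant y(t) = g*u(t) (illustrative)."""
    return [plant_gain * ui for ui in u]

def ilc(reference, n_trials=30, gamma=0.5):
    """Proportional-type iterative learning control: after each trial,
    update the input from that trial's error,
        u_{k+1}(t) = u_k(t) + gamma * e_k(t)."""
    u = [0.0] * len(reference)
    for _ in range(n_trials):
        y = run_trial(u)
        e = [r - yi for r, yi in zip(reference, y)]
        u = [ui + gamma * ei for ui, ei in zip(u, e)]
    return u, max(abs(ei) for ei in e)

u_final, err = ilc([1.0, 2.0, 1.0])
```

    For this toy plant the error contracts by a factor |1 - gamma*g| per trial; the paper's contribution is precisely the harder case (zero leading Markov parameters, along-the-trial dynamics) where such a naive law is insufficient and LMI-based design is needed.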

  2. Exploring JWST's Capability to Constrain Habitability on Simulated Terrestrial TESS Planets

    NASA Astrophysics Data System (ADS)

    Tremblay, Luke; Britt, Amber; Batalha, Natasha; Schwieterman, Edward; Arney, Giada; Domagal-Goldman, Shawn; Mandell, Avi; Planetary Systems Laboratory; Virtual Planetary Laboratory

    2017-01-01

    In the following, we have worked to develop a flexible "observability" scale for biologically relevant molecules in the atmospheres of newly discovered exoplanets, targeted at the instruments aboard NASA's next flagship mission, the James Webb Space Telescope (JWST). We sought to create such a scale in order to provide the community with a tool with which to optimize target selection for JWST observations based on detections by the upcoming Transiting Exoplanet Survey Satellite (TESS). Current literature has laid the groundwork for defining both biologically relevant molecules and the characteristics that would make a new world "habitable", but it has so far lacked a cohesive analysis of JWST's capability to observe these molecules in exoplanet atmospheres and thereby constrain habitability. In developing our observability scale, we utilized a range of hypothetical planets (over planetary radii and stellar insolation) and generated three self-consistent atmospheric models (of different molecular compositions) for each of our simulated planets. With these planets and their corresponding atmospheres, we utilized the most accurate JWST instrument simulator, created specifically to process transiting exoplanet spectra. Through careful analysis of these simulated outputs, we were able to determine the relevant parameters that affected JWST's ability to constrain each individual molecular band with statistical accuracy, and therefore generated a scale based on those key parameters. As a preliminary test of our observability scale, we have also applied it to the list of TESS candidate stars in order to determine JWST's observational capabilities for any soon-to-be-detected planets in those systems.

  3. Applications of dewetting in micro and nanotechnology.

    PubMed

    Gentili, Denis; Foschi, Giulia; Valle, Francesco; Cavallini, Massimiliano; Biscarini, Fabio

    2012-06-21

    Dewetting is a spontaneous phenomenon where a thin film on a surface ruptures into an ensemble of separated objects, like droplets, stripes, and pillars. Spatial correlations with characteristic distance and object size emerge spontaneously across the whole dewetted area, leading to regular motifs with long-range order. Characteristic length scales depend on film thickness, which is a convenient and robust technological parameter. Dewetting is therefore an attractive paradigm for organizing a material into structures of well-defined micro- or nanometre-size, precisely positioned on a surface, thus avoiding lithographical processes. This tutorial review introduces the reader to the physical-chemical basis of dewetting, shows how the dewetting process can be applied to different functional materials with relevance in technological applications, and highlights the possible strategies to control the length scales of the dewetting process.

  4. Results from a first production of enhanced Silicon Sensor Test Structures produced by ITE Warsaw

    NASA Astrophysics Data System (ADS)

    Bergauer, T.; Dragicevic, M.; Frey, M.; Grabiec, P.; Grodner, M.; Hänsel, S.; Hartmann, F.; Hoffmann, K.-H.; Hrubec, J.; Krammer, M.; Kucharski, K.; Macchiolo, A.; Marczewski, J.

    2009-01-01

    Monitoring the manufacturing process of silicon sensors is essential to ensure stable quality of the produced detectors. During the CMS silicon sensor production we utilised small Test Structures (TS) incorporated on the cut-away of the wafers to measure certain process-relevant parameters. Experience from the CMS production and quality assurance led to enhancements of these TS. Another important application of TS is the commissioning of new vendors: the measurements provide us with a good understanding of the capabilities of a vendor's process. A first batch of the new TS was produced at the Institute of Electron Technology in Warsaw, Poland. We first review the improvements to the original CMS test structures and then discuss a selection of important measurements performed on this first batch.

  5. Mathematical support for automated geometry analysis of lathe machining of oblique peakless round-nose tools

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Tarasov, S. Yu; Podgornyh, O. A.; Shamarin, N. N.; Filippova, E. O.

    2017-01-01

    Automation of engineering processes requires developing the relevant mathematical support and computer software. Analysis of metal cutting kinematics and tool geometry is a necessary key task at the preproduction stage. This paper focuses on developing a procedure for determining the geometry of oblique lathe machining with peakless round-nose tools using vector/matrix transformations. Such an approach allows integration into modern mathematical software packages, in contrast to the traditional analytic description, and is therefore promising for automated control of the preproduction process. A kinematic criterion for applicable tool geometry has been developed from the results of this study, and the effect of tool blade inclination and curvature on the geometry-dependent process parameters was evaluated.
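
    The vector/matrix approach amounts to composing elementary rotation matrices to map points on the cutting edge between the tool and workpiece frames. The sketch below is generic; the angle names and rotation order are illustrative, not the paper's specific convention:

```python
import math

def rot_z(angle):
    """Elementary rotation about the z axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(angle):
    """Elementary rotation about the x axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transform_edge_point(point, inclination, rotation):
    """Map a point on the cutting edge from the tool frame to the
    workpiece frame by composing two elementary rotations
    (angle names and order are illustrative)."""
    return mat_vec(rot_z(rotation), mat_vec(rot_x(inclination), point))

p = transform_edge_point([0.0, 1.0, 0.0], math.radians(10), math.radians(30))
```

    Because each transform is a plain matrix product, chains of tool-setting angles integrate directly into numerical software packages, which is the advantage the paper cites over the traditional analytic description.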

  6. Application of TRIZ Methodology in Diffusion Welding System Optimization

    NASA Astrophysics Data System (ADS)

    Ravinder Reddy, N.; Satyanarayana, V. V.; Prashanthi, M.; Suguna, N.

    2017-12-01

    Welding is used extensively for metal joining in manufacturing. In recent years, the diffusion welding method has significantly increased the quality of welds. Nevertheless, diffusion welding has seen comparatively little research and application progress, and relevant information is lacking on welding design, such as fixtures, parameter selection and integrated design, for the joining of thick and thin materials with or without interlayers. This article intends to apply innovative methods to diffusion welding design. Guided by the theory of inventive problem solving (TRIZ), this will help to decrease trial and error and the risk of failure in the welding process. The article aims to provide welding design personnel with innovative design ideas for research and practical application.

  7. Odd-Parity Superconductivity near an Inversion Breaking Quantum Critical Point in One Dimension

    DOE PAGES

    Ruhman, Jonathan; Kozii, Vladyslav; Fu, Liang

    2017-05-31

    In this work, we study how an inversion-breaking quantum critical point affects the ground state of a one-dimensional electronic liquid with repulsive interaction and spin-orbit coupling. We find that regardless of the interaction strength, the critical fluctuations always lead to a gap in the electronic spin sector. The origin of the gap is a two-particle backscattering process, which becomes relevant due to renormalization of the Luttinger parameter near the critical point. The resulting spin-gapped state is topological and can be considered as a one-dimensional version of a spin-triplet superconductor. Interestingly, in the case of a ferromagnetic critical point, the Luttinger parameter is renormalized in the opposite manner, such that the system remains nonsuperconducting.

  8. The neural correlates of implicit self-relevant processing in low self-esteem: an ERP study.

    PubMed

    Yang, Juan; Guan, Lili; Dedovic, Katarina; Qi, Mingming; Zhang, Qinglin

    2012-08-30

    Previous neuroimaging studies have shown that implicit and explicit processing of self-relevant (schematic) material elicit activity in many of the same brain regions. Electrophysiological studies of the neural processing of explicit self-relevant cues have generally supported the view that P300 is an index of attention to self-relevant stimuli; however, no study to date has investigated the temporal course of implicit self-relevant processing. The current study investigates the time course of implicit self-processing by comparing the processing of self-relevant and non-self-relevant words while subjects make a judgment about the color of the words in an implicit attention task. Sixteen low self-esteem participants were examined using event-related potential (ERP) technology. We hypothesized that this implicit attention task would involve the P2 component rather than the P300 component; the P2 component has been associated with perceptual analysis and attentional allocation and may be more likely to occur in unconscious conditions such as this task. Results showed that the latency of the P2 component, which indexes the time required for perceptual analysis, was more prolonged for self-relevant words than for non-self-relevant words. Our results suggest that judging the color of the word interfered with automatic processing of self-relevant information and resulted in less efficient processing of self-relevant words. Together with previous ERP studies examining the processing of explicit self-relevant cues, these findings suggest that explicit and implicit processing of self-relevant information do not elicit the same ERP components. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Geomatic Methods for the Analysis of Data in the Earth Sciences: Lecture Notes in Earth Sciences, Vol. 95

    NASA Astrophysics Data System (ADS)

    Pavlis, Nikolaos K.

    Geomatics is a trendy term that has been used in recent years to describe academic departments that teach and research the theories, methods, algorithms, and practices used in processing and analyzing data related to the Earth and other planets. Naming trends aside, geomatics can be considered the mathematical and statistical “toolbox” that allows Earth scientists to extract information about physically relevant parameters from the available data and to accompany that information with some measure of its reliability. This book is an attempt to present the mathematical-statistical methods used in data analysis within various disciplines—geodesy, geophysics, photogrammetry and remote sensing—from the unifying perspective that the inverse problem formalism permits. At the same time, it allows the author to stress the relevance of statistical methods in achieving an optimal solution.

  10. Impact of processing parameters on the haemocompatibility of Bombyx mori silk films.

    PubMed

    Seib, F Philipp; Maitz, Manfred F; Hu, Xiao; Werner, Carsten; Kaplan, David L

    2012-02-01

    Silk has traditionally been used for surgical sutures due to its lasting strength and durability; however, the use of purified silk proteins as a scaffold material for vascular tissue engineering goes beyond traditional use and requires application-orientated biocompatibility testing. For this study, a library of Bombyx mori silk films was generated and exposed to various solvents and treatment conditions to reflect current silk processing techniques. The films, along with clinically relevant reference materials, were exposed to human whole blood to determine silk blood compatibility. All substrates showed an initial inflammatory response comparable to polylactide-co-glycolide (PLGA), and a low to moderate haemostasis response similar to polytetrafluoroethylene (PTFE) substrates. In particular, samples that were water annealed at 25 °C for 6 h demonstrated the best blood compatibility based on haemostasis parameters (e.g. platelet decay, thrombin-antithrombin complex, platelet factor 4, granulocyte-platelet conjugates) and inflammatory parameters (e.g. C3b, C5a, CD11b, surface-associated leukocytes). Multiple factors such as treatment temperature and solvent influenced the biological response, though no single physical parameter such as β-sheet content, isoelectric point or contact angle accurately predicted blood compatibility. These findings, when combined with prior in vivo data on silk, support a viable future for silk-based vascular grafts. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. A new concept of a unified parameter management, experiment control, and data analysis in fMRI: application to real-time fMRI at 3T and 7T.

    PubMed

    Hollmann, M; Mönch, T; Mulla-Osman, S; Tempelmann, C; Stadler, J; Bernarding, J

    2008-10-30

    In functional MRI (fMRI), complex experiments and applications require increasingly complex parameter handling, as the experimental setup usually consists of separate software and hardware systems. Advanced real-time applications such as neurofeedback-based training or brain-computer interfaces (BCIs) may even require adaptive changes to the paradigm and experimental setup during the measurement. This would be facilitated by automated management of the overall workflow and control of the communication between all experimental components. We realized such a concept based on an XML software framework called the Experiment Description Language (EDL). All parameters relevant to real-time data acquisition, real-time fMRI (rtfMRI) statistical data analysis, stimulus presentation, and activation processing are stored in one central EDL file and processed during the experiment. A usability study comparing central EDL parameter management with traditional approaches showed an improvement in overall experimental handling. Based on this concept, a feasibility study realizing a dynamic rtfMRI-based brain-computer interface showed that the developed system in combination with EDL was able to reliably detect and evaluate activation patterns in real time. The implementation of centrally controlled communication between the subsystems involved in rtfMRI experiments reduced potential inconsistencies and will open new applications for adaptive BCIs.

  12. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid-scale ocean vertical mixing processes. These parameters are typically estimated using Earth system Models of Intermediate Complexity (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content, and a Gaussian process emulator to interpolate the model output to arbitrary parameter settings. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters, explore the sensitivity of the results to prior assumptions, and estimate the relative skill of different observations in constraining the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model errors and observational errors, and explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections with the UVic ESCM model in future studies.
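
    The Bayesian estimation step can be sketched with a random-walk Metropolis sampler. Everything below is illustrative: the flat prior, the one-parameter linear "model" standing in for the UVic ESCM, and the synthetic observations are all assumptions for the sake of a self-contained example:

```python
import math
import random

def log_likelihood(sensitivity, observations, forcing=1.0, sigma=0.3):
    """Gaussian log-likelihood of observed warming given a climate
    sensitivity parameter; the linear model is a stand-in for the
    emulated climate model (illustrative only)."""
    return sum(-0.5 * ((obs - sensitivity * forcing) / sigma) ** 2
               for obs in observations)

def metropolis(observations, n_steps=20000, step=0.2, seed=0):
    """Random-walk Metropolis sampling of the posterior (flat prior)."""
    rng = random.Random(seed)
    current = 1.0
    current_ll = log_likelihood(current, observations)
    samples = []
    for _ in range(n_steps):
        proposal = current + rng.gauss(0.0, step)
        proposal_ll = log_likelihood(proposal, observations)
        if math.log(rng.random()) < proposal_ll - current_ll:
            current, current_ll = proposal, proposal_ll
        samples.append(current)
    return samples

obs = [2.9, 3.1, 3.0, 3.2]  # synthetic observations, not real data
posterior = metropolis(obs)
```

    After discarding burn-in, the retained samples approximate the posterior pdf; in the study the expensive model evaluation inside the loop is replaced by the Gaussian process emulator, which is what makes MCMC over a climate model tractable.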

  13. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasts is valuable to water resources management and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasts. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of the input data through rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider the uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and the inclusion of output error in the MCMC framework are expected to enhance the application of neural networks to uncertainty analysis of hydrologic forecasting.

  14. Vehicle response-based track geometry assessment using multi-body simulation

    NASA Astrophysics Data System (ADS)

    Kraft, Sönke; Causse, Julien; Coudert, Frédéric

    2018-02-01

    The assessment of the geometry of railway tracks is an indispensable requirement for safe rail traffic: defects which present a risk to the safety of the train have to be identified and the necessary measures taken. According to current standards, amplitude thresholds are applied to the track geometry parameters measured by recording cars. This geometry-based assessment has proved its value but suffers from the low correlation between the geometry parameters and the vehicle reactions; experience shows that some defects leading to critical vehicle reactions are underestimated by this approach. Using vehicle responses in the track geometry assessment process makes it possible to identify critical defects and improve maintenance operations. This work presents a vehicle response-based assessment method using multi-body simulation. The choice of the relevant operating conditions and the estimation of the simulation uncertainty are outlined. Defects are identified from exceedances of track geometry and vehicle response parameters; they are then classified using clustering methods and their correlation with the vehicle response is analysed. The use of vehicle responses allows the detection of critical defects which are not identified from geometry parameters alone.
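
    The defect-identification step, exceedance detection on a measured or simulated signal, can be sketched as below. The signal, threshold, and merging gap are hypothetical; the study's actual thresholds come from the applicable standards and simulated vehicle responses:

```python
def detect_defects(signal, threshold, min_gap=5):
    """Flag samples where a track geometry or vehicle response
    parameter exceeds its threshold, merging exceedances closer
    than min_gap samples into a single defect (illustrative)."""
    exceed = [i for i, v in enumerate(signal) if abs(v) > threshold]
    defects = []
    for i in exceed:
        if defects and i - defects[-1][-1] < min_gap:
            defects[-1].append(i)
        else:
            defects.append([i])
    return [(d[0], d[-1]) for d in defects]

# Hypothetical response trace with two isolated defects
trace = [0.1] * 20 + [1.5, 1.8, 1.2] + [0.1] * 30 + [2.0] + [0.1] * 10
defects = detect_defects(trace, threshold=1.0)
```

    Each returned (start, end) span is one candidate defect; in the paper these candidates are then grouped by clustering and correlated with the simulated vehicle reactions.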

  15. Systems modelling methodology for the analysis of apoptosis signal transduction and cell death decisions.

    PubMed

    Rehm, Markus; Prehn, Jochen H M

    2013-06-01

    Systems biology and systems medicine, i.e. the application of systems biology in a clinical context, are becoming increasingly important in biology, drug discovery and health care. Systems biology incorporates knowledge and methods applied in mathematics, physics and engineering that may not be part of classical training in biology. Here we provide an introduction to basic concepts and methods relevant to the construction and application of systems models for apoptosis research. We present the key methods for representing biochemical processes in signal transduction models, with particular reference to apoptotic processes. We demonstrate how such models enable a quantitative and temporal analysis of changes in molecular entities in response to an apoptosis-inducing stimulus, and provide information on cell survival and cell death decisions. We introduce methods for analyzing the spatial propagation of cell death signals, and discuss the concepts of sensitivity analysis that enable prediction of network responses to disturbances of single or multiple parameters. Copyright © 2013 Elsevier Inc. All rights reserved.
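
    The core of such signal transduction models is a system of rate equations integrated over time. As a minimal, entirely hypothetical sketch (not a published apoptosis model), consider a single activation step with first-order degradation, integrated by the Euler method:

```python
def simulate_caspase(k_act=0.05, k_deg=0.01, stimulus=1.0,
                     dt=0.01, t_end=200.0):
    """Euler integration of a minimal hypothetical scheme:
    procaspase -> active caspase at rate k_act*stimulus, with
    first-order degradation of the active form. Rate constants
    and species are illustrative only."""
    pro, active = 1.0, 0.0
    t = 0.0
    trajectory = []
    while t < t_end:
        act_flux = k_act * stimulus * pro
        pro += dt * (-act_flux)
        active += dt * (act_flux - k_deg * active)
        t += dt
        trajectory.append((t, active))
    return trajectory

traj = simulate_caspase()
```

    The resulting time course (a transient rise and decay of the active species) is the kind of quantitative, temporal readout the review describes; sensitivity analysis then asks how this trajectory shifts when `k_act`, `k_deg`, or the stimulus are perturbed.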

  16. Experimental constraints from flavour changing processes and physics beyond the Standard Model.

    PubMed

    Gersabeck, M; Gligorov, V V; Serra, N

    Flavour physics has a long tradition of paving the way for direct discoveries of new particles and interactions. Results over the last decade have placed stringent bounds on the parameter space of physics beyond the Standard Model. Early results from the LHC, and its dedicated flavour factory LHCb, have further tightened these constraints and reiterate the ongoing relevance of flavour studies. The experimental status of flavour observables in the charm and beauty sectors is reviewed, covering measurements of CP violation, neutral meson mixing, and rare decays.

  17. Fused Deposition Technique for Continuous Fiber Reinforced Thermoplastic

    NASA Astrophysics Data System (ADS)

    Bettini, Paolo; Alitta, Gianluca; Sala, Giuseppe; Di Landro, Luca

    2017-02-01

    A simple technique for the production of continuous fiber reinforced thermoplastic by fused deposition modeling, which involves a common 3D printer with quite limited modifications, is presented. An adequate setting of processing parameters and deposition path yields components with substantially enhanced mechanical characteristics compared to conventional 3D printed items. The most relevant problems related to the simultaneous feeding of fibers and polymer are discussed. The properties of the obtained aramid fiber reinforced polylactic acid (PLA), in terms of impregnation quality and mechanical response, are measured.

  18. Keyword extraction by nonextensivity measure.

    PubMed

    Mehri, Ali; Darooneh, Amir H

    2011-05-01

    The presence of a long-range correlation in the spatial distribution of a relevant word type, in spite of random occurrences of an irrelevant word type, is an important feature of human-written texts. We classify the correlation between the occurrences of words by nonextensive statistical mechanics for the word-ranking process. In particular, we use the nonextensivity parameter as an alternative metric of spatial correlation in the text, by which words may be ranked. Finally, we compare different methods for keyword extraction. © 2011 American Physical Society
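As a runnable illustration of ranking words by their spatial distribution, here is a simpler, related statistic (the normalized standard deviation of inter-occurrence distances), not the paper's nonextensivity parameter: relevant words cluster and score above 1, while evenly scattered words score near or below 1.

```python
# Simplified spatial-clustering score for a word, assuming we already have the
# sorted positions at which the word occurs in a text of known length.
def sigma_metric(positions, text_length):
    """Normalized std of inter-occurrence gaps; > 1 suggests clustering."""
    n = len(positions)
    if n < 2:
        return 0.0
    # Inter-occurrence distances, wrapping around the end of the text
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    gaps.append(text_length - positions[-1] + positions[0])
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return (var ** 0.5) / mean

# Clustered occurrences score higher than evenly spread ones
print(sigma_metric([10, 12, 14, 500, 502], 1000))   # clustered: > 1
print(sigma_metric([0, 200, 400, 600, 800], 1000))  # perfectly even: 0.0
```

Ranking every word type in a document by such a score and taking the top entries is the essence of spatial keyword extraction; the nonextensivity parameter plays the analogous role in the paper.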

  19. Physical Processes Controlling Earth's Climate

    NASA Technical Reports Server (NTRS)

    Genio, Anthony Del

    2013-01-01

    As background for consideration of the climates of the other terrestrial planets in our solar system and the potential habitability of rocky exoplanets, we discuss the basic physics that controls the Earth's present climate, with particular emphasis on the energy and water cycles. We define several dimensionless parameters relevant to characterizing a planet's general circulation, climate and hydrological cycle. We also consider issues associated with the use of past climate variations as indicators of future anthropogenically forced climate change, and recent advances in understanding projections of future climate that might have implications for Earth-like exoplanets.

  20. The spatial-temporal evolution law of microseismic activities in the failure process of deep rock masses

    NASA Astrophysics Data System (ADS)

    Yuan-hui, Li; Gang, Lei; Shi-da, Xu; Da-wei, Wu

    2018-07-01

    Under high stress and blasting disturbance, the failure of deep rock masses is a complex, dynamic evolutionary process. To reveal the relation between macroscopic failure of deep rock masses and the spatial-temporal evolution law of the micro-cracking within, the initiation, extension, and connection of micro-cracks under blasting disturbance and the deformation and failure mechanism of deep rock masses were studied. The investigation was carried out using the microseismic (MS) monitoring system established in the deep mining area of Ashele Copper Mine (Xinjiang Uygur Autonomous Region, China). The results showed that the failure of the deep rock masses is a dynamic process accompanied by stress release and stress adjustment. It is not only related to the blasting-based mining, but also associated with zones of stress concentration formed due to the mining. In the spatial domain, the concentrated area in the cloud chart of MS event density before rock failure shows essentially the same pattern as the damaged rocks identified by scanning the mined-out areas, which indicates that the cloud chart can be used to determine potential risk areas of rocks. In the time domain, relevant parameters of MS events showed characteristic changes before the failure of the rocks: the energy index decreased while the cumulative apparent volume gradually increased, the magnitude distribution of microseismic events decreased rapidly, and the fractal dimension decreased at first and then remained stable. This demonstrates that these changes in relevant MS parameters allow researchers to predict the failure time of the rocks. By analysing the dynamic evolution process of the failure of the deep rock masses, areas at potential risk can be predicted spatially and temporally. The results provide guidance for those involved in the safe production and management of underground engineering and establish a theoretical basis for the study of the stability of deep rock masses.

  1. The Certainty of Uncertainty: Potential Sources of Bias and Imprecision in Disease Ecology Studies.

    PubMed

    Lachish, Shelly; Murray, Kris A

    2018-01-01

    Wildlife diseases have important implications for wildlife and human health, the preservation of biodiversity and the resilience of ecosystems. However, understanding disease dynamics and the impacts of pathogens in wild populations is challenging because these complex systems can rarely, if ever, be observed without error. Uncertainty in disease ecology studies is commonly defined in terms of either heterogeneity in detectability (due to variation in the probability of encountering, capturing, or detecting individuals in their natural habitat) or uncertainty in disease state assignment (due to misclassification errors or incomplete information). In reality, however, uncertainty in disease ecology studies extends beyond these components of observation error and can arise from multiple varied processes, each of which can lead to bias and a lack of precision in parameter estimates. Here, we present an inventory of the sources of potential uncertainty in studies that attempt to quantify disease-relevant parameters from wild populations (e.g., prevalence, incidence, transmission rates, force of infection, risk of infection, persistence times, and disease-induced impacts). We show that uncertainty can arise via processes pertaining to aspects of the disease system, the study design, the methods used to study the system, and the state of knowledge of the system, and that uncertainties generated via one process can propagate through to others because of interactions between the numerous biological, methodological and environmental factors at play. We show that many of these sources of uncertainty may not be immediately apparent to researchers (for example, unidentified crypticity among vectors, hosts or pathogens, a mismatch between the temporal scale of sampling and disease dynamics, demographic or social misclassification), and thus have received comparatively little consideration in the literature to date. Finally, we discuss the type of bias or imprecision introduced by these varied sources of uncertainty and briefly present appropriate sampling and analytical methods to account for, or minimise, their influence on estimates of disease-relevant parameters. This review should assist researchers and practitioners to navigate the pitfalls of uncertainty in wildlife disease ecology studies.

  2. Relating ground truth collection to model sensitivity

    NASA Technical Reports Server (NTRS)

    Amar, Faouzi; Fung, Adrian K.; Karam, Mostafa A.; Mougin, Eric

    1993-01-01

    The importance of collecting high quality ground truth before a SAR mission over a forested area is twofold. First, the ground truth is used in the analysis and interpretation of the measured backscattering properties; second, it helps to justify the use of a scattering model to fit the measurements. Unfortunately, ground truth is often collected based on visual assessment of what is perceived to be important without regard to the mission itself. Sites are selected based on brief surveys of large areas, and the ground truth is collected by a process of selecting and grouping different scatterers. After the fact, it may turn out that some of the relevant parameters are missing. A three-layer canopy model based on the radiative transfer equations is used to determine, beforehand, the relevant parameters to be collected. Detailed analysis of the contribution to scattering and attenuation of various forest components is carried out. The goal is to identify the forest parameters which most influence the backscattering as a function of frequency (P-, L-, and C-bands) and incident angle. The influence on backscattering and attenuation of branch diameters, lengths, angular distribution, and permittivity; trunk diameters, lengths, and permittivity; and needle sizes, their angular distribution, and permittivity are studied in order to maximize the efficiency of the ground truth collection efforts. Preliminary results indicate that while a scatterer may not contribute to the total backscattering, its contribution to attenuation may be significant depending on the frequency.

  3. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678

  4. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
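The Lotka-Volterra competition dynamics underlying this model can be sketched numerically. The following is a minimal illustration with invented parameter values; the impacts and inheritance rates in the paper are instead estimated from census and survey data:

```python
# Toy Lotka-Volterra competition between two languages x and y, integrated
# with a simple Euler scheme. All parameter values here are illustrative.
def simulate(x0, y0, rx, ry, K, ax, ay, steps=5000, dt=0.01):
    """Euler integration of two competing speaker populations x and y.

    rx, ry: growth rates; K: carrying capacity; ax, ay: competition impacts
    (effect of the other language on x and y, respectively).
    """
    x, y = x0, y0
    for _ in range(steps):
        dx = rx * x * (1 - (x + ax * y) / K)
        dy = ry * y * (1 - (y + ay * x) / K)
        x += dx * dt
        y += dy * dt
    return x, y

# With a stronger competitive impact on y, language x comes to dominate
x_final, y_final = simulate(x0=500, y0=500, rx=0.4, ry=0.4, K=1000, ax=0.5, ay=1.5)
print(round(x_final), round(y_final))
```

With these asymmetric impacts there is no feasible coexistence equilibrium, so the weaker language declines toward extinction while the stronger one approaches the carrying capacity, the kind of trajectory the paper fits against historical speaker counts.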

  5. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
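A minimal deterministic sketch of the ripple-spreading idea follows; the linear decay law, threshold and coordinates are illustrative assumptions, not the authors' exact formulation:

```python
import math

# Sketch of a ripple-spreading network generator: each node emits a ripple
# whose energy decays with radius; a link i-j forms when i's ripple still
# carries enough energy when its front reaches j.
def ripple_network(coords, energies, decay=1.0, threshold=0.2):
    """coords: list of (x, y); energies: initial ripple energy per node."""
    n = len(coords)
    edges = set()
    for i in range(n):
        xi, yi = coords[i]
        for j in range(n):
            if i == j:
                continue
            d = math.hypot(coords[j][0] - xi, coords[j][1] - yi)
            if energies[i] - decay * d > threshold:   # energy left on arrival
                edges.add(tuple(sorted((i, j))))
    return sorted(edges)

coords = [(0, 0), (1, 0), (3, 0), (3, 1)]
print(ripple_network(coords, energies=[2.0, 1.5, 1.5, 1.5]))  # [(0, 1), (2, 3)]
```

Because nothing here is random, the same coordinates and energies always reproduce the same topology, which is the model's point (ii); stochastic character can be reintroduced by randomly initializing the ripple parameters.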

  6. Dark forces in the sky: signals from Z{sup ′} and the dark Higgs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, Nicole F.; Cai, Yi; Leane, Rebecca K.

    2016-08-01

    We consider the indirect detection signals for a self-consistent hidden U(1) model containing a Majorana dark matter candidate, χ, a dark gauge boson, Z′, and a dark Higgs, s. Compared with a model containing only a dark matter candidate and Z′ mediator, the addition of the scalar provides a mass generation mechanism for the dark sector particles and is required in order to avoid unitarity violation at high energies. We find that the inclusion of the two mediators opens up a new two-body s-wave annihilation channel, χχ→sZ′. This new process, which is missed in the usual single-mediator simplified model approach, can be the dominant annihilation channel. This provides rich phenomenology for indirect detection searches, allows indirect searches to explore regions of parameter space not accessible with other commonly considered s-wave annihilation processes, and enables both the Z′ and scalar couplings to be probed. We examine the phenomenology of the sector with a focus on this new process, and determine the limits on the model parameter space from Fermi data on dwarf spheroidal galaxies and other relevant experiments.

  7. A computer model for liquid jet atomization in rocket thrust chambers

    NASA Astrophysics Data System (ADS)

    Giridharan, M. G.; Lee, J. G.; Krishnan, A.; Yang, H. Q.; Ibrahim, E.; Chuech, S.; Przekwas, A. J.

    1991-12-01

    The process of atomization has been used as an efficient means of burning liquid fuels in rocket engines, gas turbine engines, internal combustion engines, and industrial furnaces. Despite its widespread application, this complex hydrodynamic phenomenon has not been well understood, and predictive models for this process are still in their infancy. The difficulty in simulating the atomization process arises from the relatively large number of parameters that influence it, including the details of the injector geometry, liquid and gas turbulence, and the operating conditions. In this study, numerical models are developed from first principles, to quantify factors influencing atomization. For example, the surface wave dynamics theory is used for modeling the primary atomization and the droplet energy conservation principle is applied for modeling the secondary atomization. The use of empirical correlations has been minimized by shifting the analyses to fundamental levels. During applications of these models, parametric studies are performed to understand and correlate the influence of relevant parameters on the atomization process. The predictions of these models are compared with existing experimental data. The main tasks of this study were the following: development of a primary atomization model; development of a secondary atomization model; development of a model for impinging jets; development of a model for swirling jets; and coupling of the primary atomization model with a CFD code.

  8. Determination of Parameters for the Supercritical Extraction of Antioxidant Compounds from Green Propolis Using Carbon Dioxide and Ethanol as Co-Solvent.

    PubMed

    Machado, Bruna Aparecida Souza; Barreto, Gabriele de Abreu; Costa, Aline Silva; Costa, Samantha Serra; Silva, Rejane Pina Dantas; da Silva, Danielle Figuerêdo; Brandão, Hugo Neves; da Rocha, José Luiz Carneiro; Nunes, Silmar Baptista; Umsza-Guez, Marcelo Andres; Padilha, Francine Ferreira

    2015-01-01

    The aim of this study was to determine the best processing conditions to extract Brazilian green propolis using supercritical extraction technology. For this purpose, the influence of different parameters was evaluated, such as S/F (solvent mass in relation to solute mass), percentage of co-solvent (1 and 2% ethanol), temperature (40 and 50°C) and pressure (250, 350 and 400 bar), using supercritical carbon dioxide. The Global Yield Isotherms (GYIs) were obtained through the evaluation of the yield, and the chemical composition of the extracts was characterized in terms of total phenolic compounds, flavonoids, antioxidant activity, 3,5-diprenyl-4-hydroxycinnamic acid (Artepillin C) and 4-hydroxycinnamic acid (p-coumaric acid). The best results were identified at 50°C, 350 bar, 1% ethanol (co-solvent) and S/F of 110. Under these conditions, contents of 8.93±0.01 and 0.40±0.05 g/100 g of Artepillin C and p-coumaric acid, respectively, were obtained, indicating the efficiency of the extraction process. Despite the low yield of the process, the extracts had high contents of relevant compounds, proving the viability of the process for obtaining green propolis extracts with important biological applications due to their composition.

  9. Mimicking bug-like surface structures and their fluid transport produced by ultrashort laser pulse irradiation of steel

    NASA Astrophysics Data System (ADS)

    Kirner, S. V.; Hermens, U.; Mimidis, A.; Skoulas, E.; Florian, C.; Hischen, F.; Plamadeala, C.; Baumgartner, W.; Winands, K.; Mescheder, H.; Krüger, J.; Solis, J.; Siegel, J.; Stratakis, E.; Bonse, J.

    2017-12-01

    Ultrashort laser pulses with durations in the fs-to-ps range were used for large area surface processing of steel aimed at mimicking the morphology and extraordinary wetting behaviour of bark bugs (Aradidae) found in nature. The processing was performed by scanning the laser beam over the surface of polished flat sample surfaces. A systematic variation of the laser processing parameters (peak fluence and effective number of pulses per spot diameter) allowed the identification of different regimes associated with characteristic surface morphologies (laser-induced periodic surface structures, i.e., LIPSS, grooves, spikes, etc.). Moreover, different laser processing strategies, varying laser wavelength, pulse duration, angle of incidence, irradiation atmosphere, and repetition rates, made it possible to achieve a range of morphologies that resemble specific structures found on bark bugs. To identify the ideal combination of parameters for mimicking bug-like structures, the surfaces were inspected by scanning electron microscopy. In particular, tilted micrometre-sized spikes are the best match for the structure found on bark bugs. Complementary to the morphology study, the wetting behaviour of the surface structures for water and oil was examined in terms of philic/phobic nature and fluid transport. These results point out a route towards reproducing complex surface structures inspired by nature and their functional response in technologically relevant materials.

  10. Predicting non-stationary algal dynamics following changes in hydrometeorological conditions using data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Kim, S.; Seo, D. J.

    2017-12-01

    When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions change in the aquatic system. The changes can be harmful to human health and potentially fatal to fish habitat. Therefore, it is important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, it is necessary for the watershed water quality models to be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture the nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following TW increase. The study area is located in the Republic of Korea where waterway change due to weir construction and drought concurrently occurred around 2012. In this work we use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.
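As a toy illustration of sequential data assimilation updating a model parameter, here is a scalar Kalman filter tracking a drifting parameter through an abrupt regime shift. This is a generic stand-in for the DA techniques applied to HSPF, with all values hypothetical:

```python
import random

# Scalar Kalman filter tracking a parameter that shifts mid-series,
# analogous to a biophysiochemical rate changing after a thermal disturbance.
random.seed(0)
true_b = 1.0                 # the "true" parameter being tracked
b_est, P = 0.0, 1.0          # initial estimate and its error variance
Q, R = 1e-4, 0.25            # process (drift) and observation noise variances

for step in range(300):
    if step == 150:
        true_b = 2.0         # abrupt regime shift (e.g., warmer water)
    y = true_b + random.gauss(0, R ** 0.5)   # noisy observation
    P += Q                   # forecast step: the parameter may have drifted
    K = P / (P + R)          # Kalman gain
    b_est += K * (y - b_est) # analysis step: blend forecast and observation
    P *= 1 - K

print(round(b_est, 2))       # moves toward the post-shift value near 2.0
```

A model with static parameters would keep the pre-shift value forever; the nonzero process variance Q is what lets the filter follow the nonstationarity.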

  11. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing working-process parameters in air conditioning systems is demonstrated in this work. The research is performed using the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are considered. The search for the optimal solution in the multi-objective optimization of the system is made by finding the minimum of the dual-target vector created by the Pareto method of linear and weight compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that, away from the optimal solutions, the technical and economic parameters of air conditioning systems deviate considerably from their minimum values, and these deviations grow significantly as the technical parameters move further from the values that are optimal for both capital investment and operating costs. The production and operation of conditioners with parameters that deviate considerably from the optimal values will increase material and power costs. The research establishes the boundaries of the region of optimal values for the technical and economic parameters in air conditioning system design.
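The weighted-compromise idea can be sketched as follows; the cost functions, candidate grid and weights below are invented stand-ins for the paper's capital- and operating-cost target functions:

```python
# Weighted-sum compromise over two competing cost objectives of a single
# design parameter x. The cost models are hypothetical illustrations.
def capital_cost(x):
    return 100 + 20 * x          # larger equipment costs more up front

def operating_cost(x):
    return 400 / x               # larger equipment runs cheaper

def weighted_optimum(w, xs):
    """Pick x minimizing w*capital + (1-w)*operating over candidates xs."""
    return min(xs, key=lambda x: w * capital_cost(x) + (1 - w) * operating_cost(x))

candidates = [round(0.5 + 0.1 * k, 1) for k in range(100)]   # x in [0.5, 10.4]
for w in (0.2, 0.5, 0.8):
    x = weighted_optimum(w, candidates)
    print(w, x, round(capital_cost(x) + operating_cost(x), 1))
```

Sweeping the weight w traces out compromise solutions between the two objectives: weighting capital cost more heavily pushes the optimum toward smaller equipment, and vice versa, which is the linear/weight-compromise flavour of the Pareto method described above.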

  12. Computational analysis of non-Newtonian boundary layer flow of nanofluid past a semi-infinite vertical plate with partial slip

    NASA Astrophysics Data System (ADS)

    Amanulla, C. H.; Nagendra, N.; Suryanarayana Reddy, M.

    2018-03-01

    The two-dimensional, laminar natural-convective nanofluid flow with heat and mass transfer past a semi-infinite vertical plate with velocity and thermal slip effects is studied theoretically. The coupled governing partial differential equations are transformed to ordinary differential equations by using non-similarity transformations. The obtained ordinary differential equations are solved numerically by the well-known Keller Box Method (KBM). The influences of the emerging parameters, i.e. the Casson fluid parameter (β), Brownian motion parameter (Nb), thermophoresis parameter (Nt), buoyancy ratio parameter (N), Lewis number (Le), Prandtl number (Pr), velocity slip factor (Sf) and thermal slip factor (ST), on velocity, temperature and nano-particle concentration distributions are illustrated graphically and interpreted at length. The major sources of nanoparticle migration in nanofluids are thermophoresis and Brownian motion. Excellent agreement with existing published literature is observed for the limiting case, and the solutions are additionally validated against a Nakamura tridiagonal method. It is observed that the nanoparticle concentration at the surface decreases with an increase in the slip parameter. The study is relevant to enrobing processes for electric-conductive nano-materials, of potential use in aerospace and other industries.
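The paper solves its boundary-layer equations with the Keller Box scheme. As a lighter, self-contained illustration of the same class of computation, here is the classical Blasius flat-plate equation f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, solved by RK4 integration plus a shooting bisection on the wall shear f''(0). This is an analogue, not the paper's method or equations:

```python
# Blasius boundary-layer equation solved by shooting: bisect on f''(0)
# until the far-field condition f'(inf) = 1 is met.
def blasius_rhs(state):
    f, fp, fpp = state
    return (fp, fpp, -0.5 * f * fpp)

def integrate(fpp0, eta_max=10.0, h=0.01):
    """RK4-integrate from the wall; return f' at eta_max."""
    state = (0.0, 0.0, fpp0)          # f(0) = 0, f'(0) = 0, guessed f''(0)
    for _ in range(int(eta_max / h)):
        k1 = blasius_rhs(state)
        k2 = blasius_rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = blasius_rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = blasius_rhs(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + (h / 6.0) * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state[1]

lo, hi = 0.1, 1.0                     # bracket for the wall shear f''(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) < 1.0:          # far-field velocity too low: shoot higher
        lo = mid
    else:
        hi = mid
fpp_wall = 0.5 * (lo + hi)
print(round(fpp_wall, 4))             # classical value, approximately 0.332
```

The Keller Box method instead discretizes the same kind of two-point boundary-value problem implicitly on a grid, but the structure (reduce to first-order ODEs, enforce wall and far-field conditions) is the same.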

  13. Translational Rodent Paradigms to Investigate Neuromechanisms Underlying Behaviors Relevant to Amotivation and Altered Reward Processing in Schizophrenia.

    PubMed

    Young, Jared W; Markou, Athina

    2015-09-01

    Amotivation and reward-processing deficits have long been described in patients with schizophrenia and considered large contributors to patients' inability to integrate well in society. No effective treatments exist for these symptoms, partly because the neuromechanisms mediating such symptoms are poorly understood. Here, we propose a translational neuroscientific approach that can be used to assess reward/motivational deficits related to the negative symptoms of schizophrenia using behavioral paradigms that can also be conducted in experimental animals. By designing and using objective laboratory behavioral tools that are parallel in their parameters in rodents and humans, the neuromechanisms underlying behaviors with relevance to these symptoms of schizophrenia can be investigated. We describe tasks that measure the motivation of rodents to expend physical and cognitive effort to gain rewards, as well as probabilistic learning tasks that assess both reward learning and feedback-based decision making. The latter tasks are relevant because of demonstrated links of performance deficits correlating with negative symptoms in patients with schizophrenia. These tasks utilize operant techniques in order to investigate neural circuits targeting a specific domain across species. These tasks therefore enable the development of insights into altered mechanisms leading to negative symptom-relevant behaviors in patients with schizophrenia. Such findings will then enable the development of targeted treatments for these altered neuromechanisms and behaviors seen in schizophrenia. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  14. Processing methods, characteristics and adsorption behavior of tire derived carbons: a review.

    PubMed

    Saleh, Tawfik A; Gupta, Vinod Kumar

    2014-09-01

    The remarkable increase in the number of vehicles worldwide, and the lack of both technical and economical mechanisms of disposal, make waste tires a serious source of pollution. One potential recycling process is pyrolysis followed by a chemical activation process to produce porous activated carbons. Many researchers have recently proved the capability of such carbons as adsorbents to remove various types of pollutants, including organic and inorganic species. This review attempts to compile relevant knowledge about the production methods of carbon from waste rubber tires. The effects of various process parameters are reviewed, including temperature and heating rate in the pyrolysis stage, and activation temperature and time, activating agent and activating gas in the activation stage. This review highlights the use of waste-tire derived carbons to remove various types of pollutants such as heavy metals, dyes and pesticides from aqueous media. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Stress-driven buckling patterns in spheroidal core/shell structures.

    PubMed

    Yin, Jie; Cao, Zexian; Li, Chaorong; Sheinman, Izhak; Chen, Xi

    2008-12-09

    Many natural fruits and vegetables adopt an approximately spheroidal shape and are characterized by their distinct undulating topologies. We demonstrate that various global pattern features can be reproduced by anisotropic stress-driven buckles on spheroidal core/shell systems, which implies that the relevant mechanical forces might provide a template underpinning the topological conformation in some fruits and plants. Three dimensionless parameters, the ratio of effective size/thickness, the ratio of equatorial/polar radii, and the ratio of core/shell moduli, primarily govern the initiation and formation of the patterns. A distinct morphological feature occurs only when these parameters fall within certain ranges: In a prolate spheroid, reticular buckles take over longitudinal ridged patterns when one or more parameters become large. Our results demonstrate that some universal features of fruit/vegetable patterns (e.g., those observed in Korean melons, silk gourds, ribbed pumpkins, striped cavern tomatoes, and cantaloupes, etc.) may be related to the spontaneous buckling from mechanical perspectives, although the more complex biological or biochemical processes are involved at deep levels.

  16. Coherence properties of the 0-π qubit

    NASA Astrophysics Data System (ADS)

    Groszkowski, Peter; Di Paolo, A.; Grimsmo, A. L.; Blais, A.; Schuster, D. I.; Houck, A. A.; Koch, Jens

    2018-04-01

    Superconducting circuits rank among the most interesting architectures for the implementation of quantum information processing devices. The recently proposed 0-π qubit (Brooks et al 2013 Phys. Rev. A 87 52306) promises increased protection from spontaneous relaxation and dephasing. In this paper we present a detailed theoretical study of the coherence properties of the 0-π device, investigate relevant decoherence channels, and show estimates for achievable coherence times in multiple parameter regimes. In our analysis, we include disorder in circuit parameters, which results in the coupling of the qubit to a low-energy, spurious harmonic mode. We analyze the effects of such coupling on decoherence, in particular dephasing due to photon shot noise, and outline how such a noise channel can be mitigated by appropriate parameter choices. We find that the 0-π qubit performs well and may become an attractive candidate for the implementation of next-generation superconducting devices for use in quantum computing and information processing.

  17. Kepler Uniform Modeling of KOIs: MCMC Notes for Data Release 25

    NASA Technical Reports Server (NTRS)

    Hoffman, Kelsey L.; Rowe, Jason F.

    2017-01-01

    This document describes data products related to the reported planetary parameters and uncertainties for the Kepler Objects of Interest (KOIs) based on a Markov-Chain-Monte-Carlo (MCMC) analysis. Reported parameters, uncertainties and data products can be found at the NASA Exoplanet Archive. The codes used for this data analysis are available on the GitHub website (Rowe 2016). The relevant paper for details of the calculations is Rowe et al. (2015). The main differences between the model fits discussed here and those in the DR24 catalogue are that the DR25 light curves were used in the analysis, our processing of the MAST light curves took into account different data flags, the number of chains calculated was doubled to 200 000, and the parameters which are reported are based on a damped least-squares fit, instead of the median value from the Markov chain or the chain with the lowest χ² as reported in the past.
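As a generic illustration of the MCMC machinery involved (a toy Metropolis sampler for a single parameter against synthetic Gaussian data, not the Kepler photometry model), comparing the chain median with a least-squares estimate:

```python
import math
import random

# Toy Metropolis sampler: posterior of one parameter mu given noisy data,
# then compare the chain median with the least-squares estimate (here the
# sample mean, which is the LS solution for a constant model).
random.seed(1)
data = [random.gauss(2.0, 0.5) for _ in range(100)]   # synthetic observations
sigma = 0.5                                           # known noise level

def log_like(mu):
    return -sum((d - mu) ** 2 for d in data) / (2 * sigma ** 2)

chain, mu = [], 0.0
ll = log_like(mu)
for _ in range(10000):
    prop = mu + random.gauss(0, 0.05)                 # random-walk proposal
    ll_prop = log_like(prop)
    if ll_prop - ll > math.log(random.random()):      # Metropolis acceptance
        mu, ll = prop, ll_prop
    chain.append(mu)

burned = chain[3000:]                                 # discard burn-in
chain_median = sorted(burned)[len(burned) // 2]
least_squares = sum(data) / len(data)
print(round(chain_median, 2), round(least_squares, 2))
```

For this symmetric, well-behaved posterior the two estimators agree closely; the DR25 choice of reporting damped least-squares values rather than chain medians matters mainly when posteriors are skewed or multimodal.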

  18. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  19. Limits of detection and decision. Part 3

    NASA Astrophysics Data System (ADS)

    Voigtman, E.

    2008-02-01

    It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute errors in the rates of false positives and false negatives were only 0.3%.
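    The WLS calibration and Currie decision level described above can be sketched as follows, for an assumed "hockey stick" NPM with errorless parameters; all numbers are illustrative, not the paper's simulation settings.

```python
import numpy as np

# Weighted least squares calibration of a linear CMS with a "hockey
# stick" noise precision model sigma(c) = sigma0 + k*c, followed by
# Currie's decision level. Values are illustrative only.
rng = np.random.default_rng(1)
true_slope, true_intercept = 2.0, 0.5
sigma0, k = 0.1, 0.05

conc = np.linspace(0.0, 10.0, 11)
sigma = sigma0 + k * conc                       # heteroscedastic noise
resp = true_intercept + true_slope * conc + rng.normal(0.0, sigma)

# np.polyfit takes weights as 1/sigma (not 1/sigma**2)
slope_fit, intercept_fit = np.polyfit(conc, resp, 1, w=1.0 / sigma)

# Currie decision level for alpha = 0.05 with known blank noise sigma0:
z_alpha = 1.645
response_decision_level = intercept_fit + z_alpha * sigma0
conc_decision_level = z_alpha * sigma0 / slope_fit
```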

  20. Comparative evaluation of topographical data of dental implant surfaces applying optical interferometry and scanning electron microscopy.

    PubMed

    Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F

    2017-08-01

    Comparability of topographical data of implant surfaces in the literature is low, and their clinical relevance is often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to produce statistically similar 3-dimensional roughness parameter results and to evaluate these data against predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50 μm and 5 μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Depending on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and continuing the search for proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
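    As a sketch of how areal height parameters of this kind are computed, the snippet below evaluates Sa, Sq, Sz, Ssk and Sku from a height map following the ISO 25178 definitions. The random synthetic surface stands in for measured interferometry or SEM-derived topography and is not data from this study.

```python
import numpy as np

# Areal height parameters from a height map z(x, y); a zero-mean
# Gaussian random surface is used as a stand-in for measured data.
rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.0, size=(256, 256))   # heights, e.g. in micrometres

zm = z - z.mean()                  # heights relative to the mean plane
Sa = np.mean(np.abs(zm))           # arithmetic mean height
Sq = np.sqrt(np.mean(zm**2))       # root-mean-square height
Sz = zm.max() - zm.min()           # maximum height (peak to valley)
Ssk = np.mean(zm**3) / Sq**3       # skewness of the height distribution
Sku = np.mean(zm**4) / Sq**4       # kurtosis of the height distribution
```

For a Gaussian surface, Sa tends toward sqrt(2/π) times Sq, Ssk toward 0 and Sku toward 3, which makes the synthetic case a convenient sanity check.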

  1. Advancing representation of hydrologic processes in the Soil and Water Assessment Tool (SWAT) through integration of the TOPographic MODEL (TOPMODEL) features

    USGS Publications Warehouse

    Chen, J.; Wu, Y.

    2012-01-01

    This paper presents a study of the integration of the Soil and Water Assessment Tool (SWAT) model and the TOPographic MODEL (TOPMODEL) features for enhancing the physical representation of hydrologic processes. In SWAT, four hydrologic processes, which are surface runoff, baseflow, groundwater re-evaporation and deep aquifer percolation, are modeled by using a group of empirical equations. The empirical equations usually constrain the simulation capability of relevant processes. To replace these equations and to model the influences of topography and water table variation on streamflow generation, the TOPMODEL features are integrated into SWAT, and a new model, SWAT-TOP, is developed. In the new model, the process of deep aquifer percolation is removed, the concept of groundwater re-evaporation is refined, and the processes of surface runoff and baseflow are remodeled. Consequently, three parameters in SWAT are discarded, and two new parameters to reflect the TOPMODEL features are introduced. SWAT-TOP and SWAT are applied to the East River basin in South China, and the results reveal that, compared with SWAT, the new model can provide a more reasonable simulation of the hydrologic processes of surface runoff, groundwater re-evaporation, and baseflow. This study demonstrates that an established hydrologic model can be further improved by integrating the features of another model, which is a possible way to enhance our understanding of the workings of catchments.
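    TOPMODEL's streamflow-generation features rest on the topographic wetness index ln(a / tan(beta)), where a is the upslope contributing area per unit contour length and beta the local slope. A minimal sketch with illustrative values (not East River basin data):

```python
import numpy as np

# Topographic wetness index for three example cells, from steep/low
# accumulation to flat/high accumulation.
a = np.array([50.0, 500.0, 5000.0])      # m^2 per m contour length
beta = np.radians([15.0, 5.0, 1.0])      # local slope angles
twi = np.log(a / np.tan(beta))
# Flatter, high-accumulation cells (large TWI) saturate first and
# generate saturation-excess surface runoff in TOPMODEL.
```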

  2. Progress in Operational Analysis of Launch Vehicles in Nonstationary Flight

    NASA Technical Reports Server (NTRS)

    James, George; Kaouk, Mo; Cao, Timothy

    2013-01-01

    This paper presents recent results in an ongoing effort to understand and develop techniques to process launch vehicle data, which is extremely challenging for modal parameter identification. The primary source of difficulty is the nonstationary nature of the situation: the system is changing, the environment is not steady, and an active control system is operating. Hence, the primary tool for producing clean operational results (significant data lengths and data averaging) is not available to the user. The work reported herein uses a correlation-based, two-step operational modal analysis approach to process the relevant data sets. A significant drawback of such processing of short time histories is a series of beating phenomena due to the inability to average out random modal excitations. A recursive correlation process coupled to a new convergence metric (designed to mitigate the beating phenomena) is the object of this study. It has been found in limited studies that this process creates clean modal frequency estimates but numerically alters the damping.

  3. Vapor Hydrogen Peroxide Sterilization Certification

    NASA Astrophysics Data System (ADS)

    Chen, Fei; Chung, Shirley; Barengoltz, Jack

    For interplanetary missions landing on a planet of potential biological interest, United States NASA planetary protection currently requires that the flight system be assembled, tested and ultimately launched with the intent of minimizing the bioload taken to and deposited on the planet. Currently the only NASA-approved microbial reduction method is the dry heat sterilization process. However, with the utilization of such elements as highly sophisticated electronics and sensors in modern spacecraft, this process presents significant materials challenges and is thus an undesirable bioburden reduction method to design engineers. The objective of this work is to introduce vapor hydrogen peroxide (VHP) as an alternative to dry heat microbial reduction to meet planetary protection requirements. The VHP sterilization technology is widely used by the medical industry, but high doses of VHP may degrade the performance of flight hardware or compromise material compatibility. The goal of our study is to determine the minimum VHP process conditions for planetary protection-acceptable microbial reduction levels. A series of experiments was conducted using Geobacillus stearothermophilus to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. In addition to the obvious process parameters (hydrogen peroxide concentration, number of pulses, and exposure duration), the investigation also considered the possible effect of environmental parameters: temperature, relative humidity, and material substrate effects on lethality were also studied. Based on the results, a most conservative D value was recommended. This recommended D value was also validated using VHP "hardy" strains that were isolated from cleanrooms and environmental populations collected from spacecraft-relevant areas. The efficiency of VHP at ambient conditions as well as VHP material compatibility will also be presented.
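    The D value mentioned above is the exposure time that reduces the viable spore count tenfold. A minimal sketch of the standard survivor-curve calculation, with illustrative counts rather than the study's measurements:

```python
import math

# Decimal reduction (D) value from a survivor curve.
def d_value(t_minutes, n0, n_survivors):
    """D = t / (log10 N0 - log10 N), in minutes per log reduction."""
    return t_minutes / (math.log10(n0) - math.log10(n_survivors))

# Example: 1e6 spores reduced to 1e2 after 20 min of VHP exposure.
D = d_value(20.0, 1e6, 1e2)   # 5.0 min per log reduction
# A 6-log reduction would then require about 6 * D minutes of exposure.
```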

  4. Prediction of porosity of food materials during drying: Current challenges and directions.

    PubMed

    Joardder, Mohammad U H; Kumar, C; Karim, M A

    2017-07-18

    Pore formation in food samples is a common physical phenomenon observed during dehydration processes. Pore evolution during drying significantly affects the physical properties and quality of dried foods. Therefore, it should be taken into consideration when predicting transport processes in the drying sample. Characteristics of pore formation depend on the drying process parameters, product properties and processing time. Understanding the physics of pore formation and evolution during drying will assist in accurately predicting the drying kinetics and quality of food materials. Researchers have been trying to develop mathematical models to describe pore formation and evolution during drying. In this study, existing porosity models are critically analysed and their limitations are identified. Better insight into the factors affecting porosity is provided, and suggestions are proposed to overcome the limitations. These include consideration of process parameters such as glass transition temperature, sample temperature, and variable material properties in the porosity models. Several researchers have proposed models for porosity prediction of food materials during drying. However, these models are either very simplistic or empirical in nature and fail to consider significant factors that influence porosity. An in-depth understanding of pore characteristics is required for developing a generic model of porosity. A micro-level analysis of pore formation is presented for better understanding, which will help in developing an accurate and generic porosity model.

  5. Measuring dynamic kidney function in an undergraduate physiology laboratory.

    PubMed

    Medler, Scott; Harrington, Frederick

    2013-12-01

    Most undergraduate physiology laboratories are very limited in how they treat renal physiology. It is common to find teaching laboratories equipped with the capability for high-resolution digital recordings of physiological functions (muscle twitches, ECG, action potentials, respiratory responses, etc.), but most urinary laboratories still rely on a "dipstick" approach of urinalysis. Although this technique can provide some basic insights into the functioning of the kidneys, it overlooks the dynamic processes of filtration, reabsorption, and secretion. In the present article, we provide a straightforward approach of using renal clearance measurements to estimate glomerular filtration rate, fractional water reabsorption, glucose clearance, and other physiologically relevant parameters. The estimated values from our measurements in laboratory are in close agreement with those anticipated based on textbook parameters. For example, we found glomerular filtration rate to average 124 ± 45 ml/min, serum creatinine to be 1.23 ± 0.4 mg/dl, and fractional water reabsorption to be ∼96.8%. Furthermore, analyses for the class data revealed significant correlations between parameters like fractional water reabsorption and urine concentration, providing opportunities to discuss urine concentrating mechanisms and other physiological processes. The procedures outlined here are general enough that most undergraduate physiology laboratory courses should be able to implement them without difficulty.
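    The clearance calculation underlying these estimates can be sketched as follows; the concentrations and flow rates are illustrative textbook-style values, not the class data from the study.

```python
# Renal clearance from a timed urine collection: C = (U * V) / P, where
# U is the urine concentration, V the urine flow rate (ml/min) and P the
# plasma concentration (U and P in the same units, e.g. mg/dl).

def clearance(u_conc, urine_flow, p_conc):
    """Clearance in ml/min."""
    return u_conc * urine_flow / p_conc

# Creatinine clearance approximates the glomerular filtration rate:
gfr = clearance(u_conc=100.0, urine_flow=1.5, p_conc=1.2)   # 125 ml/min

# Fractional water reabsorption: 1 - V / GFR
frac_h2o_reabsorbed = 1.0 - 1.5 / gfr   # 0.988, i.e. ~98.8%
```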

  6. Disentangling the adult attention-deficit hyperactivity disorder endophenotype: parametric measurement of attention.

    PubMed

    Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2011-11-01

    Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.

  7. Sensitivity of boundary layer variables to PBL schemes over the central Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Xu, L.; Liu, H.; Wang, L.; Du, Q.; Liu, Y.

    2017-12-01

    Planetary boundary layer (PBL) parameterization schemes play a critical role in numerical weather prediction and research. They describe physical processes associated with the momentum, heat and humidity exchange between the land surface and the atmosphere. In this study, two non-local (YSU and ACM2) and two local (MYJ and BouLac) planetary boundary layer parameterization schemes in the Weather Research and Forecasting (WRF) model have been tested over the central Tibetan Plateau regarding their capability to model boundary layer parameters relevant for surface energy exchange. The model performance has been evaluated against measurements from the Third Tibetan Plateau atmospheric scientific experiment (TIPEX-III). Simulated meteorological parameters and turbulence fluxes have been compared with observations through standard statistical measures. Model results show acceptable behavior, but no particular scheme produces the best performance for all locations and parameters. All PBL schemes underestimate near-surface air temperatures over the Tibetan Plateau. By investigating the surface energy budget components, the results suggest that downward longwave radiation and sensible heat flux are the main factors causing the lower near-surface temperature. Because the downward longwave radiation and sensible heat flux are affected by atmospheric moisture and land-atmosphere coupling, respectively, improvements in the representation of water vapor distribution and land-atmosphere energy exchange are needed for a better representation of PBL physical processes over the central Tibetan Plateau.
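    The "standard statistical measures" used in such scheme evaluations typically include bias, root-mean-square error and correlation. A minimal sketch with illustrative near-surface temperatures, not TIPEX-III data:

```python
import numpy as np

# Scoring a simulated series against observations (2 m temperature, K).
obs = np.array([285.1, 286.3, 287.9, 289.0, 288.4])
sim = np.array([284.0, 285.1, 286.8, 288.2, 287.5])

bias = np.mean(sim - obs)                   # negative -> cold bias
rmse = np.sqrt(np.mean((sim - obs) ** 2))   # overall error magnitude
corr = np.corrcoef(sim, obs)[0, 1]          # phase agreement
```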

  8. Drama advertisements: moderating effects of self-relevance on the relations among empathy, information processing, and attitudes.

    PubMed

    Chebat, Jean-Charles; Vercollier, Sarah Drissi; Gélinas-Chebat, Claire

    2003-06-01

    The effects of drama versus lecture format in public service advertisements are studied in a 2 (format) x 2 (malaria vs AIDS) factorial design. Two structural equation models are built (one for each level of self-relevance), showing two distinct patterns. In both low and high self-relevance situations, empathy plays a key role. Under low self-relevance conditions, drama enhances information processing through empathy. Under high self-relevance conditions, the advertisement format has neither significant cognitive nor empathetic effects. The information processing generated by the highly relevant topic affects viewers' empathy, which in turn affects the attitude toward the advertisement and the behavioral intent. As predicted by the Elaboration Likelihood Model, the advertisement format enhances attitudes and information processing mostly under low self-relevance conditions. Under low self-relevance conditions, empathy enhances information processing, while under high self-relevance the converse relation holds.

  9. Experimental Aspects in Beam Characterization

    NASA Astrophysics Data System (ADS)

    Sona, Alberto

    2004-08-01

    Beam characterization is the pre-requisite of any research exploiting light beams, especially in cases involving laser beams. One can rely on the beam parameters provided by the manufacturer, but often they are inadequate or insufficient for the experimental data analysis. The full characterization of a laser beam can require the determination of many parameters (about ten for a generic beam); however, for symmetrical beams the significant ones can reduce to only a few. The characterization can be performed with the accuracy requested by the application and limited to the relevant parameters. The main parameters of interest will be defined and the measurement procedures and equipment will be discussed. The ISO standards consider the following parameters mainly of interest for industrial applications: 1) Beam widths, divergence angle and beam propagation ratio. 2) Power, energy density distribution. 3) Parameters for stigmatic and simple astigmatic beams. 4) Parameters for general astigmatic beams. 5) Geometrical laser beam classification and propagation. 6) Power, energy and temporal characteristics. 7) Beam positional stability. 8) Beam polarization. 9) Spectral characteristics. 10) Shape of a laser wavefront: phase distribution. All the above points will be briefly discussed as regards the experimental problems involved. Special attention will be given to the methods for measuring the intensity distribution and to the related instrumentation to derive the beam propagation ratio, the beam quality factor M², or the beam parameter product. Examples of the relevance of these parameters for specific applications will be given. Depending on the spectral range, specific detectors are used: CCD cameras with detector arrays in the visible and near infrared, thermocameras with a single detector and scanning system for the medium and far IR. The major problems in data collection and processing will be discussed.
Another new and not yet fully investigated area is the characterization of laser beams by wavefront measuring instruments. One possible approach is the use of self-referencing interferometers such as point diffraction interferometers. Alternatively, wavefront gradient measuring instruments can be used, such as Hartmann-Shack sensors. Wavefront intensity and phase joint distributions can now be measured at the same time. This can in addition provide new methods to derive the modal content. A short review of the experimental problems in this area still lacking a practical solution will be given. Note from Publisher: This article contains the abstract and references only.
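    The beam propagation ratio mentioned above can be computed from the measured waist radius and far-field half-angle divergence, following the ISO 11146 convention. A minimal sketch for an assumed ideal Gaussian beam:

```python
import math

# Beam parameter product (BPP) and beam propagation ratio M^2 from the
# waist radius w0 and far-field half-angle divergence theta.
def beam_quality(w0_m, theta_rad, wavelength_m):
    bpp = w0_m * theta_rad                         # in m * rad
    m2 = math.pi * w0_m * theta_rad / wavelength_m # M^2 >= 1
    return bpp, m2

# Ideal TEM00 Gaussian at 1064 nm with w0 = 100 um:
lam = 1064e-9
w0 = 100e-6
theta = lam / (math.pi * w0)            # diffraction-limited divergence
bpp, m2 = beam_quality(w0, theta, lam)  # m2 = 1 for the ideal beam
```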

  10. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speedup calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
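    A minimal CPU sketch of gradient-based pulse optimization on a toy single-qubit state transfer, with the gradient obtained by forward sensitivity propagation (the same derivative an automatic-differentiation framework would return). This illustrates the idea only; it is not the paper's GPU implementation, and the system and step sizes are hypothetical.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
I2 = np.eye(2, dtype=complex)

dt, n_steps = 0.1, 10                 # total evolution time T = 1
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)

def fidelity_and_grad(u):
    # One step: exp(-i u sx dt) = cos(u dt) I - i sin(u dt) sx (exact),
    # and its derivative with respect to the control amplitude u.
    U = np.cos(u * dt) * I2 - 1j * np.sin(u * dt) * sx
    dU = dt * (-np.sin(u * dt) * I2 - 1j * np.cos(u * dt) * sx)
    psi, dpsi = psi0.copy(), np.zeros(2, dtype=complex)
    for _ in range(n_steps):
        # Propagate the state and its sensitivity d(psi)/du together.
        psi, dpsi = U @ psi, dU @ psi + U @ dpsi
    overlap = np.vdot(target, psi)
    fid = abs(overlap) ** 2
    grad = 2 * np.real(np.conj(overlap) * np.vdot(target, dpsi))
    return fid, grad

u = 0.3                               # initial control amplitude
for _ in range(200):                  # plain gradient ascent on fidelity
    fid, grad = fidelity_and_grad(u)
    u += 0.5 * grad
# Converges toward u = pi/2, where the |0> -> |1> transfer fidelity is 1.
```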

  11. Hydrological response in catchments with debris-covered glaciers in the semi-arid Andes, Chile

    NASA Astrophysics Data System (ADS)

    Caro, A.; McPhee, J.; MacDonell, S.; Pellicciotti, F.; Ayala, A.

    2016-12-01

    Glaciers in the semi-arid Andes Cordillera in Chile shrank rapidly during the 20th century. Negative mass balance has contributed to an increase in the surface area of debris-covered glaciers. Recent research in Chile suggests that contributions from glaciers to summer-season river flow in dry years are very important; however, the hydrological processes determining the glacier contribution are still poorly understood in the region. This work seeks to determine appropriate parameters for the simulation of melt volume in two watersheds dominated by debris-covered glaciers, in order to understand its variability in time and space, in the most populated area of Chile. The hydrological simulation is performed for the Tapado (30°S) and Pirámide (33°S) glaciers, which can be characterized as cold and temperate, respectively. To simulate the hydrological behaviour we adopt the physically based TOPographic Kinematic wave APproximation model (TOPKAPI-ETH). The hydrometeorological records necessary for model runs were collected through fieldwork from 2013 to 2015. Regarding the calibration of the ETI melt parameters, the TF value for Pirámide is a third of that for the Tapado glacier, while the SRF value for Tapado is half that for Pirámide. Glacier runoff and the snow and ice storage constants are higher for Tapado than for Pirámide. Results show a contribution of glacial outflow to runoff during 2015 of 55% for Tapado and 77% for Pirámide, with maximum contributions between January and March for Tapado and between November and March for Pirámide, highlighting the relevance of persistent spring snow cover, and of the shelter the debris cover provides, in reducing glacier melt. The results reveal the relevance of the glacier contribution to mountain streams and identify the calibration parameters most relevant to the hydrological balance of glacier basins in the Andes.
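    The ETI (enhanced temperature-index) melt model referenced above combines a temperature factor TF and a shortwave radiation factor SRF. A minimal sketch with hypothetical parameter values, not the calibrated Tapado or Pirámide values:

```python
# ETI melt model of the kind used in TOPKAPI-ETH: melt is driven by air
# temperature (via TF) and absorbed shortwave radiation (via SRF), and
# is zero below a temperature threshold.

def eti_melt(temp_c, swin_wm2, albedo, tf=0.05, srf=0.0008, t_thresh=1.0):
    """Melt rate (mm w.e. per time step); parameters are hypothetical."""
    if temp_c <= t_thresh:
        return 0.0
    return tf * temp_c + srf * (1.0 - albedo) * swin_wm2

# Snow's higher albedo suppresses the radiative term, which is one reason
# persistent spring snow cover reduces glacier melt:
melt_ice = eti_melt(5.0, 800.0, albedo=0.3)    # dark bare ice
melt_snow = eti_melt(5.0, 800.0, albedo=0.8)   # fresh snow cover
```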

  12. Astrophysical properties of star clusters in the Magellanic Clouds homogeneously estimated by ASteCA

    NASA Astrophysics Data System (ADS)

    Perren, G. I.; Piatti, A. E.; Vázquez, R. A.

    2017-06-01

    Aims: We seek to produce a homogeneous catalog of astrophysical parameters of 239 resolved star clusters, located in the Small and Large Magellanic Clouds, observed in the Washington photometric system. Methods: The cluster sample was processed with the recently introduced Automated Stellar Cluster Analysis (ASteCA) package, which ensures both an automatized and a fully reproducible treatment, together with a statistically based analysis of their fundamental parameters and associated uncertainties. The fundamental parameters determined for each cluster with this tool, via a color-magnitude diagram (CMD) analysis, are metallicity, age, reddening, distance modulus, and total mass. Results: We generated a homogeneous catalog of structural and fundamental parameters for the studied cluster sample and performed a detailed internal error analysis along with a thorough comparison with values taken from 26 published articles. We studied the distribution of cluster fundamental parameters in both Clouds and obtained their age-metallicity relationships. Conclusions: The ASteCA package can be applied to an unsupervised determination of fundamental cluster parameters, which is a task of increasing relevance as more data becomes available through upcoming surveys. A table with the estimated fundamental parameters for the 239 clusters analyzed is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/602/A89

  13. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  14. Smart manufacturing of complex shaped pipe components

    NASA Astrophysics Data System (ADS)

    Salchak, Y. A.; Kotelnikov, A. A.; Sednev, D. A.; Borikov, V. N.

    2018-03-01

    Manufacturing industry is constantly improving. Nowadays the most relevant trend is widespread automation and optimization of the production process. This paper presents a novel approach for smart manufacturing of steel pipe valves. The system includes two main parts: mechanical treatment and quality assurance units. Mechanical treatment is performed by a milling machine with computerized numerical control, whilst the quality assurance unit contains three testing modules for different tasks: an X-ray testing module, an optical scanning module and an ultrasound testing module. Each of these provides reliable results that contain information about failures of the technological process and deviations in the geometrical parameters of the valves. The system also allows detecting defects on the surface or in the inner structure of the component.

  15. The acoustic radiation force on a heated (or cooled) rigid sphere - Theory

    NASA Technical Reports Server (NTRS)

    Lee, C. P.; Wang, T. G.

    1984-01-01

    A finite amplitude sound wave can exert a radiation force on an object due to a second-order effect of the wave field. The radiation force on a rigid small sphere (i.e., in the long-wavelength limit), which has a temperature different from that of the environment, is presently studied. This investigation assumes no thermally induced convection and is relevant to materials processing in the absence of gravity. Both isotropic and nonisotropic temperature profiles are considered. In this calculation, the acoustic effect and the heat transfer process are essentially decoupled because of the long-wavelength limit. The heat transfer information required for determining the force is contained in parameters which are integrals over the temperature distribution.

  16. Callus remodelling model

    NASA Astrophysics Data System (ADS)

    Miodowska, Justyna; Bielski, Jan; Kromka-Szydek, Magdalena

    2018-01-01

    The objective of this paper is to investigate the healing process of the callus using a bone remodelling approach. A new mathematical model of bone remodelling is proposed, including both underload and overload resorption as well as equilibrium and bone growth states. The created model is used to predict the stress-stimulated change in callus density. Permanent and intermittent loading programs are considered. The analyses indicate that obtaining sufficiently high values of the callus density (and hence the elasticity modulus) is only possible using time-varying load parameters. The model predictions also show that an intermittent loading program causes delayed callus healing. Understanding how mechanical conditions influence the callus remodelling process may be relevant to bone fracture treatment and initial bone loading during rehabilitation.
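    The underload resorption, lazy-zone equilibrium, growth and overload resorption regimes described above can be sketched as a piecewise density-rate law in a mechanical stimulus S (e.g. strain energy density per unit density). The thresholds and rate constants below are hypothetical, not the paper's values.

```python
# Piecewise remodelling law: d(rho)/dt as a function of the stimulus S.

def density_rate(stimulus, s_low=0.3, s_high=1.0, s_over=2.0, rate=0.1):
    if stimulus < s_low:
        return -rate * (s_low - stimulus)      # underload resorption
    if stimulus <= s_high:
        return 0.0                             # equilibrium (lazy zone)
    if stimulus <= s_over:
        return rate * (stimulus - s_high)      # bone growth
    return -rate * (stimulus - s_over)         # overload resorption
```

Integrating this rate over a loading history (permanent or intermittent) yields the predicted evolution of callus density.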

  17. Parametric studies on droplet generation reproducibility for applications with biological relevant fluids

    PubMed Central

    Eichler, Marko; Römer, Robert; Grodrian, Andreas; Lemke, Karen; Nagel, Krees; Klages, Claus‐Peter; Gastrock, Gunter

    2017-01-01

    Although the great potential of droplet-based microfluidic technologies for routine applications in industry and academia has been successfully demonstrated over the past years, their inherent potential is not yet fully exploited. In particular, droplet generation reproducibility and stability, two pivotally important parameters for successful applications, still need improvement. This is even more important when droplets are created to investigate tissue fragments or cell cultures (e.g. suspended cells or 3D cell cultures) over days or even weeks. In this study we present microfluidic chips composed of a plasma-coated polymer, which allow surfactant-free, highly reproducible and stable droplet generation from fluids like cell culture media. We demonstrate how different microfluidic designs and different flow rates (and flow rate ratios) affect the reproducibility of the droplet generation process and display the applicability for a wide variety of bio(techno)logically relevant media. PMID:29399017

  18. Flux estimation of the FIFE planetary boundary layer (PBL) with 10.6 micron Doppler lidar

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1990-01-01

    A method is devised for calculating wind, momentum, and other flux parameters that characterize the planetary boundary layer (PBL) and thereby facilitate the calibration of spaceborne vs. in situ flux estimates. Single Doppler lidar data are used to estimate the variance of the mean wind and the covariance related to the vertical fluxes of horizontal momentum. The skewness of the vertical velocity and the range of kinetic energy dissipation are also estimated, and the surface heat flux is determined by means of a statistical Navier-Stokes equation. The conclusions show that the PBL structure combines both 'bottom-up' and 'top-down' processes, suggesting that the relevant parameters for the atmospheric boundary layer should be revised. The conclusions are of significant interest to the modeling techniques used in General Circulation Models as well as to flux estimation.
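In its simplest discrete form, the covariance-based momentum flux mentioned above reduces to the standard eddy-covariance formula; the sketch below (with invented sample values) computes the kinematic flux <u'w'> from paired velocity samples:

```python
def eddy_covariance(u, w):
    """Kinematic momentum flux <u'w'>: the covariance of the fluctuations
    of horizontal (u) and vertical (w) velocity about their record means."""
    n = len(u)
    u_bar = sum(u) / n
    w_bar = sum(w) / n
    return sum((ui - u_bar) * (wi - w_bar) for ui, wi in zip(u, w)) / n

# Perfectly correlated samples give a positive flux,
# anticorrelated samples a negative (downward momentum) flux.
flux_pos = eddy_covariance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
flux_neg = eddy_covariance([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```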

  19. Liquid-phase microextraction combined with graphite furnace atomic absorption spectrometry: A review.

    PubMed

    de la Calle, Inmaculada; Pena-Pereira, Francisco; Lavilla, Isela; Bendicho, Carlos

    2016-09-14

    An overview of the combination of liquid-phase microextraction (LPME) techniques with graphite furnace atomic absorption spectrometry (GFAAS) is reported herein. The high sensitivity of GFAAS is significantly enhanced by its association with a variety of miniaturized solvent extraction approaches. LPME-GFAAS thus represents a powerful combination for the determination of metals, metalloids and organometallic compounds at (ultra)trace level. The different LPME modes used with GFAAS are briefly described, and the experimental parameters that have an impact on these microextraction processes are discussed. Special attention is paid to the parameters affecting GFAAS analysis. The main issues encountered when coupling LPME and GFAAS, as well as the strategies reported in the literature to solve them, are summarized. Relevant applications published on the topic so far are included. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Transport spectroscopy of induced superconductivity in the three-dimensional topological insulator HgTe

    NASA Astrophysics Data System (ADS)

    Wiedenmann, Jonas; Liebhaber, Eva; Kübert, Johannes; Bocquillon, Erwann; Burset, Pablo; Ames, Christopher; Buhmann, Hartmut; Klapwijk, Teun M.; Molenkamp, Laurens W.

    2017-10-01

    The proximity-induced superconducting state in the three-dimensional topological insulator HgTe has been studied using electronic transport of a normal metal-superconducting point contact as a spectroscopic tool (Andreev point-contact spectroscopy). By analyzing the conductance as a function of voltage for various temperatures, magnetic fields, and gate voltages, we find evidence, in equilibrium, for an induced order parameter in HgTe of 70 µeV and a niobium order parameter of 1.1 meV. To understand the full conductance curve as a function of applied voltage we suggest a non-equilibrium-driven transformation of the quantum transport process where the relevant scattering region and equilibrium reservoirs change with voltage. This change implies that the spectroscopy probes the superconducting correlations at different positions in the sample, depending on the bias voltage.

  1. Aerodynamical Probation Of Semi-Industrial Production Plant For Centrifugal Dust Collectors’ Efficiency Research

    NASA Astrophysics Data System (ADS)

    Buligin, Y. I.; Zharkova, M. G.; Alexeenko, L. N.

    2017-01-01

    In previous studies, experiments were carried out on small-scale models of cyclonic units; the semi-industrial pilot plant "Cyclone" has now been completed, allowing comparative testing of real samples of differently shaped centrifugal dust collectors and comparison of their efficiency. This original research plant has been patented by the authors. The aim of the study is to improve the efficiency of the exhaust-gas collection process by creating improved designs of centrifugal dust collectors that allow their constructive parameters to be regulated depending on the properties and characteristics of the dust-laden gas flow. The objectives of the study include identifying and studying the relationship between the constructive parameters of cyclonic apparatus and their aerodynamic characteristics and dust-collecting efficiency. The work is highly relevant, especially for the future practical application of its results in dust-removal technology.

  2. [Clinical relevance of periodic limb movements during sleep in obstructive sleep apnea patients].

    PubMed

    Iriarte, J; Alegre, M; Irimia, P; Urriza, J; Artieda, J

    The periodic limb movements disorder (PLMD) is frequently associated with obstructive sleep apnea syndrome (OSAS), but the prevalence and clinical relevance of this association have not been studied in detail. The objectives were to carry out a prospective study of the prevalence of PLMD in patients with OSAS and to correlate this association with clinical and respiratory parameters. Forty-two patients diagnosed with OSAS, without clinical suspicion of PLMD, underwent a polysomnographic study. Clinical symptoms and signs were evaluated with a structured questionnaire, and respiratory parameters were obtained from the nocturnal study. Periodic limb movements were found in 10 patients (24%). There were no differences in clinical parameters between the two groups (with and without periodic limb movements). However, respiratory parameters were significantly worse in patients without PLMD. PLMD is very frequent in patients with OSAS and may contribute to worsening clinical signs and symptoms in these patients independently of respiratory parameters.

  3. Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca

    2017-11-01

    Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
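The idea of detecting a hidden parameter can be illustrated with a toy residual test: fit the dimensionless response on the known Pi group, then check whether the residuals still correlate with a candidate variable. This is a simplified stand-in for the authors' sufficient-dimension-reduction hypothesis test, with entirely synthetic data:

```python
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(0)
# Synthetic "physical law": the response depends on the known dimensionless
# group pi1 AND on a hidden variable h that dimensional analysis missed.
pi1 = [random.uniform(1.0, 2.0) for _ in range(200)]
h = [random.uniform(0.0, 1.0) for _ in range(200)]
y = [p + 0.5 * hi for p, hi in zip(pi1, h)]

# Least-squares fit of y on pi1 alone, then inspect the residuals.
b = pearson(pi1, y) * (statistics.stdev(y) / statistics.stdev(pi1))
a = statistics.mean(y) - b * statistics.mean(pi1)
resid = [yi - (a + b * pi) for yi, pi in zip(y, pi1)]

# A strong residual correlation with h flags a missing (hidden) parameter.
r = pearson(resid, h)
```

If the pi-group list were complete, the residuals would be pure noise and `r` would be near zero; here the leftover dependence exposes the lurking variable.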

  4. Prediction of the Fate of Organic Compounds in the Environment From Their Molecular Properties: A Review

    PubMed Central

    Mamy, Laure; Patureau, Dominique; Barriuso, Enrique; Bedos, Carole; Bessac, Fabienne; Louchart, Xavier; Martin-Laurent, Fabrice; Miege, Cecile; Benoit, Pierre

    2015-01-01

    A comprehensive review of quantitative structure-activity relationships (QSAR) allowing the prediction of the fate of organic compounds in the environment from their molecular properties was done. The considered processes were water dissolution, dissociation, volatilization, retention on soils and sediments (mainly adsorption and desorption), degradation (biotic and abiotic), and absorption by plants. A total of 790 equations involving 686 structural molecular descriptors are reported to estimate 90 environmental parameters related to these processes. A significant number of equations was found for the dissociation process (pKa), water dissolution or hydrophobic behavior (especially through the KOW parameter), adsorption to soils and biodegradation. A lack of QSAR was observed to estimate desorption or potential of transfer to water. Among the 686 molecular descriptors, five were found to be dominant in the 790 collected equations and the most generic ones: four quantum-chemical descriptors, the energy of the highest occupied molecular orbital (EHOMO) and the energy of the lowest unoccupied molecular orbital (ELUMO), polarizability (α) and dipole moment (μ), and one constitutional descriptor, the molecular weight. Keeping in mind that the combination of descriptors belonging to different categories (constitutional, topological, quantum-chemical) leads to improved QSAR performance, these descriptors should be considered for the development of new QSAR for further predictions of environmental parameters. This review also facilitates finding the relevant QSAR equations to predict the fate of a wide diversity of compounds in the environment. PMID:25866458
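A QSAR of the kind reviewed here is, at its simplest, a multiple linear regression of an environmental parameter on molecular descriptors. The sketch below fits an invented toy data set using the five dominant descriptors named above (all numeric values are fabricated for illustration, not taken from the review):

```python
import numpy as np

# Toy QSAR: predict log KOW from five common descriptors (values invented).
# Columns: E_HOMO (eV), E_LUMO (eV), polarizability, dipole moment, MW.
X = np.array([
    [-9.1, 0.4, 10.2, 1.8, 128.2],
    [-8.7, 0.1, 12.5, 2.4, 156.2],
    [-9.5, 0.9,  8.3, 0.0,  92.1],
    [-8.9, 0.3, 11.1, 1.2, 142.0],
    [-9.3, 0.7,  9.0, 2.9, 108.1],
    [-8.5, 0.0, 13.4, 0.5, 178.2],
])
y = np.array([3.1, 3.9, 2.7, 3.5, 2.9, 4.4])  # invented log KOW values

# Ordinary least squares with an intercept column appended.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
```

A real QSAR would of course be trained and validated on many more compounds, with descriptor selection and cross-validation.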

  5. Prediction of the Fate of Organic Compounds in the Environment From Their Molecular Properties: A Review.

    PubMed

    Mamy, Laure; Patureau, Dominique; Barriuso, Enrique; Bedos, Carole; Bessac, Fabienne; Louchart, Xavier; Martin-Laurent, Fabrice; Miege, Cecile; Benoit, Pierre

    2015-06-18

    A comprehensive review of quantitative structure-activity relationships (QSAR) allowing the prediction of the fate of organic compounds in the environment from their molecular properties was done. The considered processes were water dissolution, dissociation, volatilization, retention on soils and sediments (mainly adsorption and desorption), degradation (biotic and abiotic), and absorption by plants. A total of 790 equations involving 686 structural molecular descriptors are reported to estimate 90 environmental parameters related to these processes. A significant number of equations was found for the dissociation process (pKa), water dissolution or hydrophobic behavior (especially through the KOW parameter), adsorption to soils and biodegradation. A lack of QSAR was observed to estimate desorption or potential of transfer to water. Among the 686 molecular descriptors, five were found to be dominant in the 790 collected equations and the most generic ones: four quantum-chemical descriptors, the energy of the highest occupied molecular orbital (EHOMO) and the energy of the lowest unoccupied molecular orbital (ELUMO), polarizability (α) and dipole moment (μ), and one constitutional descriptor, the molecular weight. Keeping in mind that the combination of descriptors belonging to different categories (constitutional, topological, quantum-chemical) leads to improved QSAR performance, these descriptors should be considered for the development of new QSAR for further predictions of environmental parameters. This review also facilitates finding the relevant QSAR equations to predict the fate of a wide diversity of compounds in the environment.

  6. Portable kit for the assessment of gait parameters in daily telerehabilitation.

    PubMed

    Giansanti, Daniele; Morelli, Sandra; Maccioni, Giovanni; Grigioni, Mauro

    2013-03-01

    When designing a complete process of daily telerehabilitation, it should be borne in mind that patients should be furnished with properly designed methodologies for executing specific motion tasks and assessing the relevant parameters. In general, such a process should comprise three basic elements in both the hospital and the home: (a) instrumented walkways, (b) walking aids or supports, and (c) equipment for the assessment of parameters. The objective of this study, with gait as the focus, was thus to design a simple, portable kit (as an alternative to the complex and expensive instruments currently used) to be easily interfaced or integrated with the instrumented walkways and aids/supports, both for self-monitoring while patients exercise with their own aids and for clinical reporting. The proposed system is a portable kit that furnishes useful parameters with feedback to both the patient and the trainer/therapist. Capable of being integrated with the most common mechanical tools used in motion rehabilitation (handrail, scales, walkways, etc.), it constantly monitors and quantitatively assesses progress in rehabilitation care. It is composed of a step counter, photo-emitter detectors, a central unit for collecting and processing the telemetrically transmitted data, and a software interface. The system has been successfully validated on 16 subjects at the second level of the Tinetti test in a clinical application for both the home and the hospital. The portable kit can be used with different rehabilitation tools and on ground of varying rugosity. Advantages include (a) very low cost compared with optoelectronic solutions or other portable devices, (b) very high accuracy, also for subjects with imbalance problems, compared with other commercial solutions, and (c) integration (compatibility) with any rehabilitative tool.

  7. Accounting for Parameter Uncertainty in Complex Atmospheric Models, With an Application to Greenhouse Gas Emissions Evaluation

    NASA Astrophysics Data System (ADS)

    Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.

    2016-12-01

    In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
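The core idea of emulation, replacing an expensive simulator with a cheap surrogate fitted to a handful of training runs, can be sketched with a polynomial surrogate; the "simulator" here is a stand-in function, not NAME, and the authors' actual method is a more sophisticated model reduction:

```python
import numpy as np

def expensive_simulator(theta):
    """Stand-in for a costly forward model run (e.g. one transport-model
    simulation at parameter value theta); invented for illustration."""
    return np.sin(theta) + 0.1 * theta ** 2

# A small set of training runs, then a cheap cubic-polynomial emulator.
train_theta = np.linspace(0.0, 3.0, 7)
train_out = np.array([expensive_simulator(t) for t in train_theta])
emulator = np.poly1d(np.polyfit(train_theta, train_out, deg=3))

# The emulator approximates the simulator response between training points,
# so parameter sweeps or MCMC can query it instead of the full model.
err = abs(emulator(1.7) - expensive_simulator(1.7))
```

A full statistical emulator (e.g. a Gaussian process) would also supply an estimate of its own approximation uncertainty, which matters when the emulator feeds a Bayesian inversion.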

  8. Optimal Cytoplasmic Transport in Viral Infections

    PubMed Central

    D'Orsogna, Maria R.; Chou, Tom

    2009-01-01

    For many viruses, the ability to infect eukaryotic cells depends on their transport through the cytoplasm and across the nuclear membrane of the host cell. During this journey, viral contents are biochemically processed into complexes capable of both nuclear penetration and genomic integration. We develop a stochastic model of viral entry that incorporates all relevant aspects of transport, including convection along microtubules, biochemical conversion, degradation, and nuclear entry. Analysis of the nuclear infection probabilities in terms of the transport velocity, degradation, and biochemical conversion rates shows how certain values of key parameters can maximize the nuclear entry probability of the viral material. The existence of such “optimal” infection scenarios depends on the details of the biochemical conversion process and implies potentially counterintuitive effects in viral infection, suggesting new avenues for antiviral treatment. Such optimal parameter values provide a plausible transport-based explanation of the action of restriction factors and of experimentally observed optimal capsid stability. Finally, we propose a new interpretation of how genetic mutations unrelated to the mechanism of drug action may nonetheless confer novel types of overall drug resistance. PMID:20046829
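The existence of an optimal transport parameter can be illustrated with a stripped-down toy model: the viral particle must be biochemically converted during transit while escaping degradation, so the entry probability peaks at an intermediate velocity. All rates and distances below are invented, and this is far simpler than the authors' stochastic model:

```python
import math

def entry_probability(v, d=10.0, k_conv=0.5, k_deg=0.2):
    """Toy nuclear-entry probability: over the transit time t = d/v the
    particle must be converted (rate k_conv) and must avoid degradation
    (rate k_deg), assuming independent exponential processes."""
    t = d / v
    return math.exp(-k_deg * t) * (1.0 - math.exp(-k_conv * t))

# Scan transport velocities: too fast leaves no time for conversion,
# too slow risks degradation, so the probability peaks in between.
vs = [0.1 * i for i in range(1, 200)]
best_v = max(vs, key=entry_probability)
```

This captures the qualitative point of the abstract: intermediate parameter values maximize nuclear entry, which is why drugs shifting the rates in either direction can reduce infection.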

  9. Essential Annotation Schema for Ecology (EASE)—A framework supporting the efficient data annotation and faceted navigation in ecology

    PubMed Central

    Eichenberg, David; Liebergesell, Mario; König-Ries, Birgitta; Wirth, Christian

    2017-01-01

    Ecology has become a data-intensive science over the last decades, one which often relies on the reuse of data in cross-experimental analyses. However, finding data which qualify for reuse in a specific context can be challenging. It requires good-quality metadata and annotations as well as efficient search strategies. To date, full text search (often on the metadata only) is the most widely used search strategy, although it is known to be inaccurate. Faceted navigation provides a filter mechanism based on fine-granular metadata, categorizing search objects along numeric and categorical parameters relevant for their discovery. Selecting from these parameters during a full text search creates a system of filters which allows the results to be refined and improved towards greater relevance. We developed a framework for efficient annotation and faceted navigation in ecology. It consists of an XML schema for storing the annotation of search objects and is accompanied by a vocabulary focused on ecology to support the annotation process. The framework consolidates ideas which originate from widely accepted metadata standards, textbooks, scientific literature, and vocabularies, as well as from expert knowledge contributed by researchers from ecology and adjacent disciplines. PMID:29023519
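The filter mechanism behind faceted navigation, exact match on categorical facets plus range match on numeric facets, can be sketched in a few lines. The records, facet names, and values below are hypothetical, not taken from the EASE schema or vocabulary:

```python
# Hypothetical annotated search objects with categorical and numeric facets.
datasets = [
    {"id": "d1", "biome": "forest", "year": 2009, "taxon": "plant"},
    {"id": "d2", "biome": "grassland", "year": 2015, "taxon": "insect"},
    {"id": "d3", "biome": "forest", "year": 2016, "taxon": "insect"},
]

def facet_filter(records, categorical=None, numeric_ranges=None):
    """Apply a system of facet filters: exact match on categorical facets,
    inclusive range match on numeric facets. Records missing a requested
    facet are excluded."""
    categorical = categorical or {}
    numeric_ranges = numeric_ranges or {}
    out = []
    for r in records:
        if any(r.get(k) != v for k, v in categorical.items()):
            continue
        if any(not (lo <= r.get(k, float("nan")) <= hi)
               for k, (lo, hi) in numeric_ranges.items()):
            continue
        out.append(r)
    return out

hits = facet_filter(datasets, {"biome": "forest"}, {"year": (2010, 2020)})
```

Each additional facet selection narrows the hit list, which is exactly how the filters refine a full text search towards more relevant results.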

  10. Lignin Depolymerization with Nitrate-Intercalated Hydrotalcite Catalysts

    DOE PAGES

    Kruger, Jacob S.; Cleveland, Nicholas S.; Zhang, Shuting; ...

    2016-01-13

    Hydrotalcites (HTCs) exhibit multiple adjustable parameters to tune catalytic activity, including interlayer anion composition, metal hydroxide layer composition, and catalyst preparation methods. Here we report the influence of several of these parameters on β-O-4 bond scission in a lignin model dimer, 2-phenoxy-1-phenethanol (PE), to yield phenol and acetophenone. We find that the presence of both basic and NO3– anions in the interlayer increases the catalyst activity 2-3-fold. In contrast, other anions or transition metals do not enhance catalytic activity in comparison to blank HTC. The catalyst is not active for C–C bond cleavage on lignin model dimers and has no effect on dimers without an α-OH group. Most importantly, the catalyst is highly active in the depolymerization of two process-relevant lignin substrates, producing a significant amount of low-molecular-weight aromatic species. The catalyst can be recycled until the NO3– anions are depleted, after which the activity can be restored by replenishing the NO3– reservoir and regenerating the hydrated HTC structure. These results demonstrate a route to selective lignin depolymerization in a heterogeneous system with an inexpensive, earth-abundant, commercially relevant, and easily regenerated catalyst.

  11. Experimental quantum simulations of many-body physics with trapped ions.

    PubMed

    Schneider, Ch; Porras, Diego; Schaetz, Tobias

    2012-02-01

    Direct experimental access to some of the most intriguing quantum phenomena is not granted due to the lack of precise control of the relevant parameters in their naturally intricate environment. Their simulation on conventional computers is impossible, since quantum behaviour arising from superposition states or entanglement is not efficiently translatable into the classical language. However, one could gain deeper insight into complex quantum dynamics by experimentally simulating the quantum behaviour of interest in another quantum system, in which the relevant parameters and interactions can be controlled and robust effects detected sufficiently well. Systems of trapped ions provide unique control of both the internal (electronic) and external (motional) degrees of freedom. The mutual Coulomb interaction between the ions allows for large interaction strengths at comparatively large mutual ion distances, enabling individual control and readout. Systems of trapped ions therefore represent a prominent platform in several physical disciplines, for example quantum information processing or metrology. Here we give an overview of different ion trapping techniques as well as implementations for the coherent manipulation of their quantum states, and discuss the related theoretical basics. We then report on the experimental and theoretical progress in simulating quantum many-body physics with trapped ions and present current approaches for scaling up to more ions and higher-dimensional systems.

  12. A methodological approach to characterize the resilience of aquatic ecosystems with application to Lake Annecy, France

    NASA Astrophysics Data System (ADS)

    Pinault, J.-L.; Berthier, F.

    2007-01-01

    We propose a methodological approach to characterize the resilience of aquatic ecosystems with respect to the evolution of environmental parameters, as well as their aptitude to adapt to forcings. This method, applied here to Lake Annecy, France, proceeds in three stages. First, according to depth, variations of physicochemical parameters versus time are separated into three components related to (1) energy transfer through the surface of the lake, (2) the flow of rivers and springs that feed the lake, and (3) the long-term evolution of the benthic zone as a consequence of mineral and organic matter loads. Second, the dynamics of the lake are deduced by analyzing the physicochemical parameter components related to the three boundary conditions. Third, a stochastic process associated with the transfer models is used to characterize the resilience of the lake under forcings. For Lake Annecy, whose dynamics are representative of oligotrophic stratified lakes controlled by decarbonation processes, with turnover and mixing occurring once a year in winter, the major consequence is the impoverishment of dissolved oxygen in deep water in autumn due to a temperature increase of the surface water in summer. The simulation raises relevant questions about whether a connection exists between physicochemical parameters and global warming, and whether such a connection could induce harmful consequences for water quality and biodiversity in deep water. This methodological approach is general, since it does not use any physical conceptual model to predict the hydrosystem behavior but works directly from observed data.

  13. Atmospheric new particle formation at the research station Melpitz, Germany: connection with gaseous precursors and meteorological parameters

    NASA Astrophysics Data System (ADS)

    Größ, Johannes; Hamed, Amar; Sonntag, André; Spindler, Gerald; Elina Manninen, Hanna; Nieminen, Tuomo; Kulmala, Markku; Hõrrak, Urmas; Plass-Dülmer, Christian; Wiedensohler, Alfred; Birmili, Wolfram

    2018-02-01

    This paper revisits the atmospheric new particle formation (NPF) process in the polluted Central European troposphere, focusing on the connection with gas-phase precursors and meteorological parameters. Observations were made at the research station Melpitz (former East Germany) between 2008 and 2011 involving a neutral cluster and air ion spectrometer (NAIS). Particle formation events were classified by a new automated method based on the convolution integral of particle number concentration in the diameter interval 2-20 nm. To study the relevance of gaseous sulfuric acid as a precursor for nucleation, a proxy was derived on the basis of direct measurements during a 1-month campaign in May 2008. As a major result, the number concentration of freshly produced particles correlated significantly with the concentration of sulfur dioxide as the main precursor of sulfuric acid. The condensation sink, a factor potentially inhibiting NPF events, played a subordinate role only. The same held for experimentally determined ammonia concentrations. The analysis of meteorological parameters confirmed the absolute need for solar radiation to induce NPF events and demonstrated the presence of significant turbulence during those events. Due to its tight correlation with solar radiation, however, an independent effect of turbulence for NPF could not be established. Based on the diurnal evolution of aerosol, gas-phase, and meteorological parameters near the ground, we further conclude that the particle formation process is likely to start in elevated parts of the boundary layer rather than near ground level.
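The band-integrated number concentration that underlies the event classification can be sketched as a trapezoidal integral of a discretized size distribution over the 2-20 nm interval; the event criterion below is a hypothetical stand-in for the authors' convolution-integral method:

```python
import math

def band_number_concentration(size_dist, d_lo=2e-9, d_hi=20e-9):
    """Integrate a discretized size distribution dN/dlogDp over the
    2-20 nm nucleation-mode band (trapezoidal rule in log-diameter).
    size_dist is a sorted list of (diameter_m, dN/dlogDp) pairs."""
    pts = [(d, c) for d, c in size_dist if d_lo <= d <= d_hi]
    total = 0.0
    for (d1, c1), (d2, c2) in zip(pts, pts[1:]):
        total += 0.5 * (c1 + c2) * (math.log10(d2) - math.log10(d1))
    return total

def is_npf_event(series, threshold):
    """Hypothetical classifier: flag a day as an NPF event when the band
    concentration exceeds a threshold for at least a quarter of the record."""
    return sum(1 for n in series if n > threshold) >= len(series) // 4
```

A real classification additionally uses the growth of the mode over time, not just an exceedance count, but the banded integral is the common starting point.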

  14. An unscaled parameter to measure the order of surfaces: a new surface elaboration to increase cells adhesion.

    PubMed

    Bigerelle, M; Anselme, K; Dufresne, E; Hardouin, P; Iost, A

    2002-08-01

    We present a new parameter to quantify the order of a surface. This parameter is scale-independent and can be used to compare the organization of a surface at different scales of range and amplitude. To test the accuracy of this roughness parameter against the hundred or so existing ones, we created an original statistical bootstrap method. In order to assess the physical relevance of this new parameter, we prepared a large number of surfaces with various roughness amplitudes on titanium and titanium-based alloys using different physical processes. We then studied the influence of the roughness amplitude on in vitro adhesion and proliferation of human osteoblasts. It was shown that our new parameter discriminates the cell adhesion phenomena better than other parameters (average roughness Ra, etc.): cells adhere better on isotropic surfaces with a low order, provided this order is quantified on a scale larger than that of the cells. Additionally, on these low-ordered metallic surfaces, the cells present the same morphological aspect as that seen on human bone trabeculae. The method used to prepare these isotropic surfaces (electroerosion) could easily be applied to prepare most biomaterials with complex geometries and to improve bone implant integration. Moreover, the new order parameter we developed may be particularly useful for the fundamental understanding of the mechanism of bone cell installation on a relief and of the formation of the bone cell-material interface.
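A bootstrap comparison of how well a roughness parameter separates two surface groups can be sketched as follows; the separation score and all sample values are invented, and this is only a crude analogue of the authors' statistical bootstrap method:

```python
import random
import statistics

def bootstrap_separation(group_a, group_b, n_boot=500, seed=1):
    """Bootstrap the comparison of a roughness parameter measured on two
    surface groups: the fraction of resamples in which group_b's mean
    exceeds group_a's mean (a crude discrimination score; 1.0 means the
    parameter separates the groups in every resample)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        a = [rng.choice(group_a) for _ in group_a]
        b = [rng.choice(group_b) for _ in group_b]
        if statistics.mean(b) > statistics.mean(a):
            wins += 1
    return wins / n_boot

# Well-separated groups score 1.0; identical groups hover around 0.5.
sep = bootstrap_separation([1.0, 1.1, 0.9, 1.05, 0.95],
                           [2.0, 2.1, 1.9, 2.05, 1.95])
```

Running this score for each candidate parameter over the same surfaces gives a simple way to rank their discriminating power, which is the spirit of the comparison in the abstract.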

  15. MOCAT: A Metagenomics Assembly and Gene Prediction Toolkit

    PubMed Central

    Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R.; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer

    2012-01-01

    MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/. PMID:23082188

  16. MOCAT: a metagenomics assembly and gene prediction toolkit.

    PubMed

    Kultima, Jens Roat; Sunagawa, Shinichi; Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer

    2012-01-01

    MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/.

  17. Mathematical Modeling of Ammonia Electro-Oxidation on Polycrystalline Pt Deposited Electrodes

    NASA Astrophysics Data System (ADS)

    Diaz Aldana, Luis A.

    The ammonia electrolysis process has been proposed as a feasible way to electrochemically generate fuel-grade hydrogen (H2). Ammonia is identified as one of the most suitable energy carriers due to its high hydrogen density and its safe and efficient distribution chain. Moreover, the fact that this process can be applied even with low-concentration ammonia feedstock opens its application to wastewater treatment along with H2 co-generation. In the ammonia electrolysis process, ammonia is electro-oxidized on the anode side to produce N2, while H2 is evolved from water reduction at the cathode. The thermodynamic energy requirement is just five percent of the energy used in hydrogen production from water electrolysis. However, the absence of a complete understanding of the reaction mechanism and kinetics involved in ammonia electro-oxidation has not yet allowed the full commercialization of this process. For that reason, a kinetic model that can be trusted in the design and scale-up of the ammonia electrolyzer needs to be developed. This research focused on the elucidation of the reaction mechanism and kinetic parameters for ammonia electro-oxidation. The definition of the most relevant elementary reaction steps was obtained through the parallel analysis of experimental data and the development of a mathematical model of ammonia electro-oxidation in a well-defined hydrodynamic system, the rotating disk electrode (RDE). Ammonia electro-oxidation to N2 as the final product was concluded to be a slow, surface-confined process in which parallel reactions leading to deactivation of the catalyst are present. Through this work it was possible to define a reaction mechanism and values of the kinetic parameters for ammonia electro-oxidation that allow an accurate representation of the experimental observations in an RDE system. Additionally, the validity of the reaction mechanism and kinetic parameters was further supported by process scale-up, performance evaluation, and hydrodynamic analysis in a flow-cell electrolyzer. An adequate simulation of the flow electrolyzer performance was accomplished using the obtained kinetic parameters.
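Electrode kinetics of the kind fitted in this work are commonly built on the Butler-Volmer expression for current density versus overpotential; the sketch below uses generic, invented parameter values and deliberately omits the surface-coverage and deactivation terms that the dissertation's mechanism includes:

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def butler_volmer(eta, j0=1e-3, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Butler-Volmer current density (A/cm^2, with j0 in A/cm^2) as a
    function of overpotential eta (V): anodic minus cathodic branch.
    Parameter values are generic placeholders, not fitted constants."""
    f = F / (R * T)
    return j0 * (math.exp(alpha_a * f * eta)
                 - math.exp(-alpha_c * f * eta))
```

Fitting `j0` and the transfer coefficients to RDE voltammetry at several rotation rates, after correcting for mass transport, is the standard route to the kind of kinetic parameters described above.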

  18. Relationship between peroxyacetyl nitrate and nitrogen oxides in the clean troposphere

    NASA Technical Reports Server (NTRS)

    Singh, H. B.; Salas, L. J.; Ridley, B. A.; Shetter, J. D.; Donahue, N. M.

    1985-01-01

    The first study is presented in which the mixing ratios of peroxyacetyl nitrate (PAN) and nitrogen oxides, as well as those of peroxypropionyl nitrate, O3, and relevant meteorological parameters, were measured concurrently at a location that receives clean, continental air. The results show that, in clean conditions, nitrogen oxides present in the form of PAN can be as abundant as, or more abundant than, the inorganic form. In addition, PAN can be an important source of peroxyacetyl radicals, which may be important to oxidation processes in both the gas and liquid phases.

  19. On magnetohydrodynamic thermal instabilities in magnetic flux tubes. [in plane parallel stellar atmosphere in LTE and hydrostatic equilibrium]

    NASA Technical Reports Server (NTRS)

    Massaglia, S.; Ferrari, A.; Bodo, G.; Kalkofen, W.; Rosner, R.

    1985-01-01

    The stability of current-driven filamentary modes in magnetic flux tubes embedded in a plane-parallel atmosphere in LTE and in hydrostatic equilibrium is discussed. Within the tube, only radiative energy transport is considered. The dominant contribution to the opacity is due to H- ions and H atoms (in the Paschen continuum). A region in the parameter space of the equilibrium configuration in which the instability is effective is delimited, and the relevance of this process for the formation of structured coronae in late-type stars and accretion disks is discussed.

  20. Molecular profiles to biology and pathways: a systems biology approach.

    PubMed

    Van Laere, Steven; Dirix, Luc; Vermeulen, Peter

    2016-06-16

    Interpreting molecular profiles in a biological context requires specialized analysis strategies. Initially, lists of relevant genes were screened to identify enriched concepts associated with pathways or specific molecular processes. However, the shortcomings of interpreting gene lists using predefined sets of genes have resulted in the development of novel methods that rely heavily on network-based concepts. These algorithms have the advantage that they allow a more holistic view of the signaling properties of the condition under study and that they are suitable for integrating different data types such as gene expression, gene mutation, and even histological parameters.

  1. Toward understanding dynamic annealing processes in irradiated ceramics

    NASA Astrophysics Data System (ADS)

    Myers, Michael Thomas

    High energy particle irradiation inevitably generates defects in solids in the form of collision cascades. The ballistic formation and thermalization of cascades occur rapidly and are believed to be reasonably well understood. However, knowledge of the evolution of defects after damage cascade thermalization, referred to as dynamic annealing, is quite limited. Unraveling the mechanisms associated with dynamic annealing is crucial since such processes play an important role in the formation of stable post-irradiation disorder in ion-beam-processed semiconductors and determine the "radiation tolerance" of many nuclear materials. The purpose of this dissertation is to further our understanding of the processes involved in dynamic annealing. In order to achieve this, two main tasks are undertaken. First, the effects of dynamic annealing are investigated in ZnO, a technologically relevant material that exhibits very high dynamic defect annealing at room temperature. Such high dynamic annealing leads to unusual defect accumulation in heavy ion bombarded ZnO. Through this work, the puzzling features observed more than a decade ago in ion-channeling spectra have finally been explained. We show that the presence of a polar surface substantially alters damage accumulation. Non-polar surface terminations of ZnO are shown to exhibit enhanced dynamic annealing compared to polar surface terminated ZnO. Additionally, we demonstrate one method to reduce radiation damage in polar surface terminated ZnO by means of a surface modification. These results advance our efforts toward the long-sought goal of understanding complex radiation damage processes in ceramics. Second, a pulsed-ion-beam method is developed and demonstrated in the case of Si as a prototypical non-metallic target. This method is shown to be a novel experimental technique for the direct extraction of dynamic annealing parameters. 
The relaxation times and effective diffusion lengths of mobile defects during the dynamic annealing process play a vital role in damage accumulation. We demonstrate that these parameters dominate the formation of stable post-irradiation disorder. In Si, a defect lifetime of ˜6 ms and a characteristic defect diffusion length of ˜30 nm are measured. These results should nucleate future pulsed-beam studies of dynamic defect interaction processes in technologically relevant materials. In particular, understanding the length- and time-scales of defect interactions is essential for extending laboratory findings to nuclear material lifetimes and to the time-scales of geological storage of nuclear waste.

  2. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners

    PubMed Central

    Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.

    2015-01-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
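The Bayesian recalibration step can be illustrated with a minimal Metropolis sampler. The one-compartment clearance model, synthetic data, and proposal settings below are hypothetical stand-ins, not the paper's three-compartment lung model or its priors:

```python
import math
import random

def log_likelihood(k, data, sigma=0.05):
    # Toy one-compartment model: retained fraction after time t is exp(-k * t),
    # with Gaussian measurement error of standard deviation sigma.
    ll = 0.0
    for t, obs in data:
        pred = math.exp(-k * t)
        ll += -0.5 * ((obs - pred) / sigma) ** 2
    return ll

def metropolis(data, n_steps=5000, k0=0.5, step=0.05, seed=1):
    """Random-walk Metropolis sampler for the clearance rate k (k > 0)."""
    random.seed(seed)
    k, ll = k0, log_likelihood(k0, data)
    samples = []
    for _ in range(n_steps):
        cand = k + random.gauss(0.0, step)
        if cand > 0:  # positivity constraint on the clearance rate
            cand_ll = log_likelihood(cand, data)
            if math.log(random.random()) < cand_ll - ll:  # accept/reject
                k, ll = cand, cand_ll
        samples.append(k)
    return samples

# Synthetic "retention" observations generated with true k = 0.3 (illustration only).
data = [(t, math.exp(-0.3 * t)) for t in (1, 2, 4, 8)]
samples = metropolis(data)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The chain's post-burn-in spread plays the role of the parameter uncertainty assessment described above; a population analysis additionally layers inter-individual variability over such individual-level likelihoods.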

  3. Influences of surface charge, size, and concentration of colloidal nanoparticles on fabrication of self-organized porous silica in film and particle forms.

    PubMed

    Nandiyanto, Asep Bayu Dani; Suhendi, Asep; Arutanti, Osi; Ogi, Takashi; Okuyama, Kikuo

    2013-05-28

    Studies on the preparation of porous materials have attracted tremendous attention because the existence of pores can give a material excellent performance. However, current preparation reports describe successful production of porous materials with only partial information on the charges, interactions, sizes, and compositions of the template and host materials. In this report, the influences of self-assembly parameters (i.e., surface charge, size, and concentration of colloidal nanoparticles) on self-organized porous material fabrication were investigated. Silica nanoparticles (as a host material) and polystyrene (PS) spheres (as a template) were combined to produce self-assembled porous materials in film and particle forms. The experimental results showed that the porous structure and pore size were controllable and depended strongly on the self-assembly parameters. Materials containing highly ordered pores were effectively created only when process parameters fell within appropriate conditions (i.e., PS surface charge ≤ -30 mV; silica-to-PS size ratio ≤0.078; and silica-to-PS mass ratio of about 0.50). The investigation of the self-assembly parameter landscape was also completed using geometric considerations. Because optimization of these parameters provides significant information for practical uses, the results of this report could be relevant to other functional properties.

  4. Analysis on pseudo excitation of random vibration for structure of time flight counter

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Li, Dapeng

    2015-03-01

    Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The Pseudo Excitation Method (PEM) is an effective method for the calculation of random vibration. To capture the complicated, coupled random vibration of a rocket or shuttle launch, a new staged white-noise mathematical model is deduced from the practical launch environment. This model is applied with the PEM to a specific structure, the Time-of-Flight Counter (ToFC). The power spectral density responses and the relevant dynamic characteristic parameters of the ToFC are obtained at the flight acceptance test level. Considering the stiffness of the fixture structure, random vibration experiments were conducted in three directions for comparison with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without damage, and the key dynamical parameters of the ToFC are obtained. The revised PEM agrees with the random vibration experiments in both dynamical parameters and responses, as shown by the comparative results; the maximum error is within 9%. The sources of error are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures during rocket or shuttle launch.
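The core PEM idea can be sketched for a single-degree-of-freedom oscillator: replace a random input of PSD S(w) by the deterministic harmonic pseudo-excitation sqrt(S(w))*e^{iwt}, and the squared magnitude of the pseudo-response is the response PSD. The oscillator parameters and white-noise input below are illustrative, not those of the ToFC or the staged launch model:

```python
import math

def response_psd_sdof(S_in, omegas, omega_n, zeta):
    """Pseudo Excitation Method sketch for a unit-mass single-DOF oscillator:
    x'' + 2*zeta*omega_n*x' + omega_n^2*x = f(t).
    For each frequency w, apply the pseudo-excitation sqrt(S_in(w))*e^{iwt};
    the response PSD at w is |pseudo-response|^2 = |H(w)|^2 * S_in(w)."""
    out = []
    for w in omegas:
        # Frequency response function H(w) of the oscillator
        H = 1.0 / complex(omega_n ** 2 - w ** 2, 2 * zeta * omega_n * w)
        pseudo = math.sqrt(S_in(w))  # pseudo-excitation amplitude
        y = H * pseudo               # pseudo-response (complex)
        out.append(abs(y) ** 2)      # response PSD at w
    return out

# White-noise input of intensity S0 = 1 (illustrative), lightly damped oscillator
omegas = [i * 0.5 for i in range(1, 41)]           # 0.5 .. 20 rad/s
psd = response_psd_sdof(lambda w: 1.0, omegas, omega_n=10.0, zeta=0.02)
```

At resonance (w = omega_n) the result reduces to S0 / (2*zeta*omega_n^2)^2, which is the expected closed-form value and a convenient sanity check on the pseudo-excitation algebra.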

  5. Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks

    PubMed Central

    Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin

    2018-01-01

    Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. 
These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444

  6. Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks.

    PubMed

    Ruiz-Rizzo, Adriana L; Neitzel, Julia; Müller, Hermann J; Sorg, Christian; Finke, Kathrin

    2018-01-01

    Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's "theory of visual attention" (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. 
These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity.

  7. Quantitative interpretations of Visible-NIR reflectance spectra of blood.

    PubMed

    Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H

    2008-10-27

    This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The new model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of a thorough solution of the scattering and absorption problem in order to accurately resolve the optically relevant parameters of blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, wavelength-dependent refractive indices of the basic chemical constituents of the blood culture components are needed. Second, multi-wavelength measurements are required, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom (i.e., the number of optically relevant parameters) of the blood culture system. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples spanning an extensive range of the relevant parameters.

  8. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen [Knoxville, TN; Kalinin, Sergei V [Knoxville, TN

    2010-08-17

    Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.
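The band-excitation signal itself, a waveform with predefined amplitude and phase content confined to a chosen frequency band, can be synthesized by inverse Fourier transform. The sketch below uses flat unit amplitude and random phase in-band, an illustrative choice rather than the patent's specification:

```python
import cmath
import math
import random

def band_excitation_signal(n, fs, f_lo, f_hi, seed=0):
    """Synthesize a real time-domain signal whose spectrum has unit amplitude
    and randomized phase inside [f_lo, f_hi] Hz and is zero outside.
    This is one common band-excitation construction; amplitude/phase choices
    here are illustrative."""
    random.seed(seed)
    spectrum = [0j] * n
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            phase = random.uniform(0.0, 2.0 * math.pi)
            spectrum[k] = cmath.exp(1j * phase)
            spectrum[n - k] = spectrum[k].conjugate()  # Hermitian symmetry -> real signal
    # Inverse DFT (naive O(n^2); adequate for a short illustration)
    signal = []
    for t in range(n):
        s = sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n))
        signal.append((s / n).real)
    return signal

# 256-sample excitation at fs = 1 kHz, confined to the 100-200 Hz band
sig = band_excitation_signal(n=256, fs=1000.0, f_lo=100.0, f_hi=200.0)
```

Exciting a probe with such a waveform and transforming the measured response back to the frequency domain yields the response over the whole band in one shot, which is what allows the relevant dynamic parameters to be extracted per pixel.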

  9. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V

    2013-05-28

    Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.

  10. Containerless processing of undercooled melts

    NASA Technical Reports Server (NTRS)

    Perepezko, J. H.

    1993-01-01

    The investigation focused on the control of microstructural evolution in Mn-Al, Fe-Ni, Ni-V, and Au-Pb-Sb alloys through the high undercooling levels provided by containerless processing, and provided fundamental new information on the control of nucleation. Solidification analysis was conducted by means of thermal analysis, x-ray diffraction, and metallographic characterization on samples processed in a laboratory scale drop tube system. The Mn-Al alloy system offers a useful model system with the capability of phase separation on an individual particle basis, thus permitting a more complete understanding of the operative kinetics and the key containerless processing variables. This system provided the opportunity to analyze the nucleation rate as a function of processing conditions and allowed for the quantitative assessment of the relevant processing parameters. These factors are essential in the development of a containerless processing model with predictive capability. Similarly, Ni-V is a model system that was used to study duplex partitionless solidification, a structure possible only in high-undercooling solidification processes. Nucleation kinetics for the competing bcc and fcc phases were studied to determine how this structure can develop and the conditions under which it may occur. The Fe-Ni alloy system was studied to identify microstructural transitions with controlled variations in sample size and composition during containerless solidification. This work was extended to develop a microstructure map which delineates regimes of structural evolution and provides a unified analysis of experimental observations. The Au-Pb-Sb system was investigated to characterize the thermodynamic properties of the undercooled liquid phase and to characterize the glass transition under a variety of processing conditions. 
By analyzing key containerless processing parameters in a ground-based drop tube study, a carefully designed flight experiment may be planned to utilize the extended duration microgravity conditions of orbiting spacecraft.

  11. Isolation and preservation of peripheral blood mononuclear cells for analysis of islet antigen-reactive T cell responses: position statement of the T-Cell Workshop Committee of the Immunology of Diabetes Society.

    PubMed

    Mallone, R; Mannering, S I; Brooks-Worrell, B M; Durinovic-Belló, I; Cilio, C M; Wong, F S; Schloot, N C

    2011-01-01

    Autoimmune T cell responses directed against insulin-producing β cells are central to the pathogenesis of type 1 diabetes (T1D). Detection of such responses is therefore critical to provide novel biomarkers for T1D 'immune staging' and to understand the mechanisms underlying the disease. While different T cell assays are being developed for these purposes, it is important to optimize and standardize methods for processing human blood samples for these assays. To this end, we review data relevant to critical parameters in peripheral blood mononuclear cell (PBMC) isolation, (cryo)preservation, distribution and usage for detecting antigen-specific T cell responses. Based on these data, we propose recommendations on processing blood samples for T cell assays and identify gaps in knowledge that need to be addressed. These recommendations may be relevant not only for the analysis of T cell responses in autoimmune disease, but also in cancer and infectious disease, particularly in the context of clinical trials. © 2010 The Authors. Clinical and Experimental Immunology © 2010 British Society for Immunology.

  12. Cellular Particle Dynamics simulation of biomechanical relaxation processes of multi-cellular systems

    NASA Astrophysics Data System (ADS)

    McCune, Matthew; Kosztin, Ioan

    2013-03-01

    Cellular Particle Dynamics (CPD) is a theoretical-computational-experimental framework for describing and predicting the time evolution of biomechanical relaxation processes of multi-cellular systems, such as fusion, sorting and compression. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through numerical integration of their equations of motion. Here we present CPD simulation results for the fusion of both spherical and cylindrical multi-cellular aggregates. First, we calibrate the relevant CPD model parameters for a given cell type by comparing the CPD simulation results for the fusion of two spherical aggregates to the corresponding experimental results. Next, CPD simulations are used to predict the time evolution of the fusion of cylindrical aggregates. The latter is relevant for the formation of tubular multi-cellular structures (i.e., primitive blood vessels) created by the novel bioprinting technology. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
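The cellular-particle interactions can be illustrated with a generic short-range pair force (repulsive core for excluded volume, attractive tail for adhesion) and an overdamped update of the equations of motion. The Lennard-Jones form and all parameter values here are illustrative stand-ins for the actual CPD potential:

```python
import math
import random

def pair_force(r, r0=1.0, eps=1.0):
    """Signed force magnitude along the pair axis for a Lennard-Jones-like
    interaction: positive = repulsive (r < ~1.12*r0), negative = attractive.
    Used here only as a stand-in for the CPD short-range potential."""
    sr6 = (r0 / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def step(positions, dt=1e-3, gamma=1.0, kT=0.1):
    """One overdamped (Langevin) 2D update: dx = (F / gamma) * dt + thermal noise."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            f = pair_force(r)
            fx, fy = f * dx / r, f * dy / r
            forces[i][0] -= fx; forces[i][1] -= fy  # equal and opposite forces
            forces[j][0] += fx; forces[j][1] += fy
    noise = math.sqrt(2.0 * kT * dt / gamma)
    return [[p[0] + forces[k][0] / gamma * dt + random.gauss(0.0, noise),
             p[1] + forces[k][1] / gamma * dt + random.gauss(0.0, noise)]
            for k, p in enumerate(positions)]

# Two particles inside the repulsive core move apart over one deterministic step
positions = step([[0.0, 0.0], [0.9, 0.0]], dt=1e-4, kT=0.0)
```

Iterating such steps for all cellular particles of all cells is, in essence, how the spatial conformation of the multicellular system evolves during simulated fusion; the real framework calibrates its interaction parameters against fusion experiments as described above.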

  13. Comparative Analysis on Nonlinear Models for Ron Gasoline Blending Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Aguilera, R. Carreño; Yu, Wen; Rodríguez, J. C. Tovar; Mosqueda, M. Elena Acevedo; Ortiz, M. Patiño; Juarez, J. J. Medel; Bautista, D. Pacheco

    The blending process is inherently nonlinear and difficult to model, since it may change significantly depending on the components and the process variables of each refinery. Different components can be blended depending on the existing stock, and the chemical characteristics of each component change dynamically; all are blended until the specification required by the customer is reached for several properties. One of the most relevant properties is the octane number, which is difficult to control in line (without component storage). Since each refinery process is quite different, a generic gasoline blending model is not useful when in-line blending is to be done in a specific process. A mathematical gasoline blending model is presented in this paper for a given process, described in state space as a basic description of the gasoline blending process. The objective is to adjust the parameters so that the blending model can track a signal along its trajectory, representing it both with the extreme learning machine neural-network method and with the nonlinear autoregressive-moving average (NARMA) neural-network method, such that a comparative study can be developed.
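An extreme learning machine of the kind compared here fixes random input weights and solves only the output layer by least squares. A minimal sketch on a toy one-dimensional target; the network size, weight ranges, and target function are illustrative, not taken from the paper:

```python
import math
import random

def elm_train(xs, ys, n_hidden=20, seed=0):
    """Extreme learning machine sketch: random input weights and biases,
    tanh hidden layer, output weights beta solved from the (ridge-stabilized)
    normal equations by Gaussian elimination with partial pivoting."""
    random.seed(seed)
    w = [random.uniform(-2.0, 2.0) for _ in range(n_hidden)]
    b = [random.uniform(-2.0, 2.0) for _ in range(n_hidden)]
    H = [[math.tanh(w[j] * x + b[j]) for j in range(n_hidden)] for x in xs]
    eps = 1e-8  # tiny ridge term for numerical stability
    A = [[sum(H[i][r] * H[i][c] for i in range(len(xs))) + (eps if r == c else 0.0)
          for c in range(n_hidden)] for r in range(n_hidden)]
    rhs = [sum(H[i][r] * ys[i] for i in range(len(xs))) for r in range(n_hidden)]
    for col in range(n_hidden):                       # forward elimination
        piv = max(range(col, n_hidden), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n_hidden):
            f = A[r][col] / A[col][col]
            for c in range(col, n_hidden):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n_hidden
    for r in range(n_hidden - 1, -1, -1):             # back substitution
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c]
                                for c in range(r + 1, n_hidden))) / A[r][r]
    return w, b, beta

def elm_predict(model, x):
    w, b, beta = model
    return sum(beta[j] * math.tanh(w[j] * x + b[j]) for j in range(len(beta)))

# Toy target standing in for a measured blend-property trajectory
xs = [i / 20.0 for i in range(-40, 41)]
model = elm_train(xs, [math.sin(x) for x in xs])
```

The appeal for in-line blending is speed: training reduces to one linear solve, so the model can be refit as component characteristics drift.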

  14. Effects of Coherence and Relevance on Shallow and Deep Text Processing.

    ERIC Educational Resources Information Center

    Lehman, Stephen; Schraw, Gregory

    2002-01-01

    Examines the effects of coherence and relevance on shallow and deeper text processing, testing the hypothesis that enhancing the relevance of text segments compensates for breaks in local and global coherence. Results reveal that breaks in local coherence had no effect on any outcome measures, whereas relevance enhanced deeper processing.…

  15. Simple Heat Treatment for Production of Hot-Dip Galvanized Dual Phase Steel Using Si-Al Steels

    NASA Astrophysics Data System (ADS)

    Equihua-Guillén, F.; García-Lara, A. M.; Muñíz-Valdes, C. R.; Ortíz-Cuellar, J. C.; Camporredondo-Saucedo, J. E.

    2014-01-01

    This work presents relevant metallurgical considerations for producing galvanized dual phase steels from low-cost aluminum-silicon steels made by continuous strip processing. Two steels with different Si and Al contents were rapidly austenitized in the two-phase ferrite + austenite (α + γ) field to obtain dual phase steels suitable for the hot-dip galvanizing process under typical parameters of a continuous annealing processing line. Dual phase tensile properties were obtained from specimens cooled to a temperature below Ar3, held for 3 min, cooled to an intermediate temperature above Ar1, and quenched in a Zn bath at 465 °C. The results show the typical microstructure and tensile properties of galvanized dual phase steels. Finally, the synergistic effect of aluminum, silicon, and residual chromium on the martensite start temperature (Ms), critical cooling rate (CR), volume fraction of martensite, and tensile properties has been studied.

  16. Effects of in-sewer processes: a stochastic model approach.

    PubMed

    Vollertsen, J; Nielsen, A H; Yang, W; Hvitved-Jacobsen, T

    2005-01-01

    Transformations of organic matter, nitrogen and sulfur in sewers can be simulated taking into account the relevant transformation and transport processes. One objective of such simulation is the assessment and management of hydrogen sulfide formation and corrosion. Sulfide is formed in the biofilms and sediments of the water phase, but corrosion occurs on the moist surfaces of the sewer gas phase. Consequently, both phases and the transport of volatile substances between these phases must be included. Furthermore, wastewater composition and transformations in sewers are complex and subject to high, natural variability. This paper presents the latest developments of the WATS model concept, allowing integrated aerobic, anoxic and anaerobic simulation of the water phase and of gas phase processes. The resulting model is complex and with high parameter variability. An example applying stochastic modeling shows how this complexity and variability can be taken into account.

  17. Maximum efficiency of the collisional Penrose process

    NASA Astrophysics Data System (ADS)

    Zaslavskii, O. B.

    2016-09-01

    We consider the collision of two particles that move in the equatorial plane near a general stationary rotating axially symmetric extremal black hole. One of the particles is critical (with fine-tuned parameters) and moves in the outward direction. The second particle (usual, not fine-tuned) comes from infinity. We examine the efficiency η of the collisional Penrose process. There are two relevant cases here: a particle falling into a black hole after collision (i) is heavy or (ii) has a finite mass. We show that the maximum of η in case (ii) is less than or equal to that in case (i). It is argued that for superheavy particles, the bound applies to nonequatorial motion as well. As an example, we analyze collision in the Kerr-Newman background. When the bound is the same for processes (i) and (ii), η = 3 for this metric. For the Kerr black hole, recent results in the literature are reproduced.

  18. Characterization of Nanoparticle Release from Surface Coatings by the Simulation of a Sanding Process

    PubMed Central

    Göhler, Daniel; Stintz, Michael; Hillemann, Lars; Vorbau, Manuel

    2010-01-01

    Nanoparticles are used in industrial and domestic applications to control customized product properties. However, there are several uncertainties concerning possible hazards to health, safety, and the environment. Hence, it is necessary to search for methods to analyze the particle release from typical application processes. Based on a survey of commercial sanding machines, the relevant sanding process parameters were employed in the design of a miniature sanding test setup in a particle-free environment for the quantification of nanoparticle release into air from surface coatings. The released particles were carried by a defined airflow to a fast mobility particle sizer and other aerosol measurement equipment to enable the determination of released particle numbers in addition to the particle size distribution. First results revealed a strong impact of the coating material on the swarf mass and the number of released particles. PMID:20696941

  19. Determination of Parameters for the Supercritical Extraction of Antioxidant Compounds from Green Propolis Using Carbon Dioxide and Ethanol as Co-Solvent

    PubMed Central

    Barreto, Gabriele de Abreu; Costa, Samantha Serra; Silva, Rejane Pina Dantas; da Silva, Danielle Figuerêdo; Brandão, Hugo Neves; da Rocha, José Luiz Carneiro; Nunes, Silmar Baptista; Umsza-Guez, Marcelo Andres

    2015-01-01

    The aim of this study was to determine the best processing conditions for extracting Brazilian green propolis using supercritical extraction technology. For this purpose, the influence of different parameters was evaluated, such as S/F (solvent mass in relation to solute mass), percentage of co-solvent (1 and 2% ethanol), temperature (40 and 50°C) and pressure (250, 350 and 400 bar), using supercritical carbon dioxide. The Global Yield Isotherms (GYIs) were obtained through the evaluation of the yield, and the chemical composition of the extracts was determined with respect to total phenolic compounds, flavonoids, antioxidant activity, 3,5-diprenyl-4-hydroxycinnamic acid (Artepillin C) and 4-hydroxycinnamic acid (p-coumaric acid). The best results were obtained at 50°C, 350 bar, 1% ethanol (co-solvent) and an S/F of 110. Under these conditions, contents of 8.93±0.01 and 0.40±0.05 g/100 g of Artepillin C and p-coumaric acid, respectively, were identified, indicating the efficiency of the extraction process. Despite the low yield of the process, the extracts obtained had high contents of relevant compounds, proving the viability of the process for obtaining green propolis extracts with important biological applications due to their composition. PMID:26252491

  20. Process quality engineering for bioreactor-driven manufacturing of tissue-engineered constructs for bone regeneration.

    PubMed

    Papantoniou Ir, Ioannis; Chai, Yoke Chin; Luyten, Frank P; Schrooten Ir, Jan

    2013-08-01

    The incorporation of Quality-by-Design (QbD) principles into tissue-engineering bioprocess development toward clinical use will ensure that manufactured constructs possess the prerequisite quality characteristics, addressing emerging regulatory requirements and ensuring functional in vivo behavior. In this work, QbD principles were applied to a manufacturing process step for the in vitro production of osteogenic three-dimensional (3D) hybrid scaffolds that involves cell matrix deposition on a 3D titanium (Ti) alloy scaffold. An osteogenic cell source (human periosteum-derived cells) cultured in a bioinstructive medium was used to functionalize regular Ti scaffolds in a perfusion bioreactor, resulting in an osteogenic hybrid carrier. A two-level three-factor fractional factorial design of experiments was employed to explore a range of production-relevant process conditions by simultaneously changing the levels of the following parameters: flow rate (0.5-2 mL/min), cell culture duration (7-21 days), and cell-seeding density (1.5×10(3)-3×10(3) cells/cm(2)). This approach made it possible to evaluate the individual impact of these process parameters on key quality attributes of the produced hybrids, such as collagen production, mineralization level, and cell number. The fractional factorial design approach helped create a design space in which hybrid scaffolds of predefined quality attributes may be robustly manufactured while minimizing the number of required experiments.
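
A two-level three-factor fractional factorial design like the one described above can be sketched in a few lines. The factor names and level values below are taken from the abstract; the choice of the aliasing generator (C = AB, giving a 2^(3-1) half-fraction of 4 runs instead of 8) is an illustrative assumption, not the authors' actual design matrix.

```python
from itertools import product

# Low/high levels per factor, taken from the abstract.
levels = {
    "flow_rate_mL_min": (0.5, 2.0),
    "culture_days": (7, 21),
    "seeding_cells_cm2": (1.5e3, 3.0e3),
}

def half_fraction(levels):
    """Generate a 2^(3-1) half-fraction: two base factors in full
    factorial, the third aliased via the generator C = AB."""
    names = list(levels)
    runs = []
    for a, b in product((-1, +1), repeat=2):
        coded = (a, b, a * b)  # generator C = AB
        runs.append({n: levels[n][(x + 1) // 2] for n, x in zip(names, coded)})
    return runs

design = half_fraction(levels)
for run in design:
    print(run)
```

Only four bioreactor runs are needed to estimate the main effects, at the cost of confounding each main effect with a two-factor interaction.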

  1. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    NASA Astrophysics Data System (ADS)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, a situation known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variation of 25%) using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE (Generalized Likelihood Uncertainty Estimation) methodology were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model, and then to analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River initially exhibits equifinality. However, it was possible to narrow the parameter ranges and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed, equivalent to reducing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool that assists the modeller in identifying critical parameters.
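
The core GLUE procedure referenced above (Monte Carlo sampling, a likelihood measure, a behavioural threshold, ensemble quantiles as uncertainty bounds) can be sketched compactly. The one-parameter runoff model, the Nash-Sutcliffe threshold of 0.8 and all numerical values below are illustrative assumptions, not the Chillan River model or the paper's data.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def toy_model(rain, k):
    """Placeholder one-parameter rainfall-runoff model (illustrative)."""
    return [k * r for r in rain]

random.seed(1)
rain = [10, 30, 5, 50, 20]
obs = [4.1, 12.3, 2.0, 19.8, 8.2]  # synthetic "observed" runoff

# GLUE: sample the parameter space, keep sets above the behavioural threshold
samples = [random.uniform(0.1, 1.0) for _ in range(2000)]
scored = [(k, nse(obs, toy_model(rain, k))) for k in samples]
behavioural = [(k, L) for k, L in scored if L > 0.8]

ks = sorted(k for k, _ in behavioural)
lower, upper = ks[int(0.05 * len(ks))], ks[int(0.95 * len(ks))]
print(f"{len(behavioural)} behavioural sets, k in [{lower:.3f}, {upper:.3f}]")
```

Narrowing the behavioural range (e.g. by fixing a parameter from soft data, as done for areal precipitation in the paper) directly shrinks the resulting uncertainty bounds.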

  2. Analysis of large-scale tablet coating: Modeling, simulation and experiments.

    PubMed

    Boehling, P; Toschkoff, G; Knop, K; Kleinebudde, P; Just, S; Funke, A; Rehbaum, H; Khinast, J G

    2016-07-30

    This work concerns a tablet coating process in an industrial-scale drum coater. We set up a full-scale Design of Simulation Experiment (DoSE) using the Discrete Element Method (DEM) to investigate the influence of various process parameters (the spray rate, the number of nozzles, the rotation rate and the drum load) on the coefficient of inter-tablet coating variation (cv,inter). The coater was filled with up to 290 kg of material, equivalent to 1,028,369 tablets. To mimic the tablet shape, the glued-sphere approach was followed, and each modeled tablet consisted of eight spheres. We simulated the process via the eXtended Particle System (XPS), showing that it is possible to accurately simulate the tablet coating process on the industrial scale. The process time required to reach a uniform tablet coating was extrapolated from the simulated data and was in good agreement with experimental results. The results are provided at various levels of detail, ranging from a thorough investigation of the influence of the process parameters on cv,inter and the number of tablets that visit the spray zone during the simulated 90 s, to the tablet velocity in the spray zone and the spray and bed cycle times. It was found that increasing the number of nozzles and decreasing the spray rate had the greatest influence on cv,inter. Although increasing the drum load and the rotation rate increased the tablet velocity, it did not have a relevant influence on cv,inter or the process time. Copyright © 2015 Elsevier B.V. All rights reserved.
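
The response variable in the study above, cv,inter, is simply the relative standard deviation of coating mass across the tablets in a batch. A minimal sketch with synthetic data (all numbers invented, not from the paper's DEM runs) illustrates why spreading the spray over more nozzles lowers it:

```python
import random
import statistics

def cv_inter(coating_masses):
    """Coefficient of inter-tablet coating variation:
    standard deviation of per-tablet coating mass over its mean."""
    return statistics.stdev(coating_masses) / statistics.mean(coating_masses)

random.seed(0)
# Synthetic per-tablet coating masses (mg) for two hypothetical settings:
# more nozzles at a lower spray rate -> same mean mass, narrower spread.
few_nozzles = [random.gauss(10.0, 2.0) for _ in range(10_000)]
many_nozzles = [random.gauss(10.0, 0.8) for _ in range(10_000)]

print(f"cv_inter, few nozzles:  {cv_inter(few_nozzles):.3f}")
print(f"cv_inter, many nozzles: {cv_inter(many_nozzles):.3f}")
```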

  3. Optimal experimental design for parameter estimation of a cell signaling model.

    PubMed

    Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias

    2009-11-01

    Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.

  4. Characterize Eruptive Processes at Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Valentine

    2001-12-20

    This Analysis/Model Report (AMR), ''Characterize Eruptive Processes at Yucca Mountain, Nevada'', presents information about natural volcanic systems and the parameters that can be used to model their behavior. This information is used to develop parameter-value distributions appropriate for analysis of the consequences of volcanic eruptions through a potential repository at Yucca Mountain. Many aspects of this work are aimed at resolution of the Igneous Activity Key Technical Issue (KTI) as identified by the Nuclear Regulatory Commission (NRC 1998, p. 3), Subissues 1 and 2, which address the probability and consequence of igneous activity at the proposed repository site, respectively. Within the framework of the Disruptive Events Process Model Report (PMR), this AMR provides information for the calculations in two other AMRs; parameters described herein are directly used in calculations in those reports and will be used in Total System Performance Assessment (TSPA). Compilation of this AMR was conducted as defined in the Development Plan, except as noted. The report begins with considerations of the geometry of volcanic feeder systems, which are of primary importance in predicting how much of a potential repository would be affected by an eruption. This is followed by a discussion of the physical and chemical properties of the magmas, which influence both eruptive styles and mechanisms for interaction with radioactive waste packages. Eruptive processes, including the ascent velocity of magma at depth, the onset of bubble nucleation and growth in the rising magmas, magma fragmentation, and the velocity of the resulting gas-particle mixture, are then discussed. The duration of eruptions, their power output, and mass discharge rates are also described. The next section summarizes geologic constraints regarding the interaction between magma and waste packages. Finally, the report discusses bulk grain size produced by relevant explosive eruptions and grain shapes.

  5. Comparison of start-up strategies and process performance during semi-continuous anaerobic digestion of sugarcane filter cake co-digested with bagasse.

    PubMed

    Janke, Leandro; Leite, Athaydes F; Nikolausz, Marcell; Radetski, Claudemir M; Nelles, Michael; Stinner, Walter

    2016-02-01

    The anaerobic digestion of sugarcane filter cake, and the option of co-digestion with bagasse, was investigated in a semi-continuous feeding regime to assess the main parameters used for large-scale process design. Moreover, fresh cattle manure was considered as an alternative inoculum for the start-up of biogas reactors in cases where digestate from a biogas plant is not available, as in remote rural areas. Experiments were carried out in 6 lab-scale semi-continuous stirred-tank reactors under mesophilic conditions (38±1°C) while the main anaerobic digestion process parameters were monitored. Fresh cattle manure proved to be appropriate for the start-up process, although an acclimation period was required due to the high initial volatile fatty acids concentration (8.5 g L(-1)). Mono-digestion of filter cake presented a 50% higher biogas yield (480 mL gVS(-1)) than co-digestion with bagasse (320 mL gVS(-1)) during steady-state conditions. Nevertheless, a large-scale co-digestion system would produce 58% more biogas (1008 m(3) h(-1)) than mono-digestion of filter cake (634 m(3) h(-1)) due to its higher biomass availability for biogas conversion. Considering that the biogas production rate was the technical parameter displaying the most relevant differences between the analyzed substrate options (0.99-1.45 m(3) biogas m(-3) d(-1)), the decision of which substrate option should be implemented in practice would mainly be driven by the available construction techniques, since economically efficient tanks could compensate for the lower biogas production rate of the co-digestion option. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Towards a consensus-based biokinetic model for green microalgae - The ASM-A.

    PubMed

    Wágner, Dorottya S; Valverde-Pérez, Borja; Sæbø, Mariann; Bregua de la Sotilla, Marta; Van Wagenen, Jonathan; Smets, Barth F; Plósz, Benedek Gy

    2016-10-15

    Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs require process models. Several models of different complexity have been developed to predict microalgal growth. However, none of these models can effectively describe all the relevant processes when microalgal growth is coupled with nutrient removal and recovery from wastewaters. Here, we present a mathematical model developed to simulate green microalgal growth (ASM-A) using the systematic approach of the activated sludge modelling (ASM) framework. The process model, identified based on a literature review and new experimental data, accounts for factors influencing photoautotrophic and heterotrophic microalgal growth, nutrient uptake and storage (i.e., the Droop model) and decay of microalgae. Model parameters were estimated from laboratory-scale batch and sequenced batch experiments using the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode, and its identifiability was assessed. The model can effectively describe microalgal biomass growth, ammonia and phosphate concentrations, as well as phosphorus storage, using a set of average parameter values estimated from the experimental data. A statistical analysis of simulated and measured data suggests that culture history and substrate availability can introduce significant variability in parameter values when predicting the reaction rates for the bulk nitrate and intracellularly stored nitrogen state variables, thereby requiring scenario-specific model calibration. ASM-A was identified using standard cultivation medium, and it can provide a platform for extensions accounting for factors influencing algal growth and nutrient storage using wastewater resources.
Copyright © 2016 Elsevier Ltd. All rights reserved.
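
The Droop (internal quota) kinetics cited above make growth depend on an intracellular nutrient quota rather than directly on the external concentration, which is what lets such models capture luxury uptake and storage. A minimal forward-Euler sketch follows; all parameter values are illustrative assumptions, not the ASM-A estimates.

```python
# Minimal Droop-model sketch: growth depends on the internal quota Q,
# not directly on the external nutrient concentration S.
# mu_max, q_min, rho_max, K_s are illustrative values, not ASM-A estimates.
mu_max = 1.2    # 1/d, maximum growth rate
q_min = 0.02    # gN/gX, minimum (subsistence) quota
rho_max = 0.1   # gN/gX/d, maximum uptake rate
K_s = 0.5       # gN/m3, uptake half-saturation

def step(X, Q, S, dt=0.01):
    rho = rho_max * S / (K_s + S)     # Michaelis-Menten uptake
    mu = mu_max * (1.0 - q_min / Q)   # Droop growth kinetics
    dX = mu * X
    dQ = rho - mu * Q                 # quota is diluted by growth
    dS = -rho * X
    return X + dX * dt, Q + dQ * dt, max(S + dS * dt, 0.0)

X, Q, S = 0.1, 0.04, 10.0  # biomass, internal quota, external nutrient
for _ in range(int(20 / 0.01)):  # simulate 20 days
    X, Q, S = step(X, Q, S)
print(f"biomass {X:.1f}, quota {Q:.4f}, nutrient {S:.3f}")
```

After the external nutrient is exhausted, biomass keeps growing on the stored quota until Q approaches q_min, the behaviour that distinguishes Droop kinetics from simple Monod growth.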

  7. The U.S. national nuclear forensics library, nuclear materials information program, and data dictionary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamont, Stephen Philip; Brisson, Marcia; Curry, Michael

    2011-02-17

    Nuclear forensics assessments to determine material process history require careful comparison of sample data with both measured and modeled nuclear material characteristics. Developing centralized databases, or nuclear forensics libraries, to house this information is an important step to ensure that all relevant data are available for comparison during a nuclear forensics analysis and to help expedite the assessment of material history. The approach most widely accepted by the international community at this time is the implementation of National Nuclear Forensics libraries, which would be developed and maintained by individual nations. This is an attractive alternative to an international database, since it provides an understanding that each country has data on materials produced and stored within its borders, but eliminates the need to reveal any proprietary or sensitive information to other nations. To support the concept of National Nuclear Forensics libraries, the United States Department of Energy has developed a model library based on a data dictionary, a set of parameters designed to capture all nuclear-forensic-relevant information about a nuclear material. Specifically, this information includes material identification, collection background and current location, analytical laboratories where measurements were made, material packaging and container descriptions, physical characteristics including mass and dimensions, chemical and isotopic characteristics, particle morphology or metallurgical properties, process history including facilities, and measurement quality assurance information. While not necessarily required, it may also be valuable to store modeled data sets, including reactor burn-up or enrichment cascade data, for comparison. It is fully expected that only a subset of this information will be available or relevant for many materials, and much of the data populating a National Nuclear Forensics library would be process analytical or material accountability measurement data as opposed to a complete forensic analysis of each material in the library.

  8. Kinetic energy density and agglomerate abrasion rate during blending of agglomerates into powders.

    PubMed

    Willemsz, Tofan A; Hooijmaijers, Ricardo; Rubingh, Carina M; Tran, Thanh N; Frijlink, Henderik W; Vromans, Herman; van der Voort Maarschalk, Kees

    2012-01-23

    Problems related to the blending of a cohesive powder with a free flowing bulk powder are frequently encountered in the pharmaceutical industry. The cohesive powder often forms lumps or agglomerates which are not dispersed during the mixing process and are therefore detrimental to blend uniformity. Achieving sufficient blend uniformity requires that the blending conditions are able to break up agglomerates, which is often an abrasion process. This study was based on the assumption that the abrasion rate of agglomerates determines the required blending time. It is shown that the kinetic energy density of the moving powder bed is a relevant parameter which correlates with the abrasion rate of agglomerates. However, aspects related to the strength of agglomerates should also be considered. For this reason the Stokes abrasion number (St(Abr)) has been defined. This parameter describes the ratio between the kinetic energy density of the moving powder bed and the work of fracture of the agglomerate. The St(Abr) number is shown to predict the abrasion potential of agglomerates in the dry-mixing process. It appeared possible to include effects of filler particle size and impeller rotational rate into this concept. A clear relationship between abrasion rate of agglomerates and the value of St(Abr) was demonstrated. Copyright © 2011 Elsevier B.V. All rights reserved.
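
The abstract defines the Stokes abrasion number as the ratio of the kinetic energy density of the moving powder bed to the agglomerate's work of fracture, which reduces to a one-line formula. The sketch below uses invented numerical values purely for illustration:

```python
def stokes_abrasion(bulk_density, bed_velocity, work_of_fracture):
    """St(Abr): kinetic energy density of the moving powder bed
    (0.5 * rho * v^2) divided by the agglomerate's work of fracture
    per unit volume. Dimensionless when both are in the same units."""
    kinetic_energy_density = 0.5 * bulk_density * bed_velocity ** 2
    return kinetic_energy_density / work_of_fracture

# Illustrative values (not from the paper): a faster-moving bed carries
# more kinetic energy density, so St(Abr) and the abrasion rate rise.
slow = stokes_abrasion(bulk_density=500.0, bed_velocity=0.2, work_of_fracture=100.0)
fast = stokes_abrasion(bulk_density=500.0, bed_velocity=0.8, work_of_fracture=100.0)
print(f"St(Abr) slow bed: {slow:.2f}, fast bed: {fast:.2f}")
```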

  9. Integrated modelling of crop production and nitrate leaching with the Daisy model.

    PubMed

    Manevski, Kiril; Børgesen, Christen D; Li, Xiaoxin; Andersen, Mathias N; Abrahamsen, Per; Hu, Chunsheng; Hansen, Søren

    2016-01-01

    An integrated modelling strategy was designed and applied to the Soil-Vegetation-Atmosphere Transfer model Daisy for simulation of crop production and nitrate leaching under a pedo-climatic and agronomic environment different from that of the model's original parameterisation. The points of significance and caution in the strategy are:
    • Model preparation should include detailed field data, owing to the high complexity of the soil and crop processes simulated with a process-based model, and should reflect the study objectives. Including interactions between parameters in a sensitivity analysis gives a better account of the impacts of measured variables on outputs.
    • Model evaluation on several independent data sets increases robustness, at least on coarser time scales such as month or year. It produces a valuable platform for adapting the model to new crops or for improving the existing parameter set. On the daily time scale, validation of highly dynamic variables such as soil water transport remains challenging.
    • Model application is demonstrated with relevance for scientists and regional managers.
    The integrated modelling strategy is applicable to other process-based models similar to Daisy. It is envisaged that the strategy establishes the model as a useful research and decision-making tool, and that it increases knowledge transferability, reproducibility and traceability.

  10. Slumped glass optics for x-ray telescopes: advances in the hot slumping assisted by pressure

    NASA Astrophysics Data System (ADS)

    Salmaso, B.; Brizzolari, C.; Basso, S.; Civitani, M.; Ghigo, M.; Pareschi, G.; Spiga, D.; Tagliaferri, G.; Vecchi, G.

    2015-09-01

    Slumped Glass Optics is a viable solution for building future X-ray telescopes. In our laboratories we use a direct hot slumping approach assisted by pressure, in which the glass optical surface is in contact with the mould and pressure is applied to enforce the replication of the mould shape on the glass optical surface. Several prototypes have already been produced and tested in X-rays, showing a continuous improvement in our technology. In this paper, we present the advances in our technology in terms of slumped glass foil quality and expected performance after an ideal integration. Using Eagle XG glass foils and Zerodur K20 for the slumping mould, we have fine-tuned several process parameters: we present a critical analysis correlating the changes in the process with the improvements in different spatial frequency ranges encompassing the profile and roughness measurements. The use of a re-polished K20 mould, together with the optimized process parameters, led to our latest result: glass foils with an expected performance of less than 3 arcsec in single reflection at 1 keV X-ray energy. This work presents all the relevant steps forward in the hot slumping technology assisted by pressure, aimed at reaching angular resolutions of 5 arcsec for the whole mirror assembly.

  11. Micropropagation of Prunus species relevant to cherry fruit production.

    PubMed

    Druart, Philippe

    2013-01-01

    Cherry tree micropropagation is limited to the production of healthy cultivars of Prunus avium and Prunus cerasus, and their rootstocks, mainly the dwarfing ones. Using meristem tips (0.1 mm long) or healthy shoot tips/nodes, four successive steps are needed to obtain whole plants capable of growing in the nursery: multiplication by axillary branching, shoot elongation, rooting, and plantlet acclimation. Throughout this process, several parameters have to be adjusted for each phase of the culture, including media composition, environmental culture conditions and plant handling. These parameters vary depending on genotypic response and specific vulnerability to physiological disorders such as hyperhydricity, apex necrosis, and unstable propagation and rooting rates. Based on forty years of experience in studying and applying culture conditions to large-scale plant production, this document summarizes the main problems (variability of the propagation rate, hyperhydricity, apex necrosis, plant re-growth) and the solutions found to address them, validated on many mericlones.

  12. Getting Astrophysical Information from LISA Data

    NASA Technical Reports Server (NTRS)

    Stebbins, R. T.; Bender, P. L.; Folkner, W. M.

    1997-01-01

    Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems, and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals except the isotropic backgrounds, and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.

  13. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single outcome or several related outcomes, with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality-adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios. This modeling approach can be used to better inform the initial trial design so that it is more likely to achieve clinical relevance. PMID:29888075
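
The combination of a Markov cohort model for quality-adjusted life expectancy with Monte Carlo draws over uncertain parameters, as described above, can be sketched in miniature. The states, transition probabilities, utilities and arm definitions below are all invented for illustration; they are not the paper's head-and-neck cancer model.

```python
import random

# Three-state Markov cohort model: Well -> Recurrence -> Dead (absorbing).
# All probabilities and utilities are invented for illustration.
def qale(p_recur, p_die_well=0.02, p_die_recur=0.15,
         u_well=0.85, u_recur=0.5, cycles=40):
    """Quality-adjusted life expectancy (QALYs) over `cycles` yearly cycles."""
    well, recur = 1.0, 0.0
    total = 0.0
    for _ in range(cycles):
        total += well * u_well + recur * u_recur
        well, recur = (well * (1 - p_recur - p_die_well),
                       well * p_recur + recur * (1 - p_die_recur))
    return total

random.seed(42)
# Monte Carlo over the uncertain recurrence probability in two trial arms:
# arm A (hypothetically more intensive treatment) has lower recurrence risk.
arm_a = [qale(random.uniform(0.05, 0.10)) for _ in range(5000)]
arm_b = [qale(random.uniform(0.08, 0.14)) for _ in range(5000)]
mean_a = sum(arm_a) / len(arm_a)
mean_b = sum(arm_b) / len(arm_b)
print(f"mean QALE arm A: {mean_a:.2f}, arm B: {mean_b:.2f}")
```

Comparing the full distributions of QALE across arms, rather than a single powered endpoint, is what lets this framing expose trade-offs between competing outcomes.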

  14. Electron-impact Multiple-ionization Cross Sections for Atoms and Ions of Helium through Zinc

    NASA Astrophysics Data System (ADS)

    Hahn, M.; Müller, A.; Savin, D. W.

    2017-12-01

    We compiled a set of electron-impact multiple-ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas; for example, it can be important if there is a rapid change in the electron temperature or a non-thermal electron energy distribution, such as a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae with available experimental EIMI cross-section data. Based on this comparison, we interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for 3466 EIMI cross sections, together with the associated Maxwellian plasma rate coefficients. We also highlight some outstanding issues that remain to be resolved.
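
Turning a cross section into the Maxwellian plasma rate coefficient mentioned above means averaging σ(E)·v over the Maxwell-Boltzmann energy distribution, k(T) = (8/πmₑ)^(1/2) (k_BT)^(-3/2) ∫ σ(E) E e^(-E/k_BT) dE. The sketch below uses a generic ionization-like cross-section shape with an invented threshold and magnitude, not one of the compiled EIMI fits.

```python
import math

KB = 8.617333e-5   # Boltzmann constant, eV/K
ME = 9.109384e-31  # electron mass, kg
EV = 1.602177e-19  # J per eV

def sigma(E, threshold=50.0, peak=1e-18):
    """Illustrative ionization-like cross section (cm^2): zero below
    threshold, rising then slowly falling. Not a fitted EIMI form."""
    if E <= threshold:
        return 0.0
    return peak * math.log(E / threshold) / (E / threshold)

def rate_coefficient(T, n=20000, e_max=5000.0):
    """Maxwellian rate coefficient (cm^3/s) by midpoint-rule integration:
    k(T) = sqrt(8/(pi*me)) * (kB*T)^(-3/2) * Int sigma(E) E exp(-E/kB*T) dE."""
    kt = KB * T  # eV
    de = e_max / n
    integral = 0.0
    for i in range(n):
        E = (i + 0.5) * de  # energy grid midpoints, eV
        integral += sigma(E) * E * math.exp(-E / kt) * de
    # prefactor: convert eV -> J for dimensional terms, m/s -> cm/s (factor 1e2)
    pref = math.sqrt(8.0 / (math.pi * ME)) * (kt * EV) ** -1.5 * EV ** 2 * 1e2
    return pref * integral

for T in (1e5, 1e6, 1e7):
    print(f"T = {T:.0e} K: k = {rate_coefficient(T):.3e} cm^3/s")
```

Below threshold the exponential tail suppresses the rate strongly, which is why the rate coefficient climbs steeply with temperature until k_BT becomes comparable to the ionization threshold.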

  15. A model for acoustic vaporization dynamics of a bubble/droplet system encapsulated within a hyperelastic shell.

    PubMed

    Lacour, Thomas; Guédra, Matthieu; Valier-Brasier, Tony; Coulouvrat, François

    2018-01-01

    Nanodroplets have promising medical applications such as contrast imaging, embolotherapy, and targeted drug delivery. Their functions can be mechanically activated by means of focused ultrasound inducing a phase change of the inner liquid, known as the acoustic droplet vaporization (ADV) process. In this context, a four-phase (vapor + liquid + shell + surrounding environment) model of ADV is proposed. Attention is especially devoted to the mechanical properties of the encapsulating shell, incorporating the well-known strain-softening behavior of a Mooney-Rivlin material adapted to the very large deformations of soft, nearly incompressible materials. Various responses to ultrasound excitation are illustrated, depending on the linear and nonlinear mechanical shell properties and the acoustic excitation parameters. Different classes of ADV outcomes are exhibited, and a relevant threshold ensuring complete vaporization of the inner liquid layer is defined. The dependence of this threshold on acoustical, geometrical, and mechanical parameters is also provided.

  16. Communication: Vibrational relaxation of CO(1Σ) in collision with Ar(1S) at temperatures relevant to the hypersonic flight regime.

    PubMed

    Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus

    2017-03-21

    Vibrational energy relaxation (VER) of diatomics following collisions with the surrounding medium is an important elementary process for modeling high-temperature gas flow. VER is characterized by two parameters: the vibrational relaxation time τvib and the state relaxation rates. Here the vibrational relaxation of CO(ν=0←ν=1) in Ar is considered for validating a computational approach to determine the vibrational relaxation time parameter (pτvib) using an accurate, fully dimensional potential energy surface. For lower temperatures, comparison with experimental data shows very good agreement, whereas at higher temperatures (up to 25 000 K), comparisons with an empirically modified model due to Park confirm its validity for CO in Ar. Additionally, the calculations provide insight into the importance of Δν>1 transitions that are ignored in typical applications of the Landau-Teller framework.

  17. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data since no sensor system is perfect. With these considerations in mind the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Those areas of current and future analysis are highlighted which indicate the interplay between error analysis and possible truncations of the state and parameter spaces.

  18. STS-9 BET products

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Heck, M. L.; Mcconnell, J. G.; Henry, M. W.

    1984-01-01

    The final products generated for STS-9, which landed on December 8, 1983, are reported. The trajectory reconstruction utilized an anchor epoch of GMT corresponding to an initial altitude of h = 356 kft, selected in view of the limited tracking coverage available. The final state utilized IMU2 measurements and was based on processing radar tracking from six C-band stations and a single S-band station, plus six photo-theodolite cameras in the vicinity of Runway 17 at Edwards Air Force Base. The final atmosphere (FLAIR9/UN=581199C) was based on a composite of the remote measured data and the 1978 Air Force Reference Atmosphere model. The Extended BET is available as STS9BET/UN=274885C. The AEROBET and MMLE input files created are discussed. Plots of the more relevant parameters from the AEROBET (reel number NL0624) are included. Input parameters, final residual plots, a trajectory listing, and data archival information are defined.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko

    We present methods to compute higher orders in the threshold expansion for the one-loop production of a Higgs boson in association with two partons at hadron colliders. This process contributes to the N3LO Higgs production cross section beyond the soft-virtual approximation. We use reverse unitarity to expand the phase-space integrals in the small kinematic parameters and to reduce the coefficients of the expansion to a small set of master integrals. We describe two methods for the calculation of the master integrals. The first was introduced for the calculation of the soft triple-real radiation relevant to N3LO Higgs production. The second uses a particular factorization of the three-body phase-space measure and the knowledge of the scaling properties of the integral itself. Our result is presented as a Laurent expansion in the dimensional regulator, although some of the master integrals are computed to all orders in this parameter.

  20. Development of flat-plate solar collectors for the heating and cooling of buildings

    NASA Technical Reports Server (NTRS)

    Ramsey, J. W.; Borzoni, J. T.; Holland, T. H.

    1975-01-01

    The relevant design parameters in the fabrication of a solar collector for heating liquids were examined. The objective was to design, fabricate, and test a low-cost, flat-plate solar collector with high collection efficiency, high durability, and requiring little maintenance. Computer-aided math models of the heat transfer processes in the collector assisted in the design. The preferred physical design parameters were determined from a heat transfer standpoint and the absorber panel configuration, the surface treatment of the absorber panel, the type and thickness of insulation, and the number, spacing and material of the covers were defined. Variations of this configuration were identified, prototypes built, and performance tests performed using a solar simulator. Simulated operation of the baseline collector configuration was combined with insolation data for a number of locations and compared with a predicted load to determine the degree of solar utilization.
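
    The report's computer-aided heat-transfer models are not reproduced in the record; the standard flat-plate collector performance relation to which such models typically reduce is the Hottel-Whillier-Bliss equation (the symbols below are the conventional ones, not taken from the report):

```latex
\eta \;=\; F_R(\tau\alpha) \;-\; F_R U_L \,\frac{T_i - T_a}{G}
```

    where \(\eta\) is the instantaneous collection efficiency, \(F_R\) the heat-removal factor, \(\tau\alpha\) the effective transmittance-absorptance product of the covers and absorber, \(U_L\) the overall heat-loss coefficient, \(T_i\) the inlet fluid temperature, \(T_a\) the ambient temperature, and \(G\) the incident solar irradiance.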

  1. A Simple fMRI Compatible Robotic Stimulator to Study the Neural Mechanisms of Touch and Pain.

    PubMed

    Riillo, F; Bagnato, C; Allievi, A G; Takagi, A; Fabrizi, L; Saggio, G; Arichi, T; Burdet, E

    2016-08-01

    This paper presents a simple device for the investigation of the human somatosensory system with functional magnetic resonance imaging (fMRI). PC-controlled pneumatic actuation is employed to produce innocuous or noxious mechanical stimulation of the skin. Stimulation patterns are synchronized with fMRI and other relevant physiological measurements, such as electroencephalographic activity and vital physiological parameters. The system allows adjustable regulation of stimulation parameters and provides consistent patterns of stimulation. A validation experiment demonstrates that the system safely and reliably identifies clusters of functional activity in brain regions involved in the processing of pain. This new device is inexpensive, portable, easy to assemble and customizable to suit different experimental requirements. It provides robust and consistent somatosensory stimulation, which is of crucial importance for investigating the mechanisms of pain and their strong connection with the sense of touch.

  2. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery; topographic and geologic data; ground truth data; and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use the information obtained from aerial and ground data through an appropriate statistical sampling design.

  3. Optimization of an angle-beam ultrasonic approach for characterization of impact damage in composites

    NASA Astrophysics Data System (ADS)

    Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David

    2018-04-01

    Advances in NDE method development are greatly aided by model-guided experimentation. In the case of ultrasonic inspections, models which provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and inverting the experimental data into relevant information. However, models must also be verified using experimental data obtained under well-documented and understood conditions. Ideally, researchers would utilize the model simulations and experimental approach to efficiently converge on the optimal solution. However, variability in experimental parameters introduces extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted-for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, incidence beam angle and focal depth.

  4. An automated design process for short pulse laser driven opacity experiments

    DOE PAGES

    Martin, M. E.; London, R. A.; Goluoglu, S.; ...

    2017-12-21

    Stellar-relevant conditions can be reached by heating a buried layer target with a short pulse laser. Previous design studies of iron buried layer targets found that plasma conditions are dominantly controlled by the laser energy, while the accuracy of the inferred opacity is limited by tamper emission and optical depth effects. In this paper, we developed a process to simultaneously optimize laser and target parameters to meet a variety of design goals. We explored two sets of design cases: a set focused on conditions relevant to the upper radiative zone of the sun (electron temperatures of 200 to 400 eV and densities greater than 1/10 of solid density) and a set focused on reaching temperatures consistent with deep within the radiative zone of the sun (500 to 1000 eV) at a fixed density. We found optimized designs for iron targets and determined that the appropriate dopant, for inferring plasma conditions, depends on the goal temperature: magnesium for up to 300 eV, aluminum for 300 to 500 eV, and sulfur for 500 to 1000 eV. The optimal laser energy and buried layer thickness increase with goal temperature. The accuracy of the inferred opacity is limited to between 11% and 31%, depending on the design. Overall, short pulse laser heated iron experiments reaching stellar-relevant conditions have been designed with consideration of minimizing tamper emission and optical depth effects while meeting plasma condition and x-ray emission goals.
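
    The dopant recommendation above is a simple lookup by goal electron temperature; a minimal sketch (the function name and boundary handling are my own assumptions, not from the paper):

```python
def recommend_dopant(goal_temperature_ev: float) -> str:
    """Map a goal electron temperature (eV) to the dopant suggested in the
    abstract: Mg up to 300 eV, Al for 300-500 eV, S for 500-1000 eV."""
    if goal_temperature_ev <= 300:
        return "magnesium"
    if goal_temperature_ev <= 500:
        return "aluminum"
    if goal_temperature_ev <= 1000:
        return "sulfur"
    raise ValueError("outside the temperature range studied (<= 1000 eV)")
```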

  6. Endogenous Magnetic Reconnection in Solar Coronal Loops

    NASA Astrophysics Data System (ADS)

    Asgari-Targhi, M.; Coppi, B.; Basu, B.; Fletcher, A.; Golub, L.

    2017-12-01

    We propose that a magneto-thermal reconnection process occurring in coronal loops be the source of the heating of the Solar Corona [1]. In the adopted model, magnetic reconnection is associated with electron temperature gradients, anisotropic electron temperature fluctuations and plasma current density gradients [2]. The input parameters for our theoretical model are derived from the most recent observations of the Solar Corona. In addition, the relevant (endogenous) collective modes can produce high energy particle populations. An endogenous reconnection process is defined as being driven by factors internal to the region where reconnection takes place. *Sponsored in part by the U.S. D.O.E. and the Kavli Foundation* [1] Beafume, P., Coppi, B. and Golub, L., (1992) Ap. J. 393, 396. [2] Coppi, B. and Basu, B. (2017) MIT-LNS Report HEP 17/01.

  7. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    NASA Astrophysics Data System (ADS)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic changes (e.g. the observed intensification of precipitation) and anthropogenic changes. With a demographic growth rate of about +3% per year, the region experiences rapid land-use changes and increased pressure on surface and groundwater resources, with observed consequences on the hydrological cycle (water table rise resulting from the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger) requires anticipation of such changes. However, the region significantly lacks the observations needed to construct and validate critical zone (CZ) models able to predict the future hydrologic regime, and it comprises hydrosystems which encompass strong environmental gradients (e.g. geological, climatic, ecological) with highly different dominant hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional, within the same theoretical framework. Local and meso-scale models are evaluated thanks to the rich AMMA-CATCH CZ observation database, which covers 3 supersites with contrasted environments in Benin (Lat.: 9.8°N), Niger (Lat.: 13.3°N) and Mali (Lat.: 15.3°N). At the regional scale, the lack of relevant maps of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, ET, groundwater recharge…) across the major West-African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems.
This approach is a first step toward the construction of a reference model for studying regional CZ sensitivity to global changes; it will help to identify the required prior parameters and to construct meta-models for deeper investigation of interactions within the CZ.

  8. Development of a thermally-assisted piercing (TAP) process for introducing holes into thermoplastic composites

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas W. A.

    Composite parts can be manufactured to near-net shape with minimal wastage of material; however, there is almost always a need for further machining. The most common post-manufacture machining operations for composite materials are to create holes for assembly. This thesis presents and discusses a thermally-assisted piercing process that can be used as a technique for introducing holes into thermoplastic composites. The thermally-assisted piercing process heats up, and locally melts, thermoplastic composites to allow material to be displaced around a hole, rather than cut out from the structure. This investigation was concerned with how the variation of piercing process parameters (such as the size of the heated area, the temperature of the laminate prior to piercing and the geometry of the piercing spike) changed the material microstructure within carbon fibre/polyetheretherketone (PEEK) laminates. The variation of process parameters was found to significantly affect the formation of resin-rich regions, voids and the fibre volume fraction in the material surrounding the hole. Mechanical testing (using open-hole tension, open-hole compression, plain-pin bearing and bolted bearing tests) showed that the microstructural features created during piercing had a significant influence on the resulting mechanical performance of specimens. By optimising the process parameters, strength improvements of up to 11% and 21% were found for pierced specimens compared with drilled specimens under open-hole tension and compression loading, respectively. For the plain-pin and bolted bearing tests, pierced specimens reached maximum strengths of 77% and 85%, respectively, of those of drilled holes. Improvements in first-failure force (by 10%) and in the stress at 4% hole elongation (by 18%), however, were measured for the bolted bearing tests when compared to drilled specimens.
The overall performance of pierced specimens in an industrially relevant application ultimately depends on the properties required for that specific scenario. The results within this thesis show that the piercing technique could be used as a direct replacement for drilling, depending on the application.

  9. Thermophilic versus Mesophilic Anaerobic Digestion of Sewage Sludge: A Comparative Review

    PubMed Central

    Gebreeyessus, Getachew D.; Jenicek, Pavel

    2016-01-01

    During advanced biological wastewater treatment, a huge amount of sludge is produced as a by-product of the treatment process. Hence, reuse and recovery of resources and energy from the sludge is a big technological challenge. The processing of sludge produced by Wastewater Treatment Plants (WWTPs) is massive and takes up a large part of the overall operational costs. In this regard, anaerobic digestion (AD) of sewage sludge continues to be an attractive option to produce biogas that could contribute to reducing wastewater management costs and foster the sustainability of those WWTPs. At the same time, AD reduces sludge amounts, which again contributes to the reduction of sludge disposal costs. However, sludge volume minimization remains a challenge; thus, improvement of dewatering efficiency is an inevitable part of WWTP operation. As a result, AD parameters can have a significant impact on sludge properties. One of the most important operational parameters influencing the AD process is temperature. Consequently, the thermophilic and mesophilic modes of sludge AD have been compared for their pros and cons by many researchers. However, most comparisons focus on biogas yield, process speed and stability. Regarding biogas yield, thermophilic sludge AD is preferred over the mesophilic one because of its faster biochemical reaction rate. Equally important, but not studied sufficiently until now, is the influence of temperature on digestate quality, expressed mainly by sludge dewaterability and reject water quality (chemical oxygen demand, ammonia nitrogen, and pH). In the direct comparison of the thermophilic and mesophilic digestion processes, unfortunately, only a few and often inconclusive studies have been published so far. Hence, recommendations for optimized technologies have not yet been made.
The review presented provides a comparison of existing sludge AD technologies and the gaps that need to be filled so as to optimize the connection between the two systems. In addition, many other relevant AD process parameters, including sludge rheology, which need to be addressed, are also reviewed and presented. PMID:28952577

  10. The fairytale of the GSSG/GSH redox potential.

    PubMed

    Flohé, Leopold

    2013-05-01

    The term GSSG/GSH redox potential is frequently used to explain redox regulation and other biological processes. The relevance of the GSSG/GSH redox potential as a driving force of biological processes is critically discussed. It is recalled that the concentration ratio of GSSG and GSH reflects little else than a steady state, which overwhelmingly results from fast enzymatic processes utilizing, degrading or regenerating GSH. A biological GSSG/GSH redox potential, as calculated by the Nernst equation, is a deduced electrochemical parameter based on direct measurements of GSH and GSSG that are often complicated by poorly substantiated assumptions. It is considered irrelevant to the steering of any biological process. GSH-utilizing enzymes depend on the concentration of GSH, not on [GSH]², as is predicted by the Nernst equation, and are typically not affected by GSSG. Regulatory processes involving oxidants and GSH are considered to make use of mechanistic principles known for thiol peroxidases, which catalyze the oxidation of hydroperoxides by GSH by means of an enzyme substitution mechanism involving only bimolecular reaction steps. The negligibly small rate constants of related spontaneous reactions as compared with enzyme-catalyzed ones underscore the superiority of kinetic parameters over electrochemical or thermodynamic ones for an in-depth understanding of GSH-dependent biological phenomena. At best, the GSSG/GSH potential might be useful as an analytical tool to disclose disturbances in redox metabolism. This article is part of a Special Issue entitled Cellular Functions of Glutathione. Copyright © 2012 Elsevier B.V. All rights reserved.
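
    For reference, the Nernst-equation form of the GSSG/GSH potential discussed here, for the half-reaction GSSG + 2H⁺ + 2e⁻ → 2GSH, reads:

```latex
E \;=\; E^{0\prime} \;-\; \frac{RT}{2F}\,\ln\frac{[\mathrm{GSH}]^2}{[\mathrm{GSSG}]}
```

    The squared [GSH] term is precisely the dependence that, as the abstract notes, GSH-utilizing enzymes do not exhibit: they depend linearly on [GSH].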

  11. Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.

    PubMed

    Şener, Duygu Dede; Oğul, Hasan

    2016-06-01

    Understanding the time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached by computational methods that model the gene behaviour, or its networked interactions with other genes, by a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find the relevant experiments which are potentially useful in analysing the current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of the query experiment with findings previously reported. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.
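
    The paper's exact feature representation is not given in the record; a minimal content-based retrieval sketch, representing each time-course experiment as a fixed-length expression vector and ranking repository experiments by cosine similarity (all names and vectors are illustrative):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: np.ndarray, repository: dict, top_k: int = 3):
    """Rank repository experiments by content similarity to the query
    and return the top_k (name, score) pairs, best first."""
    scored = [(name, cosine_similarity(query, vec))
              for name, vec in repository.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```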

  12. Characterisation of the physico-mechanical parameters of MSW.

    PubMed

    Stoltz, Guillaume; Gourc, Jean-Pierre; Oxarango, Laurent

    2010-01-01

    Following the basics of soil mechanics, the physico-mechanical behaviour of municipal solid waste (MSW) can be defined through constitutive relationships which are expressed with respect to three physical parameters: the dry density, the porosity and the gravimetric liquid content. In order to take into account the complexity of MSW (grain size distribution and heterogeneity larger than for conventional soils), a special oedometer was designed to carry out laboratory experiments. This apparatus allowed a coupled measurement of physical parameters for MSW settlement under stress. The studied material was a typical sample of fresh MSW from a French landfill. The relevant physical parameters were measured using a gas pycnometer. Moreover, the compressibility of MSW was studied with respect to the initial gravimetric liquid content. Proposed methods to assess the set of three physical parameters allow a relevant understanding of the physico-mechanical behaviour of MSW under compression, specifically, the evolution of the limit liquid content. The present method can be extended to any type of MSW. 2010 Elsevier Ltd. All rights reserved.
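
    The three physical parameters named above are linked by standard phase relations from soil mechanics; a small sketch of those relations (the numeric values in the usage test are illustrative, not measurements from the paper):

```python
def dry_density(dry_mass_kg: float, total_volume_m3: float) -> float:
    """Dry density rho_d: mass of solids per unit total volume (kg/m^3)."""
    return dry_mass_kg / total_volume_m3

def porosity(dry_density_kg_m3: float, solid_density_kg_m3: float) -> float:
    """Porosity n = 1 - rho_d / rho_s (void volume fraction); the solid
    (constituent) density rho_s is what a gas pycnometer measures."""
    return 1.0 - dry_density_kg_m3 / solid_density_kg_m3

def gravimetric_liquid_content(liquid_mass_kg: float, dry_mass_kg: float) -> float:
    """Gravimetric liquid content w = m_liquid / m_dry (dimensionless)."""
    return liquid_mass_kg / dry_mass_kg
```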

  13. Industrial activated sludge exhibit unique bacterial community composition at high taxonomic ranks.

    PubMed

    Ibarbalz, Federico M; Figuerola, Eva L M; Erijman, Leonardo

    2013-07-01

    Biological degradation of domestic and industrial wastewater by activated sludge depends on a common process of separation of the diverse self-assembled and self-sustained microbial flocs from the treated wastewater. Previous surveys of bacterial communities indicated the presence of a common core of bacterial phyla in municipal activated sludge, an observation consistent with the concept of ecological coherence of high taxonomic ranks. The aim of this work was to test whether this critical feature brings about a common pattern of abundance distribution of high bacterial taxa in industrial and domestic activated sludge, and to relate the bacterial community structure of industrial activated sludge with relevant operational parameters. We applied 454 pyrosequencing of 16S rRNA genes to evaluate bacterial communities in full-scale biological wastewater treatment plants sampled at different times, including seven systems treating wastewater from different industries and one plant that treats domestic wastewater, and compared our datasets with the data from municipal wastewater treatment plants obtained by three different laboratories. We observed that each industrial activated sludge system exhibited a unique bacterial community composition, which is clearly distinct from the common profile of bacterial phyla or classes observed in municipal plants. The influence of process parameters on the bacterial community structure was evaluated using constrained analysis of principal coordinates (CAP). Part of the differences in the bacterial community structure between industrial wastewater treatment systems was explained by dissolved oxygen and pH. Despite the ecological relevance of floc formation for the assembly of bacterial communities in activated sludge, the wastewater characteristics are likely to be the major determinant that drives bacterial composition at high taxonomic ranks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. German dentists' websites on periodontitis have low quality of information.

    PubMed

    Schwendicke, Falk; Stange, Jörg; Stange, Claudia; Graetz, Christian

    2017-08-02

    The internet is an increasingly relevant source of health information. We aimed to assess the quality of German dentists' websites on periodontitis, hypothesizing that it was significantly associated with a number of practice-specific parameters. We searched four electronic search engines and included pages which were freely accessible, posted by a dental practice in Germany, and mentioned periodontal disease/therapy. Websites were assessed for (1) technical and functional aspects, (2) generic quality and risk of bias, and (3) disease-specific information. For (1) and (2), validated tools (LIDA/DISCERN) were used for assessment. For (3), we developed a criterion catalogue encompassing items on etiologic and prognostic factors for periodontitis, the diagnostic and treatment process, and the generic chance of tooth retention in periodontitis patients. Inter- and intra-rater reliabilities were largely moderate. Generalized linear modeling was used to assess the association between information quality (measured as % of the maximally available scores) and practice-specific characteristics. Seventy-one websites were included. Technical and functional aspects were reported in significantly higher quality (median: 71%, 25/75th percentiles: 67/79%) than all other aspects (p < 0.05). Generic risk of bias and most disease-specific aspects showed significantly lower reporting quality (median range 0-40%), with poorest reporting for prognostic factors (9; 0/27%), the diagnostic process (0; 0/33%) and the chances of tooth retention (0; 0/2%). We found none of the practice-specific parameters to have a significant impact on the overall quality of the websites. Most German dentists' websites on periodontitis are not fully trustworthy, and relevant information is absent or insufficiently considered. There is great need to improve the information quality of such websites, at least with regard to periodontitis.

  15. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2009-03-01

    Hydrological model evaluation and identification essentially involves extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
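
    The Self-Organizing Map itself is a standard algorithm; a minimal 1-D SOM training sketch with numpy (the NASIM Signature data are not available, so synthetic vectors stand in, and the learning-rate and neighbourhood-radius schedules are my own choices):

```python
import numpy as np

def train_som(data: np.ndarray, n_units: int = 10, n_epochs: int = 50,
              lr0: float = 0.5, radius0: float = 3.0, seed: int = 0) -> np.ndarray:
    """Train a 1-D Self-Organizing Map: for each input, find the
    best-matching unit (BMU) and pull the BMU and its map neighbours
    toward the input with a Gaussian neighbourhood function whose
    width and learning rate decay linearly over training."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, data.shape[1]))
    positions = np.arange(n_units)
    n_steps = n_epochs * len(data)
    step = 0
    for _ in range(n_epochs):
        for x in data[rng.permutation(len(data))]:
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                  # decaying learning rate
            radius = radius0 * (1.0 - frac) + 1e-9   # shrinking neighbourhood
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            h = np.exp(-((positions - bmu) ** 2) / (2.0 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def quantization_error(data: np.ndarray, weights: np.ndarray) -> float:
    """Mean distance from each sample to its best-matching unit."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

    After training, each unit's weight vector is a prototype of a region of model-behaviour space, which is what makes the mapping of Monte-Carlo realizations interpretable.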

  16. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2008-12-01

    Hydrological model evaluation and identification essentially depends on the extraction of information from model time series and its processing. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by a distributed conceptual watershed model. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.

  17. From plastic to gold: a unified classification scheme for reference standards in medical image processing

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.

    2002-05-01

    Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopting terminology from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. All other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to ground the statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
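
    The gold/silver/plastic grading rule stated above can be written down directly (criteria are numbered 1-5 as in the abstract; the function name is my own):

```python
def classify_reference_standard(criteria: set) -> str:
    """Grade a reference standard from the set of satisfied criteria:
    1 reliance, 2 equivalence, 3 independence. (Criteria 4, relevance,
    and 5, significance, are preconditions for using the standard,
    not part of the gold/silver/plastic grade.)"""
    if {1, 2, 3} <= criteria:
        return "gold"
    if {1, 2} <= criteria or {1, 3} <= criteria:
        return "silver"
    return "plastic"
```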

  18. A consistent S-Adenosylmethionine force field improved by dynamic Hirshfeld-I atomic charges for biomolecular simulation

    NASA Astrophysics Data System (ADS)

    Saez, David Adrian; Vöhringer-Martinez, Esteban

    2015-10-01

    S-Adenosylmethionine (AdoMet) is involved in many biological processes as a cofactor in enzymes transferring its sulfonium methyl group to various substrates. Additionally, it is used as a drug and nutritional supplement to reduce pain in osteoarthritis and against depression. Due to its biological relevance, AdoMet has been part of various computational simulation studies and will continue to be in the future. However, to our knowledge no rigorous force field parameter development for its simulation in biological systems has been reported. Here, we use electronic structure calculations combined with molecular dynamics simulations in explicit solvent to develop force field parameters compatible with the AMBER99 force field. Additionally, we propose new dynamic Hirshfeld-I atomic charges, derived from the polarized electron density of AdoMet in aqueous solution, to describe its electrostatic interactions in biological systems. The validation of the force field parameters and the atomic charges is performed against experimental interproton NOE distances of AdoMet in aqueous solution and crystal structures of AdoMet in the cavity of three representative proteins.

  19. Graphene oxide based contacts as probes of biomedical signals

    NASA Astrophysics Data System (ADS)

    Hallfors, N. G.; Devarajan, A.; Farhat, I. A. H.; Abdurahman, A.; Liao, K.; Gater, D. L.; Elnaggar, M. I.; Isakovic, A. F.

    We have developed a series of graphene oxide (GOx) on-polymer contacts and have demonstrated their usefulness for collecting standard biomedically relevant signals such as the electrocardiogram (ECG). The process is wet-solution-based and allows the basic physical parameters of GOx, such as its electrical and optical properties, to be controlled and tuned simply by choosing the number of GOx layers. Our GOx characterization measurements show the spectral (FTIR, XPS, IR absorbance) features most relevant to such performance and point toward likely mechanisms for controlling the physical properties relevant to contact performance. Structural (X-ray topography) and surface (AFM, SEM) characterization indicates to what degree these contacts can be considered homogeneous and therefore provides information on yield and repeatability. We compare ECG signals recorded by standard commercial probes (Ag/AgCl) and by GOx probes; they display minor differences whose resolution may lead to new ways of performing ECG data collection, including wearable electronics and IoT-friendly ECG monitoring. We acknowledge support from Mubadala-SRC AC4ES and from SRC 2011-KJ-2190. We thank J. B. Warren and G. L. Carr (BNL) for assistance.

  20. Two phase modeling of the influence of plastic strain on the magnetic and magnetostrictive behaviors of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Hubert, Olivier; Lazreg, Said

    2017-02-01

    A growing interest of the automotive industry in the use of high-performance steels is observed. These materials are obtained through complex manufacturing processes whose parameter fluctuations lead to strong variations in microstructure and mechanical properties. On-line magnetic non-destructive monitoring is a relevant response to this problem, but it requires fast models sensitive to the different parameters of the forming process. Plastic deformation is one of these important parameters. Indeed, ferromagnetic materials are known to be sensitive to applied stress and especially to plastic strain. In this paper, a macroscopic approach using kinematic hardening is proposed to model this behavior, considering a plastically strained material as a two-phase system. The relationship between kinematic hardening and residual stress is defined in this framework. Since the stress fields are multiaxial, a uniaxial equivalent stress is calculated and introduced into the so-called magneto-mechanical multidomain model to represent the effect of plastic strain. The modeling approach is complemented by numerous experiments involving magnetic and magnetostrictive measurements. They are carried out with and without applied stress, using a dual-phase steel deformed to different levels. The main interest of this material is that the mechanically hard phase, the soft phase, and the kinematic hardening can be clearly identified through simple experiments. It is shown how this model can be extended to single-phase materials.

  1. Influence of experimental conditions on data variability in the liver comet assay.

    PubMed

    Guérard, M; Marchand, C; Plappert-Helbig, U

    2014-03-01

    The in vivo comet assay has increasingly been used for regulatory genotoxicity testing in recent years. While it has been demonstrated that the experimental execution of the assay, for example electrophoresis or scoring, can have a strong impact on the results, little is known about how the initial steps, from tissue sampling during necropsy up to slide preparation, can influence comet assay results. Therefore, we investigated which of the many steps in processing the liver for the comet assay are most critical. Altogether, eight parameters were assessed using liver samples from untreated animals. In addition, two of these parameters (temperature and storage time of the liver before embedding into agarose) were further investigated in animals given a single oral dose of ethyl methanesulfonate at dose levels of 50, 100, and 200 mg/kg, 3 hr prior to necropsy. The results showed that sample cooling emerged as the predominant influencing factor, whereas variations in other elements of the procedure (e.g., size of the liver piece sampled, time needed to process the liver tissue post-mortem, agarose temperature, or time of lysis) seem to be of little relevance. Storing liver samples for up to 6 hr under cooled conditions did not cause an increase in tail intensity. In contrast, storing the tissue at room temperature resulted in a considerable time-dependent increase in comet parameters. Copyright © 2013 Wiley Periodicals, Inc.

  2. Relevance similarity: an alternative means to monitor information retrieval systems

    PubMed Central

    Dong, Peng; Loh, Marie; Mondry, Adrian

    2005-01-01

    Background Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for measuring the variation of relevance assessment. In a situation where individual assessments can be compared with a gold standard, this parameter is used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, Relevance Similarity is the ratio of assessors who rank a given document the same as the gold standard to the total number of assessors in the group. Methods The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups according to their domain knowledge. They assessed the relevance of retrieved topics obtained by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard assessment, and Relevance Similarities were calculated as the ratio of positive concordance with the gold standard for each topic. Results The similarity comparison among groups showed that a higher degree of agreement exists among evaluators with more subject knowledge. The performance of the retrieval system was not significantly different as a result of the variations in relevance assessment in this particular query set. Conclusion In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative evaluation technique to the commonly used kappa scores, which may give paradoxically low scores in highly biased situations such as document repositories containing large quantities of relevant data. PMID:16029513
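    Relevance Similarity as defined in the Background above is a simple ratio per document. A minimal sketch; the function name and the encoding of judgements are hypothetical illustrations, not from the paper:

```python
def relevance_similarity(assessments, gold):
    """Fraction of assessors whose judgement of a document matches the
    gold-standard judgement.

    assessments: one judgement per assessor (any comparable values,
                 e.g. "relevant" / "not relevant").
    gold:        the gold-standard judgement for the same document.
    """
    matches = sum(1 for a in assessments if a == gold)
    return matches / len(assessments)
```

    For example, if two of three assessors agree with the gold standard, the Relevance Similarity for that document is 2/3.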

  3. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as tools for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate a system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters, and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One problem with this approach is that it treats the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may be challenging because complex process-based models are generally characterized by a large parameter space. To overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Among the advantages of this approach are that it reduces the dimensionality of the search space, facilitates the interpretation of the results, and provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
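    The process-level idea can be illustrated with a one-at-a-time perturbation sketch: attach a multiplier to each process, perturb one multiplier at a time, and record the normalized change in model output. Everything here (the multiplier interface, the function and process names) is a hypothetical illustration, not the authors' implementation:

```python
def process_sensitivity(model, process_names, delta=0.05):
    """One-at-a-time sensitivity at the process level.

    model: callable mapping a dict {process_name: multiplier} to a
           scalar output (hypothetical interface).
    Returns, per process, the relative output change per unit relative
    perturbation (a dimensionless elasticity-like index).
    """
    base = model({p: 1.0 for p in process_names})
    sensitivities = {}
    for p in process_names:
        multipliers = {q: 1.0 for q in process_names}
        multipliers[p] = 1.0 + delta  # perturb this process only
        sensitivities[p] = (model(multipliers) - base) / (abs(base) * delta)
    return sensitivities
```

    With a toy two-process model such as `lambda m: 2.0 * m["photosynthesis"] + m["respiration"]`, the index correctly ranks the photosynthesis process as the more influential one. Real process-based models would require many more perturbation sizes and interaction checks than this sketch performs.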

  4. Determinant impact of waste collection and composition on anaerobic digestion performance: industrial results.

    PubMed

    Saint-Joly, C; Desbois, S; Lotti, J P

    2000-01-01

    The performance of the anaerobic digestion process depends strongly on the quality of the waste to be treated. This has already been demonstrated at the lab scale. The objective of this study is to confirm this result at the industrial scale, over very long representative periods and with the same process, the Valorga process. Depending on waste quality and collection type, and even under the same fermentation conditions, the biogas yield can vary by a factor of 1.5 when expressed (under normal conditions of pressure and temperature) in m3 biogas/t fresh waste, and by a factor of 2 when expressed in m3 CH4/t volatile solids. Thus, biogas performance does not characterise a process, since it is deeply governed by waste composition. This biogas productivity becomes a pertinent parameter only with consistent and relevant hypotheses and/or analytical results on the waste composition, which depends on the collection procedure, the site characteristics, and the season.

  5. Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.

    PubMed

    Arend, Isabel; Aisenberg, Daniela; Henik, Avishai

    2016-10-01

    In the Simon effect (SE), choice reactions are fast if the location of the stimulus and the response correspond when stimulus location is task-irrelevant; therefore, the SE reflects the automatic processing of space. Priming of social concepts was found to affect automatic processing in the Stroop effect. We investigated whether spatial coding measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) and another involving color perception - achromatopsia (ACHM). In two experiments the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters, but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing. Copyright © 2016. Published by Elsevier Inc.

  6. A citrus waste-based biorefinery as a source of renewable energy: technical advances and analysis of engineering challenges.

    PubMed

    Rivas-Cantu, Raul C; Jones, Kim D; Mills, Patrick L

    2013-04-01

    An assessment of recent technical advances in pretreatment processes and their effects on enzymatic hydrolysis, as the main steps of a proposed citrus processing waste (CPW) biorefinery, is presented. Engineering challenges and relevant gaps in the scientific and technical information needed for reliable design, modeling, and scale-up of a CPW biorefinery are also discussed. Some integrated physico-chemical pretreatments are proposed for testing on CPW, including high-speed knife-grinding with simultaneous caustic addition. These newly proposed processes, and the effects of parameters such as particle size, surface area and morphology, pore volume, and the chemical composition of the diverse fractions resulting from pretreatment and enzymatic hydrolysis, need to be evaluated and compared for pretreated and untreated samples of grapefruit processing waste. This assessment suggests the potential for filling the data gaps, and preliminary results demonstrate that reducing the particle size and increasing the surface area of the CPW result in higher reaction rates and monosaccharide yields for the pretreated waste material.

  7. The reduction of dioxin emissions from the processes of heat and power generation.

    PubMed

    Wielgosiński, Grzegorz

    2011-05-01

    The first reports that dioxins can be emitted from the heat and power generation sector date from the beginning of the 1980s. Detailed research proved that the emission of dioxins may occur during the combustion of hard coal, brown coal, and furnace oil as well as coke-oven gas. Dioxin emission also occurs in wood incineration, whether of clean wood understood as biomass or, in particular, of polluted wood waste. This paper thoroughly discusses the mechanism of dioxin formation in thermal processes, first and foremost in combustion processes. The parameters influencing the quantity of dioxins formed and the dependence of this quantity on the combustion conditions are highlighted. Furthermore, methods of reducing dioxin emissions from combustion processes (primary and secondary) are discussed. The most efficacious methods that may find application in the heat and power generation sector are proposed; this is relevant from the point of view of the implementation of the Stockholm Convention resolutions on persistent organic pollutants in Poland.

  8. Revisiting kinetic boundary conditions at the surface of fuel droplet hydrocarbons: An atomistic computational fluid dynamics simulation

    PubMed Central

    Nasiri, Rasoul

    2016-01-01

    Boundary conditions at the interface have been suggested to be important for both the Boltzmann equation and the set of Navier-Stokes equations in the study of multiphase flows such as the evaporation/condensation process, which does not always obey equilibrium conditions. Here we present aspects of transition-state theory (TST) alongside kinetic gas theory (KGT), relevant to the study of quasi-equilibrium interfacial phenomena and equilibrium gas-phase processes, respectively. A two-state mathematical model for long-chain hydrocarbons, which have multi-structural specifications, is introduced to clarify how kinetics and thermodynamics affect the evaporation/condensation process at the surface of a fuel droplet and in the liquid and gas phases. We then show how experimental observations for a number of n-alkanes may be reproduced using a hybrid TST/KGT framework with physically reasonable parameters controlling the interface, gas, and liquid phases. The importance of internal activation dynamics at the surface of n-alkane droplets during the evaporation/condensation process is established. PMID:27215897

  9. Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures

    NASA Technical Reports Server (NTRS)

    Blosser, Max L.

    2012-01-01

    An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.

  10. An Analytical Solution for Transient Thermal Response of an Insulated Structure

    NASA Technical Reports Server (NTRS)

    Blosser, Max L.

    2012-01-01

    An analytical solution was derived for the transient response of an insulated aerospace vehicle structure subjected to a simplified heat pulse. This simplified problem approximates the thermal response of a thermal protection system of an atmospheric entry vehicle. The exact analytical solution is solely a function of two non-dimensional parameters. A simpler function of these two parameters was developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Using these techniques, the maximum structural temperature rise was calculated using the analytical solutions and shown to typically agree with finite element simulations within 10 to 20 percent over the relevant range of parameters studied.

  11. Quantitative microscopy of the lung: a problem-based approach. Part 2: stereological parameters and study designs in various diseases of the respiratory tract.

    PubMed

    Mühlfeld, Christian; Ochs, Matthias

    2013-08-01

    Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.

  12. Effective lepton flavor violating H ℓiℓj vertex from right-handed neutrinos within the mass insertion approximation

    NASA Astrophysics Data System (ADS)

    Arganda, E.; Herrero, M. J.; Marcano, X.; Morales, R.; Szynkman, A.

    2017-05-01

    In this work we present a new computation of the lepton flavor violating Higgs boson decays that are generated radiatively at one loop from heavy right-handed neutrinos. We work within the context of the inverse seesaw model with three νR and three extra singlets X, but the results could be generalized to other low-scale seesaw models. The novelty of our computation is that it uses a completely different method, the mass insertion approximation, which works with the electroweak interaction states instead of the usual 9 physical neutrino mass eigenstates of the inverse seesaw model. This method also allows us to write the analytical results explicitly in terms of the most relevant model parameters, namely the neutrino Yukawa coupling matrix Yν and the right-handed mass matrix MR, which is very convenient for a phenomenological analysis. The Yν matrix, being generically nondiagonal in flavor space, is solely responsible for the induced charged lepton flavor violating processes of our interest. We perform the calculation of the decay amplitude up to order O(Yν2+Yν4). We also study numerically the accuracy of the mass insertion approximation results. In the last part we present the computation of the relevant one-loop effective vertex Hℓiℓj for the lepton flavor violating Higgs decay, derived from a large-MR mass expansion of the form factors. We believe that the simple formula found for this effective vertex can be of interest to other researchers who wish to estimate the H→ℓiℓ¯j rates quickly in terms of their own preferred input values for the relevant model parameters Yν and MR.

  13. Electroencephalographic neurofeedback: Level of evidence in mental and brain disorders and suggestions for good clinical practice.

    PubMed

    Micoulaud-Franchi, J-A; McGonigal, A; Lopez, R; Daudet, C; Kotwas, I; Bartolomei, F

    2015-12-01

    Electroencephalographic neurofeedback (EEG NF), which emerged in the 1970s, is a technique that measures a subject's EEG signal, processes it in real time, extracts a parameter of interest, and presents this information in visual or auditory form. The goal is to effect a behavioural modification by modulating brain activity. EEG NF opens new therapeutic possibilities in the fields of psychiatry and neurology. However, the development of EEG NF in clinical practice requires (i) a good level of evidence of the therapeutic efficacy of this technique and (ii) a good practice guide for it. Firstly, this article reviews selected trials meeting the following criteria: a controlled, randomized, open or blind study design; a primary endpoint related to the mental or brain disorder treated and assessed with standardized measurement tools; and identifiable EEG neurophysiological targets underpinned by pathophysiological relevance. Trials were found for: epilepsies, migraine, stroke, chronic insomnia, attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder, major depressive disorder, anxiety disorders, addictive disorders, and psychotic disorders. Secondly, this article examines the principles of neurofeedback therapy in light of learning theory. Different underlying therapeutic models are presented didactically along two continua: one between implicit and explicit learning, and one between the biomedical model (centred on "the disease") and the integrative biopsychosocial model of health (centred on "the illness"). The most relevant learning model links neurofeedback therapy with the field of cognitive remediation techniques. The methodological specificity of neurofeedback is that it is guided by biologically relevant neurophysiological parameters. Guidelines for good clinical practice of EEG NF concerning the technical issues of electrophysiology and of learning are suggested. These require validation by institutional structures for the clinical practice of EEG NF. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  14. Quantum temporal probabilities in tunneling systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastopoulos, Charis, E-mail: anastop@physics.upatras.gr; Savvidou, Ntina, E-mail: ksavvidou@physics.upatras.gr

    We study the temporal aspects of quantum tunneling as manifested in time-of-arrival experiments in which the detected particle tunnels through a potential barrier. In particular, we present a general method for constructing temporal probabilities in tunneling systems that (i) defines 'classical' time observables for quantum systems and (ii) applies to relativistic particles interacting through quantum fields. We show that the relevant probabilities are defined in terms of specific correlation functions of the quantum field associated with tunneling particles. We construct a probability distribution with respect to the time of particle detection that contains all information about the temporal aspects of the tunneling process. In specific cases, this probability distribution leads to the definition of a delay time that, for parity-symmetric potentials, reduces to the phase time of Bohm and Wigner. We apply our results to piecewise constant potentials, by deriving the appropriate junction conditions on the points of discontinuity. For the double square potential, in particular, we demonstrate the existence of (at least) two physically relevant time parameters, the delay time and a decay rate that describes the escape of particles trapped in the inter-barrier region. Finally, we propose a resolution to the paradox of apparent superluminal velocities for tunneling particles. We demonstrate that the idea of faster-than-light speeds in tunneling follows from an inadmissible use of classical reasoning in the description of quantum systems. Highlights: • Present a general methodology for deriving temporal probabilities in tunneling systems. • Treatment applies to relativistic particles interacting through quantum fields. • Derive a new expression for tunneling time. • Identify new time parameters relevant to tunneling. • Propose a resolution of the superluminality paradox in tunneling.

  15. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If it is stationary, white, and Gaussian: the mean and variance. But "whiteness" is a strong assumption; we are led to a "baseline" over which to compute means and variances. We have then subscribed to topography as a correlated process, and to the estimation (noisy, affected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself isn't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matern form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, and we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal being to assign likely formation histories to the patches under consideration. Our results should also be relevant for anyone who needs to perform spatial interpolation or out-of-sample extension (e.g. kriging), machine learning, or feature detection on geological data. We present procedural details but focus on high-level results that have real-world implications for the study of Venus, Earth, other planets, and moons.
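    The Matern covariance family mentioned above has closed forms for the special cases the abstract names. A small sketch under the common sqrt(2*nu) scaling convention; the helper name and the restriction to the half-integer/limit cases are my assumptions, since general nu requires the modified Bessel function K_nu:

```python
import math

def matern(r, sigma2, rho, nu):
    """Matern covariance at lag r for selected smoothness values.

    sigma2: variance, rho: range, nu: differentiability parameter.
    nu = 0.5 gives the exponential covariance, nu = 1.5 an intermediate
    smoothness, nu = inf the Gaussian (squared-exponential) limit.
    """
    if nu == 0.5:
        return sigma2 * math.exp(-r / rho)
    if nu == 1.5:
        a = math.sqrt(3.0) * r / rho
        return sigma2 * (1.0 + a) * math.exp(-a)
    if nu == math.inf:
        return sigma2 * math.exp(-r * r / (2.0 * rho * rho))
    raise NotImplementedError("general nu requires the Bessel function K_nu")
```

    At lag zero each case returns the variance sigma2, and the covariance decays monotonically with lag; nu controls how smooth realizations of the process are.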

  16. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, Rahul; Hamm, Nicholas Alexander Samuel; van der Tol, Christiaan; Stein, Alfred

    2016-03-01

    Gross primary production (GPP) can be separated from flux tower measurements of net ecosystem exchange (NEE) of CO2. The separated GPP is used increasingly to validate process-based simulators and remote-sensing-derived GPP estimates at various time steps. Proper validation includes the uncertainty associated with this separation. In this study, uncertainty assessment was done in a Bayesian framework. It was applied to data from the Speulderbos forest site, The Netherlands. We estimated the uncertainty in GPP at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements. The NRH model provides a robust empirical relationship between radiation and GPP. It includes the degree of curvature of the light response curve, radiation and temperature. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. We defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to estimate the uncertainty in the separated GPP from the posterior distribution at half-hourly time steps. This time series also allowed us to estimate the uncertainty at daily time steps. We compared the informative with the non-informative prior distributions of the NRH parameters and found that both choices produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
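    One common parameterization of the non-rectangular hyperbola (NRH) light-response model is sketched below; the abstract does not give its exact equation, so this particular form (initial slope alpha, asymptote beta, curvature theta) is an assumption for illustration, omitting the temperature dependence the authors also include:

```python
import math

def nrh_gpp(irradiance, alpha, beta, theta):
    """Non-rectangular hyperbola light response (one common form):

    GPP = (alpha*I + beta - sqrt((alpha*I + beta)**2
           - 4*theta*alpha*beta*I)) / (2*theta)

    alpha: initial slope of the light response curve,
    beta:  asymptotic maximum GPP,
    theta: degree of curvature (0 < theta <= 1).
    """
    s = alpha * irradiance + beta
    return (s - math.sqrt(s * s - 4.0 * theta * alpha * beta * irradiance)) / (2.0 * theta)
```

    The curve passes through zero at zero irradiance and saturates toward beta at high irradiance, with theta interpolating between a rectangular hyperbola and a Blackman (two-line) response. In a Bayesian treatment like the one described, priors would be placed on alpha, beta, and theta, and MCMC draws from their posterior propagated through this function to get a posterior for GPP at each half-hour.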

  17. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). We therefore question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevance, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
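    The matrix-based Euclidean averaging advocated above can be sketched as the chordal mean: average the rotation matrices elementwise, then project the result back onto the rotation group with an SVD. This is a generic illustration of that idea, not the paper's exact algorithm:

```python
import numpy as np

def chordal_mean_rotation(rotations):
    """Euclidean (chordal) mean of a list of 3x3 rotation matrices.

    Averages the matrices elementwise, then projects onto SO(3) via
    an SVD, which yields the nearest rotation in the Frobenius norm.
    """
    M = np.mean(rotations, axis=0)          # elementwise average (not a rotation)
    U, _, Vt = np.linalg.svd(M)
    # Ensure a proper rotation (determinant +1, not a reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

    For two rotations of +theta and -theta about the same axis, this mean is the identity, as geometry demands; averaging the Euler angles of widely dispersed orientations generally does not satisfy such properties, which is the paper's point.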

  18. (AC)3: A German Initiative to Study Arctic Amplification—Climate Relevant Atmospheric and Surface Processes and Feedback Mechanisms

    NASA Astrophysics Data System (ADS)

    Spreen, G.; Wendisch, M.; Brückner, M.

    2016-12-01

    Within the last 25 years a remarkable increase of the Arctic near-surface air temperature exceeding the global warming by a factor of at least two has been observed. This phenomenon is commonly referred to as Arctic Amplification. The warming results in rather dramatic changes of a variety of climate parameters. For example, the Arctic sea ice has declined significantly. This ice retreat has been well identified by satellite measurements. Over recent decades, significant progress has been made in two main scientific areas: (i) the capabilities of in-situ measurements and remote sensing techniques to observe key physico-chemical atmospheric constituents and surface parameters at high latitudes have advanced impressively, and (ii) the computational skills and power used to model individual feedback mechanisms on small scales have improved notably. It is, therefore, timely to exploit synergistically these new developments to enhance our knowledge of the origins of the observed Arctic climate changes. To achieve this aim a new Transregional Collaborative Research Center (TR 172) was launched in January 2016 called "ArctiC Amplification: Climate Relevant Atmospheric and SurfaCe Processes, and Feedback Mechanisms" with the acronym (AC)3. Observations from instrumentation on satellites, aircraft, tethered balloons, research vessels, and a selected set of ground-based sites will be integrated in dedicated campaigns, as well as being combined with long-term measurements. The field studies will be conducted in different seasons and meteorological conditions, covering a suitably wide range of spatial and temporal scales. They will be performed in an international context and in close collaboration with modelling activities. The latter utilize a hierarchy of process, meso-scale, regional, and global models to bridge the spatio-temporal scales from local individual processes to appropriate climate signals. 
The models will serve to guide the campaigns, to analyse the measurements and sensitivities, to facilitate the attribution of the origins of observed Arctic climate changes, and to test the ability of the models to reproduce observations. The presentation will give an overview of the scientific rationale, objectives, international links, and the work program of the (AC)³ project.

  19. A combined model of heat and mass transfer for the in situ extraction of volatile water from lunar regolith

    NASA Astrophysics Data System (ADS)

    Reiss, P.

    2018-05-01

    Chemical analysis of lunar soil samples often involves thermal processing to extract their volatile constituents, such as loosely adsorbed water. For the characterization of volatiles and their bonding mechanisms it is important to determine their desorption temperature. However, due to the low thermal diffusivity of lunar regolith, it might be difficult to reach a uniform heat distribution in a sample that is larger than only a few particles. Furthermore, the mass transport through such a sample is restricted, which might lead to a significant delay between actual desorption and measurable outgassing of volatiles from the sample. The entire volatile extraction process depends on the dynamically changing heat and mass transfer within the sample, and is influenced by physical parameters such as porosity, tortuosity, gas density, temperature and pressure. To correctly interpret measurements of the extracted volatiles, it is important to understand the interaction between heat transfer, sorption, and gas transfer through the sample. The present paper discusses the molecular kinetics and mechanisms that are involved in the thermal extraction process and presents a combined parametric computation model to simulate this process. The influence of water content on the gas diffusivity and thermal diffusivity is discussed and the issue of possible resorption of desorbed molecules within the sample is addressed. Based on the multi-physical computation model, a case study for the ProSPA instrument for in situ analysis of lunar volatiles is presented, which predicts relevant dynamic process parameters, such as gas pressure and process duration.

  20. Social and emotional relevance in face processing: happy faces of future interaction partners enhance the late positive potential

    PubMed Central

    Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.

    2014-01-01

    Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1 s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental task. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and the relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing at late stages of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881

  1. Asymmetric flow field-flow fractionation in the field of nanomedicine.

    PubMed

    Wagner, Michael; Holzschuh, Stephan; Traeger, Anja; Fahr, Alfred; Schubert, Ulrich S

    2014-06-03

    Asymmetric flow field-flow fractionation (AF4) is a widely used and versatile technique in the family of field-flow fractionations, as indicated by a rapidly increasing number of publications. It is a gentle separation and characterization method in which nonspecific interactions are reduced to a minimum; it allows a broad separation range from several nanometers up to micrometers and enables a superior characterization of homogeneous and heterogeneous systems. In particular, coupling to multiangle light scattering provides detailed access to sample properties. Information about molar mass, polydispersity, size, shape/conformation, or density can be obtained nearly independently of the material used. In this Perspective, the application and progress of AF4 for (bio)macromolecules and colloids relevant to "nano" medical and pharmaceutical issues will be presented. The characterization of different nanosized drug or gene delivery systems, e.g., polymers, nanoparticles, micelles, dendrimers, liposomes, polyplexes, and virus-like particles (VLPs), as well as therapeutically relevant proteins, antibodies, and nanoparticles for diagnostic use will be discussed. Thereby, the variety of obtainable information and the advantages and pitfalls of this emerging technique will be highlighted. Additionally, the influence of different fractionation parameters on the separation process is discussed in detail. Moreover, a comprehensive overview is given of the investigated samples, the fractionation parameters such as membrane types and buffers used, and the chosen detectors, with the corresponding references. The Perspective ends with an outlook on the future.

  2. Langlands Parameters of Quivers in the Sato Grassmannian

    NASA Astrophysics Data System (ADS)

    Luu, Martin T.; Penciak, Matej

    2018-01-01

    Motivated by quantum field theoretic partition functions that can be expressed as products of tau functions of the KP hierarchy we attach several types of local geometric Langlands parameters to quivers in the Sato Grassmannian. We study related questions of Virasoro constraints, of moduli spaces of relevant quivers, and of classical limits of the Langlands parameters.

  3. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
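    The hash-table-based parameter storage described above can be illustrated with a small sketch. Python dicts stand in for the C++ hash tables, and all class and attribute names are hypothetical, not MAPA's actual API:

    ```python
    class Element:
        """Generic dynamical-system element: parameters live in a hash
        table (dict) keyed by name, so a GUI can enumerate and edit them
        for any element type without knowing the type in advance."""

        def __init__(self, kind, **params):
            self.kind = kind
            self.params = dict(params)  # name -> value

        def names(self):
            # What a generic parameter window would display.
            return sorted(self.params)

    class Accelerator(Element):
        """Inherits the generic parameter table from the system-level
        class and adds a lattice: an ordered string of elements through
        which phase-space variables are advanced."""

        def __init__(self, **params):
            super().__init__("accelerator", **params)
            self.lattice = []

    ring = Accelerator(circumference=100.0)
    ring.lattice.append(Element("quadrupole", length=0.5, gradient=2.1))
    ring.lattice.append(Element("drift", length=1.0))
    ```

    The point of the design, as the abstract describes it, is that the GUI and the generic system library only ever see the name/value table, while the accelerator subclass supplies the element-specific advancement logic.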

  4. Life cycle assessment and residue leaching: The importance of parameter, scenario and leaching data selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allegrini, E., E-mail: elia@env.dtu.dk; Butera, S.; Kosson, D.S.

    Highlights: • Relevance of metal leaching in waste management system LCAs was assessed. • Toxic impacts from leaching could not be disregarded. • Uncertainty of toxicity, due to background activities, determines LCA outcomes. • Parameters such as pH and L/S affect LCA results. • Data modelling consistency and coverage within an LCA are crucial. - Abstract: Residues from industrial processes and waste management systems (WMSs) have been increasingly reutilised, leading to landfilling rate reductions and the optimisation of mineral resource utilisation in society. Life cycle assessment (LCA) is a holistic methodology allowing for the analysis of systems and products and can be applied to waste management systems to identify environmental benefits and critical aspects thereof. From an LCA perspective, residue utilisation provides benefits such as avoiding the production and depletion of primary materials, but it can lead to environmental burdens, due to the potential leaching of toxic substances. In waste LCA studies where residue utilisation is included, leaching has generally been neglected. In this study, municipal solid waste incineration bottom ash (MSWI BA) was used as a case study in three LCA scenarios having different system boundaries. The importance of data quality and parameter selection in the overall LCA results was evaluated, and an innovative method to assess metal transport into the environment was applied, in order to determine emissions to the soil and water compartments for use in an LCA. It was found that toxic impacts as a result of leaching were dominant in systems including only MSWI BA utilisation, while leaching appeared negligible in larger scenarios including the entire waste system. However, leaching could not be disregarded a priori, due to large uncertainties characterising other activities in the scenario (e.g. electricity production). Based on the analysis of relevant parameters relative to leaching, and on the general results of the study, recommendations are provided regarding the use of leaching data in LCA studies.

  5. Accessing and Utilizing Remote Sensing Data for Vectorborne Infectious Diseases Surveillance and Modeling

    NASA Technical Reports Server (NTRS)

    Kiang, Richard; Adimi, Farida; Kempler, Steven

    2008-01-01

    Background: The transmission of vectorborne infectious diseases is often influenced by environmental, meteorological and climatic parameters, because the vector life cycle depends on these factors. For example, the geophysical parameters relevant to malaria transmission include precipitation, surface temperature, humidity, elevation, and vegetation type. Because these parameters are routinely measured by satellites, remote sensing is an important technological tool for predicting, preventing, and containing a number of vectorborne infectious diseases, such as malaria, dengue, and West Nile virus. Methods: A variety of NASA remote sensing data can be used for modeling vectorborne infectious disease transmission. We discuss both the well known and less known remote sensing data, including Landsat, AVHRR (Advanced Very High Resolution Radiometer), MODIS (Moderate Resolution Imaging Spectroradiometer), TRMM (Tropical Rainfall Measuring Mission), ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), EO-1 (Earth Observing One) ALI (Advanced Land Imager), and the SIESIP (Seasonal to Interannual Earth Science Information Partner) dataset. Giovanni is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center. It provides a simple and intuitive way to visualize, analyze, and access vast amounts of Earth science remote sensing data. After remote sensing data are obtained, a variety of techniques, including generalized linear models and artificial-intelligence-oriented methods, can be used to model the dependency of disease transmission on these parameters. Results: The processes of accessing, visualizing and utilizing precipitation data using Giovanni, and acquiring other data at additional websites, are illustrated. Malaria incidence time series for some parts of Thailand and Indonesia are used to demonstrate that malaria incidences are reasonably well modeled with generalized linear models and artificial-intelligence-based techniques. Conclusions: Remote sensing data relevant to the transmission of vectorborne infectious diseases can be conveniently accessed at NASA and other websites. These data are useful for vectorborne infectious disease surveillance and modeling.

  6. AOF LTAO mode: reconstruction strategy and first test results

    NASA Astrophysics Data System (ADS)

    Oberti, Sylvain; Kolb, Johann; Le Louarn, Miska; La Penna, Paolo; Madec, Pierre-Yves; Neichel, Benoit; Sauvage, Jean-François; Fusco, Thierry; Donaldson, Robert; Soenke, Christian; Suárez Valles, Marcos; Arsenault, Robin

    2016-07-01

    GALACSI is the Adaptive Optics (AO) system serving the instrument MUSE in the framework of the Adaptive Optics Facility (AOF) project. Its Narrow Field Mode (NFM) is a Laser Tomography AO (LTAO) mode delivering high resolution in the visible across a small Field of View (FoV) of 7.5" diameter around the optical axis. From a reconstruction standpoint, GALACSI NFM intends to optimize the correction on axis by estimating the turbulence in volume via a tomographic process, then projecting the turbulence profile onto one single Deformable Mirror (DM) located in the pupil, close to the ground. In this paper, the laser tomographic reconstruction process is described. Several methods (virtual DM, virtual layer projection) are studied, under the constraint of a single matrix vector multiplication. The pseudo-synthetic interaction matrix model and the LTAO reconstructor design are analysed. Moreover, the reconstruction parameter space is explored, in particular the regularization terms. Furthermore, we present here the strategy to define the modal control basis and split the reconstruction between the Low Order (LO) loop and the High Order (HO) loop. Finally, closed loop performance obtained with a 3D turbulence generator will be analysed with respect to the most relevant system parameters to be tuned.
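    Under the stated constraint of a single matrix-vector multiplication per frame, a tomographic reconstructor is typically precomputed offline and then applied to the measured slopes in real time. The sketch below shows one common choice, a Tikhonov-regularized least-squares reconstructor; the dimensions, the synthetic interaction matrix, and the regularization weight are illustrative assumptions, not GALACSI's actual design:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_slopes, n_modes, alpha = 40, 10, 1e-2

    # Synthetic interaction matrix H: slopes produced per unit of each mode.
    H = rng.standard_normal((n_slopes, n_modes))

    # Offline: regularized least-squares reconstructor
    #   R = (H^T H + alpha * I)^(-1) H^T
    # The regularization term is where terms such as turbulence priors
    # enter in a real LTAO reconstructor design.
    R = np.linalg.solve(H.T @ H + alpha * np.eye(n_modes), H.T)

    # Online: one matrix-vector multiplication per frame.
    true_modes = rng.standard_normal(n_modes)
    slopes = H @ true_modes        # noiseless synthetic measurement
    commands = R @ slopes          # recovered mode commands
    ```

    With noiseless slopes and light regularization the recovered commands closely match the input modes; in practice the regularization weight trades noise rejection against fitting error.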

  7. Discretization analysis of bifurcation based nonlinear amplifiers

    NASA Astrophysics Data System (ADS)

    Feldkord, Sven; Reit, Marco; Mathis, Wolfgang

    2017-09-01

    Recently, for modeling biological amplification processes, nonlinear amplifiers based on the supercritical Andronov-Hopf bifurcation have been widely analyzed analytically. For technical realizations, digital systems have become the most relevant systems in signal processing applications. The underlying continuous-time systems are transferred to the discrete-time domain using numerical integration methods. Within this contribution, effects on the qualitative behavior of the Andronov-Hopf bifurcation based systems concerning numerical integration methods are analyzed. It is shown exemplarily that explicit Runge-Kutta methods transform the truncated normalform equation of the Andronov-Hopf bifurcation into the normalform equation of the Neimark-Sacker bifurcation. Dependent on the order of the integration method, higher order terms are added during this transformation. A rescaled normalform equation of the Neimark-Sacker bifurcation is introduced that allows a parametric design of a discrete-time system which corresponds to the rescaled Andronov-Hopf system. This system approximates the characteristics of the rescaled Hopf-type amplifier for a large range of parameters. The natural frequency and the peak amplitude are preserved for every set of parameters. The Neimark-Sacker bifurcation based systems avoid large computational effort that would be caused by applying higher order integration methods to the continuous-time normalform equations.
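    The discretization effect described above can be observed with the simplest explicit integrator. Applying the explicit Euler method to the truncated Andronov-Hopf normal form yields a discrete-time map of Neimark-Sacker type whose limit-cycle amplitude is shifted from the continuous-time value by a step-size-dependent amount. The parameter values below are illustrative, not taken from the paper:

    ```python
    # Truncated Andronov-Hopf normal form: dz/dt = (mu + i*omega)*z - |z|^2 * z.
    # Explicit Euler with step h gives the Neimark-Sacker-type map
    #   z[k+1] = (1 + h*(mu + i*omega)) * z[k] - h*|z[k]|^2 * z[k]
    mu, omega, h = 0.1, 2.0, 0.01

    def euler_step(z):
        return z + h * ((mu + 1j * omega) * z - abs(z) ** 2 * z)

    z = 0.01 + 0j            # small initial perturbation
    for _ in range(20000):   # iterate the discrete map to its invariant circle
        z = euler_step(z)

    # The continuous-time limit cycle has amplitude sqrt(mu); the discrete
    # map settles on a slightly larger radius, offset by O(h).
    amp = abs(z)
    ```

    Solving |1 + h(mu - r^2) + i h omega| = 1 for the invariant radius gives r^2 ≈ mu + h omega^2 / 2, which makes the O(h) amplitude offset of the first-order method explicit; higher-order Runge-Kutta methods shrink this offset at the cost of the extra computation the paper seeks to avoid.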

  8. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  9. In-depth quantitative analysis of the microstructures produced by Surface Mechanical Attrition Treatment (SMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samih, Y., E-mail: youssef.samih@univ-lorraine.fr; Université de Lorraine, Laboratory of Excellence on Design of Alloy Metals for low-mAss Structures; Beausir, B.

    2013-09-15

    Electron BackScattered Diffraction (EBSD) maps are used to characterize quantitatively the graded microstructure formed by Surface Mechanical Attrition Treatment (SMAT), applied here to 316L stainless steel. In particular, the analysis of GNDs – coupled with relevant and reliable criteria – was used to depict the thickness of each zone identified in the SMAT-affected layers: (i) the “ultrafine grain” (UFG) zone present at the extreme top surface, (ii) the “transition zone” where grains were fragmented under the heavy plastic deformation and, finally, (iii) the “deformed zone” where initial grains are simply deformed. The interest of this procedure is illustrated through the comparative analysis of the effect of some SMAT processing parameters (amplitude of vibration and treatment duration). The UFG and transition zones are more significantly modified than the overall affected thickness under our tested conditions. - Highlights: • EBSD maps are used to characterize quantitatively the microstructure of SMAT-treated samples. • Calculation of the GND density to quantify strain gradients. • A new method to depict the different zone thicknesses in the SMAT-affected layer. • Effects of SMAT processing parameters on the surface microstructure evolution.

  10. Raman spectroscopic analysis of gunshot residue offering great potential for caliber differentiation.

    PubMed

    Bueno, Justin; Sikirzhytski, Vitali; Lednev, Igor K

    2012-05-15

    Near-infrared (NIR) Raman microspectroscopy combined with advanced statistics was used to differentiate gunshot residue (GSR) particles originating from different caliber ammunition. The firearm discharge process is analogous to a complex chemical reaction. The reagents of this process are represented by the chemical composition of the ammunition, firearm, and cartridge case. The specific firearm parameters determine the conditions of the reaction and thus the subsequent product, GSR. We found that Raman spectra collected from these products are characteristic for different caliber ammunition. GSR particles from 9 mm and 0.38 caliber ammunition, collected under identical discharge conditions, were used to demonstrate the capability of confocal Raman microspectroscopy for the discrimination and identification of GSR particles. The caliber differentiation algorithm is based on support vector machines (SVM) and partial least squares (PLS) discriminant analyses, validated by a leave-one-out cross-validation method. This study demonstrates for the first time that NIR Raman microspectroscopy has the potential for the reagentless differentiation of GSR based upon forensically relevant parameters, such as caliber size. When fully developed, this method should have a significant impact on the efficiency of crime scene investigations.
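    The leave-one-out cross-validation used to validate the discrimination can be sketched as follows. A nearest-centroid classifier stands in for the SVM/PLS-DA models of the paper, and the three-component "spectra" are synthetic, so everything here is illustrative:

    ```python
    def nearest_centroid_predict(train, test_x):
        # Compute one mean vector (centroid) per class, then assign the
        # held-out spectrum to the class with the closest centroid.
        groups = {}
        for x, label in train:
            groups.setdefault(label, []).append(x)
        best, best_d = None, float("inf")
        for label, xs in groups.items():
            centroid = [sum(v) / len(xs) for v in zip(*xs)]
            d = sum((a - b) ** 2 for a, b in zip(test_x, centroid))
            if d < best_d:
                best, best_d = label, d
        return best

    def loocv_accuracy(data):
        # Leave-one-out: hold out each sample once, train on the rest,
        # and score the held-out prediction.
        hits = 0
        for i, (x, label) in enumerate(data):
            train = data[:i] + data[i + 1:]
            hits += nearest_centroid_predict(train, x) == label
        return hits / len(data)

    # Synthetic 3-point feature vectors for two calibers (illustrative).
    data = [((1.0, 0.2, 0.10), "9mm"), ((0.9, 0.3, 0.20), "9mm"),
            ((1.1, 0.1, 0.15), "9mm"), ((0.2, 1.0, 0.80), ".38"),
            ((0.3, 0.9, 0.90), ".38"), ((0.1, 1.1, 0.70), ".38")]
    acc = loocv_accuracy(data)
    ```

    Leave-one-out is attractive for forensic data sets of this size (on the order of tens of particles) because every sample serves once as an unseen test case while the training set stays as large as possible.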

  11. Predictions from a flavour GUT model combined with a SUSY breaking sector

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ4R symmetry. We calculate the soft terms, identify the relevant high-scale input parameters, and investigate the resulting predictions for the low-scale observables, such as flavour-violating processes, the sparticle spectrum and the dark matter relic density.

  12. Rapid Solidification in Bulk Ti-Nb Alloys by Single-Track Laser Melting

    NASA Astrophysics Data System (ADS)

    Roehling, John D.; Perron, Aurélien; Fattebert, Jean-Luc; Haxhimali, Tomorr; Guss, Gabe; Li, Tian T.; Bober, David; Stokes, Adam W.; Clarke, Amy J.; Turchi, Patrice E. A.; Matthews, Manyalibo J.; McKeown, Joseph T.

    2018-05-01

    Single-track laser melting experiments were performed on bulk Ti-Nb alloys to explore process parameters and the resultant macroscopic structure and microstructure. The microstructures in Ti-20Nb and Ti-50Nb (at.%) alloys exhibited cellular growth during rapid solidification, with average cell size of approximately 0.5 µm. Solidification velocities during cellular growth were calculated from images of melt tracks. Measurements of the composition in the cellular and intercellular regions revealed nonequilibrium partitioning and its dependence on velocity during rapid solidification. Experimental results were used to benchmark a phase-field model to describe rapid solidification under conditions relevant to additive manufacturing.

  13. Nuclear physics uncertainties of the astrophysical γ-process studied through the 64Zn(p,α)61Cu and 64Zn(p,γ)65Ga reactions

    NASA Astrophysics Data System (ADS)

    Gyürky, Gy.; Fülöp, Zs.; Halász, Z.; Kiss, G. G.; Szücs, T.

    2018-01-01

    In a recent work, the cross section measurement of the 64Zn(p,α)61Cu reaction was used to prove that the standard α-nucleus optical potentials used in astrophysical network calculations fail to reproduce the experimental data at energies relevant for heavy element nucleosynthesis. In the present paper the analysis of the obtained experimental data is continued by comparing the results with predictions using different parameters. It is shown that the recently suggested modification of the standard optical potential leads to a better description of the data.

  14. Electronic dendrometer

    DOEpatents

    Sauer, deceased, Ronald H.; Beedlow, Peter A.

    1985-01-01

    Disclosed is a dendrometer for use on soft stemmed herbaceous plants. The dendrometer uses elongated jaws to engage the plant stem securely but without appreciable distortion or collapse of the stem. A transducer made of flexible, noncorrodible and temperature stable material spans between the jaws which engage the plant stem. Strain gauges are attached at appropriate locations on a transducer member and are connected to a voltage source and voltmeter to monitor changes in plant stem size. A microprocessor can be used to integrate the plant stem size information with other relevant environmental parameters and the data can be recorded on magnetic tape or used in other data processing equipment.

  15. Review of simulation techniques for Aquifer Thermal Energy Storage (ATES)

    NASA Astrophysics Data System (ADS)

    Mercer, J. W.; Faust, C. R.; Miller, W. J.; Pearson, F. J., Jr.

    1981-03-01

    The analysis of aquifer thermal energy storage (ATES) systems relies on results from mathematical and geochemical models. Therefore, the state-of-the-art models relevant to ATES were reviewed and evaluated. These models describe important processes active in ATES, including ground-water flow, heat transport (heat flow), solute transport (movement of contaminants), and geochemical reactions. In general, available models of the saturated ground-water environment are adequate to address most concerns associated with ATES, that is, design, operation, and environmental assessment. In those cases where models are not adequate, development should be preceded by efforts to identify the significant physical phenomena and relate model parameters to measurable quantities.

  16. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Experimental results show that fusing both signals yields three times better accuracy than using the current or vibration signals individually.

  17. Caenorhabditis elegans - A model system for space biology studies

    NASA Technical Reports Server (NTRS)

    Johnson, Thomas E.; Nelson, Gregory A.

    1991-01-01

    The utility of the nematode Caenorhabditis elegans in studies spanning aspects of development, aging, and radiobiology is reviewed. These topics are interrelated via cellular and DNA repair processes especially in the context of oxidative stress and free-radical metabolism. The relevance of these research topics to problems in space biology is discussed and properties of the space environment are outlined. Exposure to the space-flight environment can induce rapid changes in living systems that are similar to changes occurring during aging; manipulation of these environmental parameters may represent an experimental strategy for studies of development and senescence. The current and future opportunities for such space-flight experimentation are presented.

  18. Lattice QCD Studies of Transverse Momentum-Dependent Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Engelhardt, M.; Musch, B.; Hägler, P.; Negele, J.; Schäfer, A.

    2015-09-01

    Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped gauge link. Such a definition opens the possibility of evaluating TMDs within lattice QCD. By parametrizing the aforementioned matrix elements in terms of invariant amplitudes, the problem can be cast in a Lorentz frame suited for the lattice calculation. Results for selected TMD observables are presented, including a particular focus on their dependence on a Collins-Soper-type evolution parameter, which quantifies proximity of the staple-shaped gauge links to the light cone.

  19. Flavor Oscillations in the Supernova Hot Bubble Region: Nonlinear Effects of Neutrino Background

    NASA Astrophysics Data System (ADS)

    Pastor, Sergio; Raffelt, Georg

    2002-10-01

    The neutrino flux close to a supernova core contributes substantially to neutrino refraction so that flavor oscillations become a nonlinear phenomenon. One unexpected consequence is efficient flavor transformation for antineutrinos in a region where only neutrinos encounter a Mikheyev-Smirnov-Wolfenstein resonance or vice versa. Contrary to previous studies we find that in the neutrino-driven wind the electron fraction Ye always stays below 0.5, corresponding to a neutron-rich environment as required by r-process nucleosynthesis. The relevant range of masses and mixing angles includes the region indicated by LSND, but not the atmospheric or solar oscillation parameters.

  20. Flavor oscillations in the supernova hot bubble region: nonlinear effects of neutrino background.

    PubMed

    Pastor, Sergio; Raffelt, Georg

    2002-11-04

    The neutrino flux close to a supernova core contributes substantially to neutrino refraction so that flavor oscillations become a nonlinear phenomenon. One unexpected consequence is efficient flavor transformation for antineutrinos in a region where only neutrinos encounter a Mikheyev-Smirnov-Wolfenstein resonance or vice versa. Contrary to previous studies we find that in the neutrino-driven wind the electron fraction Y(e) always stays below 0.5, corresponding to a neutron-rich environment as required by r-process nucleosynthesis. The relevant range of masses and mixing angles includes the region indicated by LSND, but not the atmospheric or solar oscillation parameters.

  1. Quantum Memristors with Superconducting Circuits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salmilehto, J.; Deppe, F.; Di Ventra, M.

    Memristors are resistive elements retaining information of their past dynamics. They have garnered substantial interest due to their potential for representing a paradigm change in electronics, information processing and unconventional computing. Given the advent of quantum technologies, a design for a quantum memristor with superconducting circuits may be envisaged. Along these lines, we introduce such a quantum device whose memristive behavior arises from quasiparticle-induced tunneling when supercurrents are cancelled. For realistic parameters, we find that the relevant hysteretic behavior may be observed using current state-of-the-art measurements of the phase-driven tunneling current. Finally, we develop suitable methods to quantify memory retention in the system.

  2. Quantum Memristors with Superconducting Circuits

    PubMed Central

    Salmilehto, J.; Deppe, F.; Di Ventra, M.; Sanz, M.; Solano, E.

    2017-01-01

    Memristors are resistive elements retaining information of their past dynamics. They have garnered substantial interest due to their potential for representing a paradigm change in electronics, information processing and unconventional computing. Given the advent of quantum technologies, a design for a quantum memristor with superconducting circuits may be envisaged. Along these lines, we introduce such a quantum device whose memristive behavior arises from quasiparticle-induced tunneling when supercurrents are cancelled. For realistic parameters, we find that the relevant hysteretic behavior may be observed using current state-of-the-art measurements of the phase-driven tunneling current. Finally, we develop suitable methods to quantify memory retention in the system. PMID:28195193

  3. Quantum feedback cooling of a mechanical oscillator using variational measurements: tweaking Heisenberg’s microscope

    NASA Astrophysics Data System (ADS)

    Habibi, Hojat; Zeuthen, Emil; Ghanaatshoar, Majid; Hammerer, Klemens

    2016-08-01

    We revisit the problem of preparing a mechanical oscillator in the vicinity of its quantum-mechanical ground state by means of feedback cooling based on continuous optical detection of the oscillator position. In the parameter regime relevant to ground-state cooling, the optical back-action and imprecision noise set the bottleneck of achievable cooling and must be carefully balanced. This can be achieved by adapting the phase of the local oscillator in the homodyne detection, realizing a so-called variational measurement. The trade-off between accurate position measurement and minimal disturbance can be understood in terms of Heisenberg’s microscope and becomes particularly relevant when the measurement and feedback processes are fast compared with the quantum coherence time of the system to be cooled. This corresponds to the regime of large quantum cooperativity {C}{{q}}≳ 1, which was achieved in recent experiments on feedback cooling. Our method provides a simple path to further pushing the limits of current state-of-the-art experiments in quantum optomechanics.

  4. Multi-sensory integration in a small brain

    NASA Astrophysics Data System (ADS)

    Gepner, Ruben; Wolk, Jason; Gershow, Marc

    Understanding how fluctuating multi-sensory stimuli are integrated and transformed in neural circuits has proved a difficult task. To address this question, we study the sensorimotor transformations happening in the brain of the Drosophila larva, a tractable model system with about 10,000 neurons. Using genetic tools that allow us to manipulate the activity of individual brain cells through the larva's transparent body, we observe the stochastic decisions made by freely-behaving animals as their visual and olfactory environments fluctuate independently. We then use simple linear-nonlinear models to correlate outputs with relevant features in the inputs, and adaptive filtering processes to track changes in these relevant parameters used by the larva's brain to make decisions. We show how these techniques allow us to probe how statistics of stimuli from different sensory modalities combine to affect behavior, and can potentially guide our understanding of how neural circuits are anatomically and functionally integrated. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
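    The linear-nonlinear modelling step can be sketched with synthetic data: a hypothetical temporal filter and a sigmoid nonlinearity generate stochastic "turn" decisions from a white-noise stimulus, and reverse correlation (the event-triggered average) recovers the filter. Every signal and parameter below is an illustrative assumption, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 200_000, 20
stim = rng.standard_normal(T)                       # white-noise sensory input
true_k = np.exp(-np.arange(L) / 5.0)
true_k /= np.linalg.norm(true_k)                    # hypothetical temporal filter

# Linear stage: filter the stimulus; nonlinear stage: sigmoid -> turn probability
drive = np.convolve(stim, true_k, mode="full")[:T]
p_turn = 1.0 / (1.0 + np.exp(-(2.0 * drive - 1.0)))
turns = rng.random(T) < p_turn                      # stochastic behavioral decisions

# Reverse correlation: for Gaussian white noise, the event-triggered
# average is proportional to the linear filter
idx = np.nonzero(turns)[0]
idx = idx[idx >= L - 1]
sta = np.mean([stim[i - L + 1:i + 1][::-1] for i in idx], axis=0)
sta /= np.linalg.norm(sta)
print(round(float(sta @ true_k), 2))                # close to 1.0 for large T
```

    The same recipe extends to two input channels (visual and olfactory) by fitting one filter per modality, which is how multi-sensory combination rules can be read off the data.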

  5. Cloud forming potential of oligomers relevant to secondary organic aerosols

    NASA Astrophysics Data System (ADS)

    Xu, Wen; Guo, Song; Gomez-Hernandez, Mario; Zamora, Misti L.; Secrest, Jeremiah; Marrero-Ortiz, Wilmarie; Zhang, Annie L.; Collins, Don R.; Zhang, Renyi

    2014-09-01

    The hygroscopic growth factor (HGF) and cloud condensation nuclei (CCN) activity are measured for surrogates that mimic atmospherically relevant oligomers, including glyoxal trimer dihydrate, methyl glyoxal trimer dihydrate, sucrose, methyl glyoxal mixtures with sulfuric acid and glycolic acid, and 2,4-hexadienal mixtures with sulfuric acid and glycolic acid. For the single-component aerosols, the measured HGF ranges from 1.3 to 1.4 at a relative humidity of 90%, and the hygroscopicity parameter (κ) is in the range of 0.06 to 0.19 on the basis of the measured CCN activity and 0.13 to 0.22 on the basis of the measured HGF, compared to the calculated values of 0.08 to 0.16. Large differences exist in the κ values derived using the measured HGF and CCN data for the multi-component aerosols. Our results reveal that, in contrast to the oxidation process, oligomerization decreases particle hygroscopicity and CCN activity, and they provide guidance for analyzing the organic species in ambient aerosols.
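    The conversion from a subsaturated growth factor to κ quoted above follows the standard single-parameter κ-Köhler relation (Petters and Kreidenweis); a minimal sketch, neglecting the Kelvin term and taking water activity equal to RH, reproduces the paper's HGF-derived range:

```python
def kappa_from_hgf(g, rh):
    """Hygroscopicity parameter from the diameter growth factor g at
    relative humidity rh (kappa-Koehler, Kelvin term neglected)."""
    aw = rh  # water activity ~ RH under subsaturated conditions
    return (g ** 3 - 1.0) * (1.0 - aw) / aw

# HGF of 1.3-1.4 at RH = 90% brackets the reported kappa range 0.13-0.19
print(round(kappa_from_hgf(1.3, 0.9), 2))   # -> 0.13
print(round(kappa_from_hgf(1.4, 0.9), 2))   # -> 0.19
```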

  6. Environmental Factors Affecting Microbiota Dynamics during Traditional Solid-state Fermentation of Chinese Daqu Starter

    PubMed Central

    Li, Pan; Lin, Weifeng; Liu, Xiong; Wang, Xiaowen; Luo, Lixin

    2016-01-01

    In this study, we investigated the microbiota dynamics during two industrial-scale traditional solid-state fermentation (SSF) processes of Daqu starters. Similar evolution profiles of environmental parameters, enzymatic activities, microbial amounts, and communities were observed during the medium-temperature SSF (MTSSF) and low-temperature SSF (LTSSF) processes. The orders Rickettsiales and Streptophyta dominated only the initial 2 days, and Eurotiales predominated only from days 10 to 24, whereas phylotypes of Enterobacteriales, Lactobacillales, Bacillales, Saccharomycetales, and Mucorales prevailed throughout both the MTSSF and LTSSF processes. Nevertheless, the pH in the MTSSF process on day 5 was 5.28, significantly higher (P < 0.05) than in the LTSSF process (4.87). The glucoamylase activities in the MTSSF process dropped from 902.71 to 394.33 mg glucose g-1 h-1 between days 5 and 24, while those in the LTSSF process were significantly lower (P < 0.05) and decreased from 512.25 to 268.69 mg glucose g-1 h-1. The relative abundances of Enterobacteriales and Lactobacillales ranged from 10.30 to 71.73% and 2.34 to 16.68% in the MTSSF process, and from 3.16 to 41.06% and 8.43 to 57.39% in the LTSSF process, respectively. The relative abundance of Eurotiales in the MTSSF process decreased from 36.10 to 28.63% over days 10 to 24, while in the LTSSF process it was markedly higher and increased from 52.00 to 72.97%. Furthermore, the MTSSF process exhibited lower bacterial richness but higher fungal richness than the LTSSF process, with marked differences in bacterial communities but high similarity in fungal communities. Canonical correspondence analysis revealed that a microbial structure transition occurred at the thermophilic stages under the environmental stresses of moisture, pH, acidity, and pile temperature. This deeper understanding may help to effectively control the traditional Daqu SSF process by adjusting the relevant environmental parameters. PMID:27540378

  7. Precision laser processing for micro electronics and fiber optic manufacturing

    NASA Astrophysics Data System (ADS)

    Webb, Andrew; Osborne, Mike; Foster-Turner, Gideon; Dinkel, Duane W.

    2008-02-01

    The application of laser-based materials processing for precision micro-scale manufacturing in the electronics and fiber optic industry is becoming increasingly widespread and accepted. This presentation will review the latest laser technologies available and discuss the issues to be considered in choosing the most appropriate laser and processing parameters. High repetition rate, short duration pulsed lasers have improved rapidly in recent years in terms of both performance and reliability, enabling flexible, cost effective processing of many material types including metal, silicon, plastic, ceramic and glass. Demonstrating the relevance of laser micromachining, application examples where laser processing is in use for production will be presented, including the miniaturization of surface mount capacitors by applying a laser technique for demetalization of tracks in the capacitor manufacturing process, and high quality laser machining of fiber optics including stripping, cleaving and lensing, resulting in optical quality finishes without the need for traditional polishing. Applications include telecoms, biomedical and sensing. OpTek Systems was formed in 2000 and provides fully integrated systems and subcontract services for laser processes. The company is headquartered in the UK and is establishing a presence in North America through a laser processing facility in South Carolina and a sales office in the North East.

  8. Synchronisation, acquisition and tracking for telemetry and data reception

    NASA Astrophysics Data System (ADS)

    Vandoninck, A.

    1992-06-01

    The important parameters of synchronization, acquisition, and tracking are addressed, and each function is highlighted separately. The functions are presented in the sequence in which they occur in the system over time and for the type of data to be received, distinguishing between telemetry and data reception, between direct carrier modulation and the use of a subcarrier, and between deep-space and normal reception. For telemetry reception, acquisition is described taking into account the differences in performance between geostationary and polar orbits, and the dependencies on the different Doppler offsets and rates are distinguished. The related functions and parameters are covered and the specifications of an average receiver are summarized. The synchronization of the valid data is described, with a distinction according to whether the data are modulated directly or via a subcarrier, the type of modulation, and the bit rate. The relevant functions and parameters of the average receiver/demodulator are summarized. The tracking of the signal during the operational phase is described and the relevant parameters of an actual system are presented. The reception of real data is then treated, applying the sequence of acquisition, synchronization, and tracking; here, higher bit rates and direct modulation schemes play an important role. The market equipment and its relevant parameters are discussed. Finally, the three functions are covered for cases where deep-space reception is needed, explaining the functions of a high-performance receiver/demodulator and how acquisition, synchronization, and tracking are handled in such an application.

  9. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
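    The variance-based decomposition mentioned above can be illustrated with a pick-and-freeze Monte Carlo estimator of first-order Sobol indices. The Ishigami function here is a conventional stand-in test model with known analytic sensitivities, not one of the paper's land-model QoIs:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard sensitivity-analysis test function with known analytic indices
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(1)
n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))      # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

# Pick-and-freeze estimator for the first-order index S_i (Saltelli et al. 2010):
# AB_i is matrix A with its i-th column swapped in from B
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(float(np.mean(fB * (ishigami(ABi) - fA)) / var))

print([round(s, 2) for s in S])   # analytic values: 0.31, 0.44, 0.00
```

    A parameter with a small first-order index but a large total-order index would signal exactly the kind of interaction-dominated, nonlinear dependence that motivates the classifier-plus-surrogate workflow.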

  10. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks.

    PubMed

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei; Song, Houbing

    2017-07-12

    Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, hence the study in this paper centers on this topic and three relevant methods and models are put forward. Firstly, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Then, most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. Besides, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs has been presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in networking and performance calculation of UWSNs.

  11. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks

    PubMed Central

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei

    2017-01-01

    Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, hence the study in this paper centers on this topic and three relevant methods and models are put forward. Firstly, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Then, most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. Besides, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs has been presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in networking and performance calculation of UWSNs. PMID:28704959

  12. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
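    As a related, cheaper screening approach for exactly this kind of question (which single parameters matter, which act through interactions, which can be dropped), here is a minimal Morris elementary-effects sketch. The toy model, its coefficients, and the inert fourth parameter are invented stand-ins for a black-box simulator, not the paper's CPM:

```python
import numpy as np

def model(x):
    # Toy stand-in for the simulator: x0 dominant, x1 moderate,
    # x2 acts only through an interaction with x0, x3 inert
    return 3.0 * x[0] + 1.5 * x[1] + 2.0 * x[0] * x[2] + 0.0 * x[3]

rng = np.random.default_rng(2)
d, r, delta = 4, 200, 0.25
ee = [[] for _ in range(d)]
for _ in range(r):                        # r random one-at-a-time trajectories
    x = rng.uniform(0.0, 1.0 - delta, d)
    y = model(x)
    for i in rng.permutation(d):          # perturb each parameter once
        xp = x.copy()
        xp[i] += delta
        yp = model(xp)
        ee[i].append((yp - y) / delta)    # elementary effect of parameter i
        x, y = xp, yp

mu_star = [float(np.mean(np.abs(e))) for e in ee]   # overall influence
sigma = [float(np.std(e)) for e in ee]              # nonlinearity/interactions
print([round(m, 2) for m in mu_star], [round(s, 2) for s in sigma])
```

    A near-zero mu_star flags a parameter as a candidate for model reduction, while a large sigma relative to mu_star flags interaction effects of the kind the abstract highlights.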

  13. Some relevant parameters for assessing fire hazards of combustible mine materials using laboratory scale experiments

    PubMed Central

    Litton, Charles D.; Perera, Inoka E.; Harteis, Samuel P.; Teacoach, Kara A.; DeRosa, Maria I.; Thomas, Richard A.; Smith, Alex C.

    2018-01-01

    When combustible materials ignite and burn, the potential for fire growth and flame spread represents an obvious hazard, but during these processes of ignition and flaming, other life hazards present themselves and should be included to ensure an effective overall analysis of the relevant fire hazards. In particular, the gases and smoke produced both during the smoldering stages of fires leading to ignition and during the advanced flaming stages of a developing fire serve to contaminate the surrounding atmosphere, potentially producing elevated levels of toxicity and high levels of smoke obscuration that render the environment untenable. In underground mines, these hazards may be exacerbated by the existing forced ventilation that can carry the gases and smoke to locations far-removed from the fire location. Clearly, materials that require high temperatures (above 1400 K) and that exhibit low mass loss during thermal decomposition, or that require high heat fluxes or heat transfer rates to ignite represent less of a hazard than materials that decompose at low temperatures or ignite at low levels of heat flux. In order to define and quantify some possible parameters that can be used to assess these hazards, small-scale laboratory experiments were conducted in a number of configurations to measure: 1) the toxic gases and smoke produced both during non-flaming and flaming combustion; 2) mass loss rates as a function of temperature to determine ease of thermal decomposition; and 3) mass loss rates and times to ignition as a function of incident heat flux. This paper describes the experiments that were conducted, their results, and the development of a set of parameters that could possibly be used to assess the overall fire hazard of combustible materials using small scale laboratory experiments. PMID:29599565
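    The third measurement listed above, time to ignition versus incident heat flux, is commonly reduced to a single hazard parameter via the thermally-thick scaling 1/√t_ig ∝ (q − q_crit), whose intercept yields the critical heat flux for ignition. A sketch with synthetic data (all numbers illustrative, not the paper's measurements):

```python
import numpy as np

# Synthetic ignition data: thermally thick materials follow
#   1/sqrt(t_ig) = C * (q - q_crit)   for fluxes q above the critical flux
q = np.array([20.0, 30.0, 40.0, 50.0, 60.0])     # incident heat flux, kW/m^2
C_true, q_crit_true = 0.004, 10.0                # illustrative values only
t_ig = (C_true * (q - q_crit_true)) ** -2.0      # time to ignition, s

# A linear fit of 1/sqrt(t_ig) against q recovers C (slope) and q_crit
slope, intercept = np.polyfit(q, 1.0 / np.sqrt(t_ig), 1)
q_crit = -intercept / slope
print(round(float(q_crit), 1))   # -> 10.0
```

    A material with a low recovered q_crit ignites at low heat flux and would rank as the greater hazard under the criteria discussed in the abstract.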

  14. Some relevant parameters for assessing fire hazards of combustible mine materials using laboratory scale experiments.

    PubMed

    Litton, Charles D; Perera, Inoka E; Harteis, Samuel P; Teacoach, Kara A; DeRosa, Maria I; Thomas, Richard A; Smith, Alex C

    2018-04-15

    When combustible materials ignite and burn, the potential for fire growth and flame spread represents an obvious hazard, but during these processes of ignition and flaming, other life hazards present themselves and should be included to ensure an effective overall analysis of the relevant fire hazards. In particular, the gases and smoke produced both during the smoldering stages of fires leading to ignition and during the advanced flaming stages of a developing fire serve to contaminate the surrounding atmosphere, potentially producing elevated levels of toxicity and high levels of smoke obscuration that render the environment untenable. In underground mines, these hazards may be exacerbated by the existing forced ventilation that can carry the gases and smoke to locations far-removed from the fire location. Clearly, materials that require high temperatures (above 1400 K) and that exhibit low mass loss during thermal decomposition, or that require high heat fluxes or heat transfer rates to ignite represent less of a hazard than materials that decompose at low temperatures or ignite at low levels of heat flux. In order to define and quantify some possible parameters that can be used to assess these hazards, small-scale laboratory experiments were conducted in a number of configurations to measure: 1) the toxic gases and smoke produced both during non-flaming and flaming combustion; 2) mass loss rates as a function of temperature to determine ease of thermal decomposition; and 3) mass loss rates and times to ignition as a function of incident heat flux. This paper describes the experiments that were conducted, their results, and the development of a set of parameters that could possibly be used to assess the overall fire hazard of combustible materials using small scale laboratory experiments.

  15. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and the Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation.
The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community.

  16. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    1999-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community.
Detailed information about the EOS Terra Validation Program can be found on the EOS Validation Program homepage (http://ospso.gsfc.nasa.gov/validation/valpage.html).

  17. The typological approach to submarine groundwater discharge (SGD)

    USGS Publications Warehouse

    Bokuniewicz, H.; Buddemeier, R.; Maxwell, B.; Smith, C.

    2003-01-01

    Coastal zone managers need to factor submarine groundwater discharge (SGD) into their integrated management. SGD provides a pathway for the transfer of freshwater, and its dissolved chemical burden, from the land to the coastal ocean. SGD reduces salinities and provides nutrients to specialized coastal habitats. It also can be a pollutant source, often undetected, causing eutrophication and triggering nuisance algal blooms. Despite its importance, SGD remains somewhat of a mystery in most places because it is usually unseen and difficult to measure. SGD has been directly measured at only about a hundred sites worldwide. A typology generated by the Land-Ocean Interaction in the Coastal Zone (LOICZ) Project is one of the few tools globally available to coastal resource managers for identifying areas in their jurisdiction where SGD may be a confounding process. (LOICZ is a core project of the International Geosphere/Biosphere Programme.) From the hundreds of globally distributed parameters in the LOICZ typology, a subset of parameters potentially relevant to SGD may be culled. A quantitative combination of the relevant hydrological parameters can serve as a proxy for the SGD conditions not directly measured. Web-LOICZ View, a geospatial software tool, then provides an automated approach to clustering these data into groups of locations that have similar characteristics. It permits selection of variables, of the number of clusters desired, and of the clustering criteria, and provides means of testing predictive results against independent variables. Information on the occurrence of a variety of SGD indicators can then be incorporated into regional clustering analysis. With such tools, coastal managers can focus attention on the most likely sites of SGD in their jurisdiction and design the necessary measurement and modeling programs needed for integrated management.
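    The clustering step described above can be sketched with a hand-rolled k-means (Lloyd's algorithm) on standardized hydrological proxies. The two variables, the two regimes, and the cluster count are illustrative assumptions, not LOICZ data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical coastal segments described by two standardized hydrological
# proxies (say, recharge and hydraulic gradient), drawn from two artificial
# regimes purely for illustration
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(2.0, 0.3, (50, 2))])

def kmeans2(X, iters=50):
    # Two-cluster Lloyd's algorithm, seeded with the two most distant points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    centers = X[[i, j]]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) for c in range(2)])
    return labels, centers

labels, centers = kmeans2(X)
print(sorted(np.bincount(labels, minlength=2).tolist()))   # the two regimes recovered
```

    In practice the number of clusters and the weighting of variables are the manager-facing choices the typology software exposes; cluster membership then flags locations with similar expected SGD behavior.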

  18. Competing Uses of Underground Systems Related to Energy Supply: Applying Single- and Multiphase Simulations for Site Characterization and Risk-Analysis

    NASA Astrophysics Data System (ADS)

    Kissinger, A.; Walter, L.; Darcis, M.; Flemisch, B.; Class, H.

    2012-04-01

    Global climate change, shortage of resources and the resulting turn towards renewable sources of energy lead to a growing demand for the utilization of subsurface systems. Among these competing uses are Carbon Capture and Storage (CCS), geothermal energy, nuclear waste disposal, "renewable" methane or hydrogen storage as well as the ongoing production of fossil resources like oil, gas, and coal. Besides competing among themselves, these technologies may also create conflicts with essential public interests like water supply. For example, the injection of CO2 into the underground causes an increase in pressure reaching far beyond the actual radius of influence of the CO2 plume, potentially leading to large amounts of displaced salt water. Finding suitable sites is a demanding task for several reasons. Natural systems, as opposed to technical systems, are always characterized by heterogeneity. Therefore, parameter uncertainty impedes reliable predictions of the capacity and safety of a site. State-of-the-art numerical simulations combined with stochastic approaches need to be used to obtain a more reliable assessment of the involved risks and the radii of influence of the different processes. These simulations may include the modeling of single- and multiphase non-isothermal flow, geo-chemical and geo-mechanical processes in order to describe all relevant physical processes adequately. Stochastic approaches aim to estimate a bandwidth of the key output parameters based on uncertain input parameters. Risks for these different underground uses can then be made comparable with each other. Together with the importance and the urgency of the competing uses, this may provide a sounder basis for decisions. Communicating risks to stakeholders and a concerned public is crucial for the success of finding a suitable site for CCS (or other subsurface utilization).
We present and discuss first steps towards an approach for addressing the issue of competitive utilization of the subsurface and the required process of communication between scientists, engineers, policy makers, and societies.
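    The stochastic "bandwidth" idea can be sketched as plain Monte Carlo: propagate an uncertain input through a cheap physical proxy and report a central interval of the key output. The steady-state radial-flow formula stands in for the full multiphase simulator, and every number below is an illustrative assumption, not site data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy proxy for injection-induced overpressure: steady-state radial Darcy flow,
#   dp = Q * mu * ln(re / rw) / (2 * pi * k * H)
Q, mu, re, rw, H = 0.05, 5e-4, 5000.0, 0.1, 30.0   # m^3/s, Pa*s, m, m, m

# Uncertain input: log-normally distributed permeability k [m^2]
k = 10.0 ** rng.normal(-13.0, 0.3, 100_000)
dp = Q * mu * np.log(re / rw) / (2.0 * np.pi * k * H) / 1e6   # MPa

# "Bandwidth" of the key output parameter: central 90% interval
lo, hi = np.percentile(dp, [5.0, 95.0])
print(round(float(lo), 1), round(float(hi), 1))
```

    A factor-of-two uncertainty in permeability translates into a roughly tenfold spread in predicted overpressure here, which is the kind of comparable risk band the abstract argues should inform site decisions.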

  19. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues at the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping of the impact test data. Isolated unconfined impact tests at average strain rates ranging from 136 s(-1) to 262 s(-1) are performed on muscle tissues. Passive muscle tissue is modelled as an isotropic, linear viscoelastic material using the three-element Zener model available in the PAMCRASH(TM) explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s(-1), 183 s(-1) and 262 s(-1) for modelling muscles. The optimal parameters extracted in this study are comparable with parameters reported in the literature. The bulk modulus and short-term shear modulus are found to be more influential than the long-term shear modulus in predicting the stress-strain response at the considered strain rates. Variations within the sets of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain-rate dependency of the passive muscle response with a single set of material parameters over a wide range of strain rates.
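
    A minimal sketch of the GA-based inverse identification loop, with a toy algebraic forward model standing in for the PAMCRASH finite element solver; the model form, parameter values, and search ranges are all illustrative assumptions, not the study's.

```python
import random

random.seed(1)

# Toy stand-in for the finite-element forward model: predicts impact force
# from three viscoelastic parameters (bulk modulus K, short- and long-term
# shear moduli G0, Ginf). Purely illustrative; a real study calls an FE solver.
def forward_model(params, strain_rate):
    K, G0, Ginf = params
    return 0.01 * K + 0.001 * G0 * strain_rate + 0.1 * Ginf

TRUE = (2000.0, 500.0, 50.0)                        # "unknown" target parameters
RATES = [136.0, 183.0, 262.0]                       # strain rates from the study
MEASURED = [forward_model(TRUE, r) for r in RATES]  # synthetic "experiment"

def fitness(params):
    # Sum of squared differences between predicted and measured forces;
    # lower is better (the paper compares FE forces with experimental forces).
    return sum((forward_model(params, r) - m) ** 2
               for r, m in zip(RATES, MEASURED))

def random_individual():
    return (random.uniform(1000, 3000),
            random.uniform(100, 1000),
            random.uniform(10, 100))

def mutate(ind, scale=0.05):
    return tuple(g * (1 + random.gauss(0, scale)) for g in ind)

history = []
pop = [random_individual() for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    history.append(fitness(pop[0]))
    # Elitist generation: keep the best 20, refill by mutating elite members.
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]

best = min(pop, key=fitness)
print("best-fit parameters:", [round(g, 1) for g in best])
```

    The elitist strategy guarantees the best fitness never worsens from one generation to the next, which is convenient when each fitness evaluation is an expensive FE run.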

  20. Kinetics and thermodynamics of oxidation mediated reaction in L-cysteine and its methyl and ethyl esters in dimethyl sulfoxide-d6 by NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Dougherty, Ryan J.; Singh, Jaideep; Krishnan, V. V.

    2017-03-01

    L-Cysteine (L-Cys), L-Cysteine methyl ester (L-CysME) and L-Cysteine ethyl ester (L-CysEE), when dissolved in dimethyl sulfoxide, undergo an oxidation process. This process is slow enough that the resulting nuclear magnetic resonance (NMR) spectral changes can be monitored in real time. The oxidation-mediated transition is modeled as pseudo-first-order kinetics, and the thermodynamic parameters are estimated using the Eyring formulation. L-Cysteine and its esters are often used as biological models because of the remarkable thiol group, which can be found in different oxidation states. The oxidation-mediated transition arises from thiol oxidation to a disulfide followed by solvent-induced effects, and may be relevant in designing cysteine-based molecular models.
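
    The Eyring analysis mentioned above can be reproduced in a few lines: fit ln(k/T) against 1/T, then read the activation enthalpy from the slope and the activation entropy from the intercept. The rate constants below are invented for illustration, not the paper's measurements.

```python
import math

R = 8.314            # gas constant, J mol^-1 K^-1
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s

# Hypothetical pseudo-first-order rate constants k (s^-1) at temperatures (K).
data = {298.0: 1.2e-5, 308.0: 2.9e-5, 318.0: 6.5e-5}

# Eyring: ln(k/T) = -(dH/R)*(1/T) + ln(kB/h) + dS/R, i.e. a line in 1/T.
xs = [1.0 / T for T in data]
ys = [math.log(k / T) for T, k in data.items()]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

dH = -slope * R                           # activation enthalpy, J/mol
dS = (intercept - math.log(KB / H)) * R   # activation entropy, J/(mol K)
print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

    A rising k with temperature yields a positive activation enthalpy; the negative entropy here simply follows from the invented numbers.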

  1. Surface growth kinematics via local curve evolution.

    PubMed

    Moulton, Derek E; Goriely, Alain

    2014-01-01

    A mathematical framework is developed to model the kinematics of surface growth for objects that can be generated by evolving a curve in space, such as seashells and horns. Growth is dictated by a growth velocity vector field defined at every point on a generating curve. A local orthonormal basis is attached to each point of the generating curve and the velocity field is given in terms of the local coordinate directions, leading to a fully local and elegant mathematical structure. Several examples of increasing complexity are provided, and we demonstrate how biologically relevant structures such as logarithmic shells and horns emerge as analytical solutions of the kinematics equations with a small number of parameters that can be linked to the underlying growth process. Direct access to cell tracks and local orientation enables connections to be made to the underlying growth process.
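
    A minimal sketch of the kinematic idea for the simplest analytical case: the centre of a generating curve is evolved along a logarithmic spiral, so the structure grows by the same factor every turn. The parameter values are illustrative.

```python
import math

a, b = 1.0, 0.1          # spiral scale and expansion rate (assumed values)
turns, steps = 3, 300

# Evolve the generating-curve centre along r = a * exp(b * theta).
centerline = []
for i in range(steps + 1):
    theta = 2 * math.pi * turns * i / steps
    r = a * math.exp(b * theta)          # logarithmic growth law
    centerline.append((r * math.cos(theta), r * math.sin(theta)))

# Self-similarity check: the distance from the origin grows by the same
# factor every full turn, the hallmark of logarithmic shells and horns.
r0 = math.hypot(*centerline[0])
r1 = math.hypot(*centerline[steps // turns])      # after one full turn
r2 = math.hypot(*centerline[2 * steps // turns])  # after two full turns
print(f"growth factor per turn: {r1/r0:.3f}, {r2/r1:.3f}")
```

    Attaching a growing generating circle at each centreline point (with radius scaled by the same exponential) would sweep out the shell surface itself.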

  2. Coagulation of dust particles in a plasma

    NASA Technical Reports Server (NTRS)

    Horanyi, M.; Goertz, C. K.

    1990-01-01

    The electrostatic charge of small dust grains in a plasma in which the temperature varies in time is discussed, pointing out that secondary electron emission might introduce charge separation. If the sign of the charge on small grains is opposite to that on big ones, enhanced coagulation can occur which will affect the size distribution of grains in a plasma. Two scenarios where this process might be relevant are considered: a hot plasma environment with temperature fluctuations and a cold plasma environment with transient heating events. The importance of the enhanced coagulation is uncertain, because the plasma parameters in grain-producing environments such as a molecular cloud or a protoplanetary disk are not known. It is possible, however, that this process is the most efficient mechanism for the growth of grains in the size range of 0.1-500 microns.

  3. Impact of high-intensity pulsed electric fields on bioactive compounds in Mediterranean plant-based foods.

    PubMed

    Elez-Martínez, Pedro; Soliva-Fortuny, Robert; Martín-Belloso, Olga

    2009-05-01

    Novel non-thermal processing technologies such as high-intensity pulsed electric field (HIPEF) treatments may be applied to pasteurize plant-based liquid foods as an alternative to conventional heat treatments. In recent years, there has been increasing interest in HIPEF as a way of preserving and extending the shelf-life of liquid products without the quality damage caused by heat treatments. However, less attention has been paid to the effects of HIPEF on minor constituents of these products, namely bioactive compounds. This review is a state-of-the-art update on the effects of HIPEF treatments on health-related compounds in plant-based foods of the Mediterranean diet, such as fruit juices and Spanish gazpacho. The relevance of HIPEF processing parameters to retaining plant-based bioactive compounds is also discussed.

  4. Analysis of high field effects on the steady-state current-voltage response of semi-insulating 4H-SiC for photoconductive switch applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiskumara, R.; Joshi, R. P., E-mail: ravi.joshi@ttu.edu; Mauch, D.

    A model-based analysis of the steady-state, current-voltage response of semi-insulating 4H-SiC is carried out to probe the internal mechanisms, focusing on electric-field-driven effects. Relevant physical processes, such as multiple defects, repulsive potential barriers to electron trapping, band-to-trap impact ionization, and field-dependent detrapping, are comprehensively included. Results of our model match the available experimental data fairly well over orders of magnitude variation in the current density. A number of important parameters are also extracted in the process through comparisons with available data. Finally, based on our analysis, the possible presence of holes in the samples can be discounted up to applied fields as high as ∼275 kV/cm.

  5. Challenges in industrial fermentation technology research.

    PubMed

    Formenti, Luca Riccardo; Nørregaard, Anders; Bolic, Andrijana; Hernandez, Daniela Quintanilla; Hagemann, Timo; Heins, Anna-Lena; Larsson, Hilde; Mears, Lisa; Mauricio-Iglesias, Miguel; Krühne, Ulrich; Gernaey, Krist V

    2014-06-01

    Industrial fermentation processes are increasingly popular and are considered an important technological asset for reducing our dependence on chemicals and products produced from fossil fuels. However, despite their increasing popularity, fermentation processes have not yet reached the same maturity as traditional chemical processes, particularly when it comes to using engineering tools such as mathematical models and optimization techniques. This perspective starts with a brief overview of these engineering tools. However, the main focus is on a description of some of the most important engineering challenges: scaling up and scaling down fermentation processes, the influence of morphology on broth rheology and mass transfer, and establishing novel sensors to measure and control informative process parameters. The greatest emphasis is on the challenges posed by filamentous fungi, because of their wide applications as cell factories and therefore their relevance in a White Biotechnology context. Computational fluid dynamics (CFD) is introduced as a promising tool that can be used to support the scaling up and scaling down of bioreactors, and for studying mixing and the potential occurrence of gradients in a tank. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A neuromathematical model of human information processing and its application to science content acquisition

    NASA Astrophysics Data System (ADS)

    Anderson, O. Roger

    The rate of information processing during science learning, and the efficiency with which the learner mobilizes relevant information in long-term memory to help transmit newly acquired information to stable storage there, are fundamental aspects of science content acquisition. These cognitive processes, moreover, may be substantially related in tempo and quality of organization to the efficiency of higher thought processes, such as divergent thinking and problem-solving ability, that characterize scientific thought. As a contribution to our quantitative understanding of these fundamental information processes, a mathematical model of information acquisition is presented and empirically evaluated against evidence obtained from experimental studies of science content acquisition. Computer-based models are used to simulate variations in learning parameters and to generate the theoretical predictions to be empirically tested. Initial tests of the predictive accuracy of the model show close agreement between predicted and actual mean recall scores in short-term learning tasks. Implications of the model for human information acquisition and possible future research are discussed in the context of the model's unique theoretical framework.

  7. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
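
    One way to picture the constraint-selection idea is with a one-dimensional Gibbs posterior p(m) ∝ exp(-E(m)/T), where the temperature T plays the role of the ME constraint and is increased until the inferred uncertainty interval encompasses the prior. This is only a conceptual sketch with made-up cost and prior values, not the paper's algorithm.

```python
import math

def posterior_interval(T, lo=-5.0, hi=5.0, n=2001, mass=0.95):
    # Mismatched cost: minimum at m = 2.0, while the prior says m is in [0, 1].
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    w = [math.exp(-((m - 2.0) ** 2) / T) for m in grid]
    total = sum(w)
    # Highest-density interval: accumulate mass from the peak outward.
    order = sorted(range(n), key=lambda i: -w[i])
    acc, chosen = 0.0, []
    for i in order:
        acc += w[i] / total
        chosen.append(grid[i])
        if acc >= mass:
            break
    return min(chosen), max(chosen)

prior = (0.0, 1.0)   # assumed well-known parameter interval
T = 0.1
while True:
    lo_i, hi_i = posterior_interval(T)
    if lo_i <= prior[0] and hi_i >= prior[1]:
        break        # constraint found: interval now encompasses the prior
    T *= 1.5
print(f"selected constraint T = {T:.3f}, interval = [{lo_i:.2f}, {hi_i:.2f}]")
```

    Broadening the posterior until it respects the trusted prior is the single-track-segment analogue of calibrating the ME constraint against multiple segments.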

  8. Impact of biology knowledge on the conservation and management of large pelagic sharks.

    PubMed

    Yokoi, Hiroki; Ijima, Hirotaka; Ohshimo, Seiji; Yokawa, Kotaro

    2017-09-06

    Population growth rate, which depends on several biological parameters, is valuable information for the conservation and management of pelagic sharks, such as blue and shortfin mako sharks. However, reported biological parameters for estimating the population growth rates of these sharks differ by sex and display large variability. To estimate the appropriate population growth rate and clarify relationships between growth rate and relevant biological parameters, we developed a two-sex age-structured matrix population model and estimated the population growth rate using combinations of biological parameters. We performed an elasticity analysis to clarify the sensitivity of the population growth rate. For the blue shark, the estimated median population growth rate was 0.384 with a range of minimum and maximum values of 0.195-0.533, whereas those values for the shortfin mako shark were 0.102 and 0.007-0.318, respectively. The maturity age of male sharks had the largest impact for blue sharks, whereas that of female sharks had the largest impact for shortfin mako sharks. Hypotheses about the survival process of sharks also had a large impact on the population growth rate estimation. Both shark maturity age and survival rate were based on ageing validation data, indicating the importance of validating the quality of these data for the conservation and management of large pelagic sharks.
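
    A minimal sketch of the matrix-model calculation: for a female-based Leslie matrix (a simplified stand-in for the paper's two-sex age-structured model), the population growth rate is the dominant eigenvalue, obtained here by power iteration. The fecundity and survival values are illustrative, not shark parameters.

```python
# Age-class parameters (assumed, for illustration only).
fecundity = [0.0, 0.0, 0.8, 1.2, 1.2]   # female offspring per female, by age
survival  = [0.6, 0.7, 0.8, 0.8]        # annual survival to the next age class

def project(n):
    # One year of the Leslie projection: births fill the first age class,
    # survivors advance by one class.
    births = sum(f * x for f, x in zip(fecundity, n))
    return [births] + [s * x for s, x in zip(survival, n[:-1])]

n = [1.0] * 5
lam = 0.0
for _ in range(500):
    n2 = project(n)
    lam = sum(n2) / sum(n)   # converges to the dominant eigenvalue
    total = sum(n2)
    n = [x / total for x in n2]   # renormalize to avoid overflow

print(f"annual population growth rate lambda = {lam:.4f}")
```

    An elasticity analysis like the paper's would perturb each fecundity or survival entry slightly and record the proportional change in lambda.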

  9. Elucidating the Performance Limitations of Lithium-ion Batteries due to Species and Charge Transport through Five Characteristic Parameters

    PubMed Central

    Jiang, Fangming; Peng, Peng

    2016-01-01

    Underutilization due to performance limitations imposed by species and charge transport is one of the key issues that persist with various lithium-ion batteries. To elucidate the relevant mechanisms, two groups of characteristic parameters were proposed. The first group contains three characteristic time parameters, namely: (1) te, which characterizes the Li-ion transport rate in the electrolyte phase; (2) ts, characterizing the lithium diffusion rate in the solid active materials; and (3) tc, describing the local Li-ion depletion rate in the electrolyte phase at the electrolyte/electrode interface due to electrochemical reactions. The second group contains two electric resistance parameters, Re and Rs, which represent, respectively, the equivalent ionic transport resistance and the effective electronic transport resistance in the electrode. Electrochemical modeling and simulations of the discharge process of LiCoO2 cells reveal that (1) if te, ts and tc are of the same order of magnitude, the species transports may not cause any performance limitation to the battery, and (2) the underlying mechanisms of the performance limitations due to thick electrodes, high-rate operation, and large active-material particles, as well as the effects of charge transport, are thereby revealed. The findings may be used as quantitative guidelines in the development and design of more advanced Li-ion batteries. PMID:27599870
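
    A back-of-the-envelope sketch of the characteristic-time idea, using the diffusion scaling t ~ L^2/D; the lengths and diffusivities below are typical order-of-magnitude values, not the paper's fitted parameters.

```python
# Assumed geometry and transport properties (order-of-magnitude values).
L_electrode = 100e-6    # electrode thickness, m
R_particle  = 5e-6      # active-material particle radius, m
D_e = 3e-10             # Li+ diffusivity in the electrolyte, m^2/s
D_s = 1e-14             # Li diffusivity in the solid active material, m^2/s

# Characteristic times from the diffusion scaling t ~ L^2 / D.
t_e = L_electrode ** 2 / D_e    # electrolyte-phase transport time
t_s = R_particle ** 2 / D_s     # solid-phase diffusion time

print(f"t_e = {t_e:.1f} s, t_s = {t_s:.1f} s")
# When t_s greatly exceeds t_e, solid-state diffusion limits the usable
# capacity, motivating smaller particles or higher solid diffusivity.
```

    Comparing such estimates against the discharge time at a given C-rate is the quick way to anticipate which transport process will limit performance.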

  10. Effects of smelting parameters on the slag/metal separation behaviors of Hongge vanadium-bearing titanomagnetite metallized pellets obtained from the gas-based direct reduction process

    NASA Astrophysics Data System (ADS)

    Feng, Cong; Chu, Man-sheng; Tang, Jue; Liu, Zheng-gen

    2018-06-01

    Smelting separations of Hongge vanadium-bearing titanomagnetite metallized pellets (HVTMP) prepared by gas-based direct reduction were investigated, and the effects of smelting parameters on the slag/metal separation behaviors were analyzed. Relevant mechanisms were elucidated using X-ray diffraction analysis, FACTSAGE 7.0 calculations, and scanning electron microscopy observations. The results show that, when the smelting temperature, time, and C/O ratio are increased, the recoveries of V and Cr of HVTMP in pig iron are improved, the recovery of Fe initially increases and subsequently decreases, and the recovery of TiO2 in slag decreases. When the smelting CaO/SiO2 ratio is increased, the recoveries of Fe, V, and Cr in pig iron increase and the recovery of TiO2 in slag initially increases and subsequently decreases. The appropriate smelting separation parameters for HVTMP are as follows: smelting temperature of 1873 K; smelting time of 30-50 min; C/O ratio of 1.25; and CaO/SiO2 ratio of 0.50. With these optimized parameters (smelting time: 30 min), the recoveries of Fe, V, Cr, and TiO2 are 99.5%, 91.24%, 92.41%, and 94.86%, respectively.

  11. Characterization and optimization of cell seeding in scaffolds by factorial design: quality by design approach for skeletal tissue engineering.

    PubMed

    Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan

    2011-12-01

    Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. The analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Cell and scaffold type were also involved in interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce variability, and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
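
    The main-effects logic behind a DOE screening can be sketched with a two-level full factorial on three hypothetical factors (the study itself screened five factors, and its responses are of course not the toy function used here).

```python
import itertools

factors = ["seeding_volume", "seeding_density", "seeding_time"]

def response(volume, density, time):
    # Toy seeding-efficiency model in coded units: volume and time matter,
    # density contributes little, mirroring the kind of conclusion a
    # main-effects analysis can reach.
    return 50 + 10 * volume + 0.5 * density + 8 * time

# Full factorial: every combination of the coded levels -1 and +1.
runs = [(levels, response(*levels))
        for levels in itertools.product([-1, +1], repeat=len(factors))]

# Main effect of a factor: mean response at +1 minus mean response at -1.
effects = {}
for i, name in enumerate(factors):
    hi = [y for lv, y in runs if lv[i] == +1]
    lo = [y for lv, y in runs if lv[i] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

for name, eff in effects.items():
    print(f"{name}: main effect = {eff:+.1f}")
```

    In a real study the effects would be compared against replicate noise (e.g. via ANOVA) to decide which factors are statistically significant.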

  12. Semi-automation of Doppler Spectrum Image Analysis for Grading Aortic Valve Stenosis Severity.

    PubMed

    Niakšu, O; Balčiunaitė, G; Kizlaitis, R J; Treigys, P

    2016-01-01

    Doppler echocardiography analysis has become a gold standard in the modern diagnosis of heart diseases. In this paper, we propose a set of techniques for semi-automated parameter extraction for aortic valve stenosis severity grading. The main objective of the study is to create echocardiography image processing techniques that minimize clinicians' manual image processing work and reduce human error rates. Aortic valve and left ventricular outflow tract spectrogram images have been processed and analyzed. A novel method was developed to trace systoles and to extract diagnostically relevant features. The results of the introduced method have been compared with the findings of the participating cardiologists. The experimental results showed that the accuracy of the proposed method is comparable to that of manual measurement performed by medical professionals. Linear regression analysis of the calculated parameters against the measurements manually obtained by the cardiologists resulted in strongly correlated values: R2 for peak systolic velocity and mean pressure gradient both equal to 0.99, with differences between means of 0.02 m/s and 4.09 mmHg, respectively, and R2 for aortic valve area of 0.89 with a difference between the two methods' means of 0.19 mm. The introduced Doppler echocardiography image processing method can be used as computer-aided assistance in aortic valve stenosis diagnostics. In our future work, we intend to improve the precision of left ventricular outflow tract spectrogram measurements and apply data mining methods to propose a clinical decision support system for diagnosing aortic valve stenosis.
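
    The agreement analysis described above, a least-squares R^2 plus the mean difference between methods, can be sketched as follows; the paired velocity values are synthetic stand-ins, not the study's measurements.

```python
# Paired measurements: semi-automated method vs manual reference
# (hypothetical peak systolic velocities, m/s).
auto   = [2.1, 3.4, 4.0, 4.8, 5.5, 3.0, 2.6]
manual = [2.0, 3.5, 4.1, 4.7, 5.6, 2.9, 2.7]

n = len(auto)
mx = sum(auto) / n
my = sum(manual) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(auto, manual))
sxx = sum((x - mx) ** 2 for x in auto)
syy = sum((y - my) ** 2 for y in manual)

r_squared = sxy ** 2 / (sxx * syy)                    # coefficient of determination
bias = sum(x - y for x, y in zip(auto, manual)) / n   # mean difference (bias)

print(f"R^2 = {r_squared:.3f}, mean difference = {bias:+.3f} m/s")
```

    A high R^2 with a small mean difference is what supports the claim that the semi-automated measurements can substitute for manual ones.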

  13. Polyurethane foam loaded with SDS for the adsorption of cationic dyes from aqueous medium: Multivariate optimization of the loading process.

    PubMed

    Robaina, Nicolle F; Soriano, Silvio; Cassella, Ricardo J

    2009-08-15

    This paper reports the development of a new procedure for the adsorption of four cationic dyes (Rhodamine B, Methylene Blue, Crystal Violet and Malachite Green) from aqueous medium employing polyurethane foam (PUF) loaded with sodium dodecylsulfate (SDS) as the solid phase. The PUF loading process was based on stirring 200 mg PUF cylinders with acidic solutions containing SDS. The loading conditions were optimized by response surface methodology (RSM) using a Doehlert design with three variables: SDS concentration, HCl concentration, and stirring time. Results obtained in the optimization process showed that stirring time is not a relevant parameter in the PUF loading, evidencing that the transport of SDS from solution to the PUF surface is fast. On the other hand, both SDS and HCl concentrations were important parameters, causing significant variation in the efficiency of the resulting solid phase for the removal of dyes from solution. At the optimized conditions, SDS and HCl concentrations were 4.0 x 10(-4) and 0.90 mol L(-1), respectively. The influence of stirring time was evaluated by univariate methodology, and a 20 min stirring time was established in order to make the PUF loading process fast and robust without losing efficiency. The procedure was tested for the removal of the four cationic dyes from aqueous solutions, and removal efficiencies better than 90% were always achieved for the two concentrations tested (2.0 x 10(-5) and 1.0 x 10(-4) mol L(-1)).
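
    For reference, the coded points of a two-variable Doehlert design (a hexagon plus a centre point) can be mapped to concentration ranges as sketched below; the study used three variables, and the ranges chosen here are assumptions for illustration (0.866 approximates sqrt(3)/2).

```python
# Coded Doehlert design points for two variables: centre plus hexagon.
coded = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866), (-0.5, 0.866),
         (-1.0, 0.0), (-0.5, -0.866), (0.5, -0.866)]

# Assumed range centres and half-widths for the two influential factors.
sds_center, sds_half = 4.0e-4, 2.0e-4    # SDS concentration, mol/L
hcl_center, hcl_half = 0.90, 0.45        # HCl concentration, mol/L

# Decode each point into actual experimental settings.
design = [(sds_center + c1 * sds_half, hcl_center + c2 * hcl_half)
          for c1, c2 in coded]

for sds, hcl in design:
    print(f"SDS = {sds:.2e} mol/L, HCl = {hcl:.3f} mol/L")
```

    Running the experiments at these settings and fitting a quadratic response surface to the removal efficiencies is what RSM then uses to locate the optimum.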

  14. Wave data processing toolbox manual

    USGS Publications Warehouse

    Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul

    2006-01-01

    Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP(tm)), the Sontek Argonaut, and the Nortek Aquadopp(tm) Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary processing program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive, and further analysis of the data becomes cumbersome. More importantly, these files alone do not include sufficient information pertinent to the deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP, referred to as the Wave Data Processing Toolbox. The toolbox aggregates the processed files output by the proprietary software into two NetCDF files: one containing the statistics of the burst data and the other containing the raw burst data (additional details described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format are easy to disseminate, portable to any computer platform, and viewable with freely available public-domain software.
Another important advantage is that a metadata structure is embedded with the data to document pertinent information regarding the deployment and the parameters used to process the data. Using this format ensures that the relevant information about how the data was collected and converted to physical units is maintained with the actual data. EPIC-standard variable names have been utilized where appropriate. These standards, developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) (http://www.pmel.noaa.gov/epic/), provide a universal vernacular allowing researchers to share data without translation.

  15. Nano/microvehicles for efficient delivery and (bio)sensing at the cellular level

    PubMed Central

    Esteban-Fernández de Ávila, B.; Yáñez-Sedeño, P.

    2017-01-01

    A perspective review of recent strategies involving the use of nano/microvehicles to address the key challenges associated with delivery and (bio)sensing at the cellular level is presented. The main types and characteristics of the different nano/microvehicles used for these cellular applications are discussed, including fabrication pathways, propulsion (catalytic, magnetic, acoustic or biological) and navigation strategies, and relevant parameters affecting their propulsion performance and sensing and delivery capabilities. Thereafter, selected applications are critically discussed. Emphasis is placed on enhancing the extra- and intra-cellular biosensing capabilities, fast cell internalization, rapid inter- or intra-cellular movement, efficient payload delivery and targeted on-demand controlled release in order to greatly improve the monitoring and modulation of cellular processes. A critical discussion of selected breakthrough applications illustrates how these smart multifunctional nano/microdevices operate as nano/microcarriers and sensors at the intra- and extra-cellular levels. These advances allow both the real-time biosensing of relevant targets and processes even at a single cell level, and the delivery of different cargoes (drugs, functional proteins, oligonucleotides and cells) for therapeutics, gene silencing/transfection and assisted fertilization, while overcoming challenges faced by current affinity biosensors and delivery vehicles. Key challenges for the future and the envisioned opportunities and future perspectives of this remarkably exciting field are discussed. PMID:29147499

  16. Task-oriented display design - Concept and example

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    1989-01-01

    The general topic was in the area of display design alternatives for improved man-machine performance. The intent was to define and assess a display design concept oriented toward providing this task-oriented information. The major focus of this concept deals with the processing of data into parameters that are more relevant to the task of the human operator. Closely coupled to this concept of relevant information is the form or manner in which this information is actually presented. Conventional forms of presentation are normally a direct representation of the underlying data. By providing information in a form that is more easily assimilated and understood, a reduction in human error and cognitive workload may be obtained. A description of this proposed concept with a design example is provided. The application for the example was an engine display for a generic, twin-engine civil transport aircraft. The product of this concept was evaluated against a functionally similar, traditional display. The results of this evaluation showed that a task-oriented approach to design is a viable concept with regard to reducing user error and cognitive workload. The goal of this design process, providing task-oriented information to the user, both in content and form, appears to be a feasible mechanism for increasing the overall performance of a man-machine system.

  17. AtomPy: an open atomic-data curation environment

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel; Mendoza, Claudio; Boswell, Josiah S; Ajoku, Chukwuemeka

    2014-06-01

    We present a cloud-computing environment for atomic data curation, networking among atomic data providers and users, teaching-and-learning, and interfacing with spectral modeling software. The system is based on Google-Drive Sheets, Pandas (Python Data Analysis Library) DataFrames, and IPython Notebooks for open community-driven curation of atomic data for scientific and technological applications. The atomic model for each ionic species is contained in a multi-sheet Google-Drive workbook, where the atomic parameters from all known public sources are progressively stored. Metadata (provenance, community discussion, etc.) accompanying every entry in the database are stored through Notebooks. Education tools on the physics of atomic processes as well as their relevance to plasma and spectral modeling are based on IPython Notebooks that integrate written material, images, videos, and active computer-tool workflows. Data processing workflows and collaborative software developments are encouraged and managed through the GitHub social network. Relevant issues this platform intends to address are: (i) data quality by allowing open access to both data producers and users in order to attain completeness, accuracy, consistency, provenance and currentness; (ii) comparisons of different datasets to facilitate accuracy assessment; (iii) downloading to local data structures (i.e. Pandas DataFrames) for further manipulation and analysis by prospective users; and (iv) data preservation by avoiding the discard of outdated sets.

  18. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while preserving the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, with conservation of relevant information and saliency, as well as speed, given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its favorable trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operation and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
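
    The NLB principle, group look-alike patches and denoise them with a shared estimate, can be illustrated on a 1-D signal. The real algorithm uses a per-group Bayesian estimate built from patch covariances; this sketch substitutes a plain group mean, and all sizes and thresholds are assumptions.

```python
import random

random.seed(0)
clean = [1.0] * 20 + [5.0] * 20 + [1.0] * 20   # piecewise-constant "scene"
noisy = [v + random.gauss(0, 0.5) for v in clean]

PATCH, TOL = 3, 1.5   # patch length and similarity threshold (assumed)

def patch(sig, i):
    return sig[i:i + PATCH]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

denoised = noisy[:]
for i in range(len(noisy) - PATCH + 1):
    ref = patch(noisy, i)
    # Group all patches that look like the reference patch.
    similar = [patch(noisy, j) for j in range(len(noisy) - PATCH + 1)
               if dist(ref, patch(noisy, j)) < TOL]
    # Shared estimate for the group: element-wise mean over similar patches.
    est = [sum(col) / len(similar) for col in zip(*similar)]
    denoised[i] = est[0]

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_denoi = sum((a - b) ** 2 for a, b in zip(denoised, clean))
print(f"squared error: noisy {err_noisy:.1f} -> denoised {err_denoi:.1f}")
```

    Flat regions offer many similar patches and are strongly denoised, while patches straddling the edge find few matches and are left nearly untouched, which is why this family of methods preserves structure.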
  20. Metalloids, soil chemistry and the environment.

    PubMed

    Lombi, Enzo; Holm, Peter E

    2010-01-01

    This chapter reviews the physical and chemical properties, origin and use of metalloids and their relevance in the environment. The elements boron (B), silicon (Si), germanium (Ge), arsenic (As), antimony (Sb), tellurium (Te), polonium (Po) and astatine (At) are considered metalloids. Metalloids conduct heat and electricity with an efficiency intermediate between that of nonmetals and metals, and they generally form oxides. The natural abundance of metalloids varies from Si, the second most common element in the Earth's crust, to At, the rarest of natural elements on Earth. The metalloid elements Ge, Te, Po and At are normally present at trace or ultratrace levels in the environment and as such are not considered relevant in terms of environmental health. The environmental geochemical processes, factors and parameters controlling the partitioning and speciation of B, Si, As and Sb are reviewed in relation to the bioavailability of these metalloids. Approaches based on the hypothesis that metal toxicity is related to both metal-ligand complexation processes and metal interactions with competing cations at the cell surface (biotic ligand) have so far not been successful for assessing metalloid bioavailability. The chapter concludes that our understanding of metalloid toxicity will improve in the future if, in addition to the points discussed above, surface membrane potentials are considered. This should represent a robust approach to the prediction of metalloid toxicity.

  1. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during production were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing parameters (load, hydraulic pressure, air pressure, and fixture height), washing parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although defects are substantially reduced by the Taguchi method, a genetic algorithm is then applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
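
    As an illustration of the Taguchi step, the "smaller-the-better" signal-to-noise ratio is the usual ranking criterion when the response is a defect measure; the factor levels and trial values below are hypothetical, not taken from the paper.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi signal-to-noise ratio for a 'smaller-the-better' response
    (e.g. a defect rate): S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Rank two hypothetical factor levels by the S/N of their trial responses;
# the level with the higher S/N (fewer defects) wins.
trials = {"level_1": [0.12, 0.10, 0.15], "level_2": [0.05, 0.07, 0.06]}
best = max(trials, key=lambda k: sn_smaller_the_better(trials[k]))
```

    A genetic algorithm would then search around the level combination selected this way, which is the hand-off the abstract describes.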

  2. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    PubMed

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals whose brain anatomy is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact the segmentation quality. To select an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
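
    The Dice similarity coefficient reported above is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```

    A value of 1.0 means perfect overlap with the reference segmentation; the study's gain from 0.62 to 0.68 is measured on this scale.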

  3. Influence of surface coverage on the chemical desorption process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minissale, M.; Dulieu, F., E-mail: francois.dulieu@obspm.fr

    2014-07-07

    In cold astrophysical environments, some molecules are observed in the gas phase whereas they should have been depleted, frozen on dust grains. In order to solve this problem, astrochemists have proposed that a fraction of molecules synthesized on the surface of dust grains could desorb just after their formation. Recently the chemical desorption process has been demonstrated experimentally, but the key parameters at play have not yet been fully understood. In this article, we propose a new procedure to analyze the ratio of dioxygen and ozone synthesized after O-atom adsorption on oxidized graphite. We demonstrate that the chemical desorption efficiency of the two reaction paths (O+O and O+O2) differs by one order of magnitude. We show the importance of the surface coverage: for the O+O reaction, the chemical desorption efficiency is close to 80% at zero coverage and tends to zero at one monolayer coverage. The coverage dependence of O+O chemical desorption is proved by varying the amount of pre-adsorbed N2 on the substrate from 0 to 1.5 ML. Finally, we discuss the relevance of the different physical parameters that could play a role in the chemical desorption process: binding energy, enthalpy of formation, and energy transfer from the new molecule to the surface or to other adsorbates.

  4. Investigating low flow process controls, through complex modelling, in a UK chalk catchment

    NASA Astrophysics Data System (ADS)

    Lubega Musuuza, Jude; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Woods, Ross; Howden, Nicholas

    2017-04-01

    The typical streamflow response of Chalk catchments is dominated by groundwater contributions due to the high degree of groundwater recharge through preferential flow pathways. The groundwater store attenuates the precipitation signal, which causes a delay between the corresponding high and low extremes in the precipitation and streamflow signals. Streamflow responses can therefore be quite out of phase with the precipitation input to a Chalk catchment. Approaches to characterising such catchment systems, including models, therefore need to reproduce these percolation- and groundwater-dominated flow pathways. The simulation of low flow conditions for Chalk catchments in numerical models is especially difficult due to the complex interactions between various processes that may not be adequately represented or resolved in the models. Periods of low stream flow are particularly important due to competing water uses in the summer, including agriculture and water supply. In this study we apply and evaluate the physically-based Penn State Integrated Hydrologic Model (PIHM) on the River Kennet, a sub-catchment of the Thames Basin, to demonstrate how simulations of a Chalk catchment are improved by a physically-based system representation. We also use an ensemble of simulations to investigate the sensitivity of various hydrologic signatures (relevant to low flows and droughts) to the different parameters in the model, thereby inferring the levels of control exerted by the processes that the parameters represent.
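
    The parameter-sensitivity idea can be caricatured with a one-at-a-time perturbation sketch; the model here is a hypothetical black box standing in for PIHM, and the elasticity-style normalisation is illustrative, not the authors' actual method.

```python
def oat_sensitivity(model, baseline, rel_step=0.1):
    """One-at-a-time sensitivity: perturb each parameter by +rel_step
    (relative) and report the normalised change in the model's output
    signature, i.e. a crude elasticity."""
    y0 = model(baseline)
    sens = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1 + rel_step)
        dy = model(perturbed) - y0
        sens[name] = dy / (y0 * rel_step) if y0 else dy
    return sens
```

    Parameters with large sensitivities for a low-flow signature point to the processes exerting the strongest control, which is the inference the abstract describes.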

  5. Method for acquiring, storing and analyzing crystal images

    NASA Technical Reports Server (NTRS)

    Gester, Thomas E. (Inventor); Rosenblum, William M. (Inventor); Christopher, Gayle K. (Inventor); Hamrick, David T. (Inventor); Delucas, Lawrence J. (Inventor); Tillotson, Brian (Inventor)

    2003-01-01

    A system utilizing a digital computer for acquiring, storing and evaluating crystal images. The system includes a video camera (12) which produces a digital output signal representative of a crystal specimen positioned within its focal window (16). The digitized output from the camera (12) is then stored on data storage media (32) together with other parameters inputted by a technician and relevant to the crystal specimen. Preferably, the digitized images are stored on removable media (32) while the parameters for different crystal specimens are maintained in a database (40) with indices to the digitized optical images on the other data storage media (32). Computer software is then utilized to identify not only the presence and number of crystals and the edges of the crystal specimens from the optical image, but to also rate the crystal specimens by various parameters, such as edge straightness, polygon formation, aspect ratio, surface clarity, crystal cracks and other defects or lack thereof, and other parameters relevant to the quality of the crystals.

  6. Data integration for inference about spatial processes: A model-based approach to test and account for data inconsistency

    PubMed Central

    Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio

    2017-01-01

    Recently-developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034

  7. Geocenter variations derived from a combined processing of LEO- and ground-based GPS observations

    NASA Astrophysics Data System (ADS)

    Männel, Benjamin; Rothacher, Markus

    2017-08-01

    GNSS observations provided by the global tracking network of the International GNSS Service (IGS, Dow et al. in J Geod 83(3):191-198, 2009) play an important role in the realization of a unique terrestrial reference frame that is accurate enough to allow a detailed monitoring of the Earth's system. Combining these ground-based data with GPS observations tracked by high-quality dual-frequency receivers on board low Earth orbiters (LEOs) is a promising way to further improve the realization of the terrestrial reference frame and the estimation of geocenter coordinates, GPS satellite orbits and Earth rotation parameters. To assess the scope of the improvement on the geocenter coordinates, we processed a network of 53 globally distributed and stable IGS stations together with four LEOs (GRACE-A, GRACE-B, OSTM/Jason-2 and GOCE) over a time interval of 3 years (2010-2012). To ensure fully consistent solutions, the zero-difference phase observations of the ground stations and LEOs were processed in a common least-squares adjustment, estimating all the relevant parameters such as GPS and LEO orbits, station coordinates, Earth rotation parameters and geocenter motion. We present the significant impact of the individual LEOs and of a combination of all four LEOs on the geocenter coordinates. The formal errors are reduced by around 20% due to the inclusion of one LEO into the ground-only solution, while in a solution with four LEOs, LEO-specific characteristics are significantly reduced. We compare the derived geocenter coordinates with LAGEOS results and with external solutions based on GPS and SLR data. We found good agreement in the amplitudes of all components; however, the phases in x- and z-direction do not agree well.

  8. Quantitative interpretation of the magnetic susceptibility frequency dependence

    NASA Astrophysics Data System (ADS)

    Ustra, Andrea; Mendonça, Carlos A.; Leite, Aruã; Jovane, Luigi; Trindade, Ricardo I. F.

    2018-05-01

    Low-field mass-specific magnetic susceptibility (MS) measurements using multifrequency alternating fields are commonly used to evaluate the concentration of ferrimagnetic particles in the transition from superparamagnetic (SP) to stable single domain (SSD). In classical palaeomagnetic analyses, this measurement serves as a preliminary assessment of rock samples, providing rapid, non-destructive, economical and easy information on magnetic properties. The SP-SSD transition is relevant in environmental studies because it has been associated with several geological and biogeochemical processes affecting magnetic mineralogy. MS is a complex function of mineral type and grain-size distribution, as well as of measuring parameters such as external field magnitude and frequency. In this work, we propose a new technique to obtain quantitative information on grain-size variations of magnetic particles in the SP-SSD transition by inverting frequency-dependent susceptibility. We introduce a descriptive parameter named the 'limiting frequency effect' that provides an accurate estimation of MS loss with frequency. Numerical simulations show the methodology's capability to fit the data and recover model parameters in many practical situations. Real-data applications with magnetite nanoparticles and core samples from sediments of the Poggio le Guaine section of the Umbria-Marche Basin (Italy) provide additional information not clearly recognized when interpreting raw MS data. Caution is needed when interpreting frequency dependence in terms of single relaxation processes, which are not universally applicable and depend upon the nature of the magnetic mineral in the material. Nevertheless, the proposed technique is a promising tool for SP-SSD content analyses.
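
    The frequency dependence that the proposed 'limiting frequency effect' refines is conventionally summarised as a percentage loss of susceptibility between a low and a high measuring frequency. A minimal sketch of that conventional estimate (this is not the authors' new parameter, only the standard quantity it builds on):

```python
def chi_fd_percent(chi_lf, chi_hf):
    """Conventional frequency-dependent susceptibility,
    chi_fd% = 100 * (chi_lf - chi_hf) / chi_lf,
    where chi_lf and chi_hf are mass-specific MS values measured at a
    low and a high alternating-field frequency."""
    if chi_lf == 0:
        raise ValueError("low-frequency susceptibility must be non-zero")
    return 100.0 * (chi_lf - chi_hf) / chi_lf
```

    Larger values indicate a larger contribution from grains near the SP-SSD boundary, which is why the quantity tracks grain-size variation.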

  9. Polishing of silicon based advanced ceramics

    NASA Astrophysics Data System (ADS)

    Klocke, Fritz; Dambon, Olaf; Zunke, Richard; Waechter, D.

    2009-05-01

    Silicon based advanced ceramics show advantages in comparison to other materials due to their extreme hardness, wear and creep resistance, low density and low coefficient of thermal expansion. As a consequence, machining them requires considerable effort. To reach the low roughness demanded for optical or tribological applications, a defect-free surface is indispensable. In this paper, polishing of silicon nitride and silicon carbide is investigated. The objective is to develop a scientific understanding of the process interactions. Based on this knowledge, the removal rate, surface quality and form accuracy can be optimized. For this purpose, fundamental investigations of polishing silicon based ceramics are undertaken and evaluated. Earlier scientific publications discuss removal mechanisms and wear behavior, but the insight is mainly based on investigations of grinding and lapping. The removal mechanisms in polishing are not fully understood due to the complexity of the interactions. The role of, e.g., process parameters, slurry and abrasives, and their influence on the output parameters is still uncertain. Extensive technological investigations demonstrate the influence of the polishing system and the machining parameters on stability and reproducibility. It is shown that the interactions between the advanced ceramics and the polishing systems are of great relevance. Depending on the kind of slurry and polishing agent, the material removal mechanisms differ. The observed effects can be explained by dominating mechanical or chemo-mechanical removal mechanisms. Therefore, hypotheses providing adequate explanations are presented and validated with advanced metrology devices such as SEM, AFM and TEM.

  10. Landslide model performance in a high resolution small-scale landscape

    NASA Astrophysics Data System (ADS)

    De Sy, V.; Schoorl, J. M.; Keesstra, S. D.; Jones, K. E.; Claessens, L.

    2013-05-01

    The frequency and severity of shallow landslides in New Zealand threatens life and property, both on- and off-site. The physically-based shallow landslide model LAPSUS-LS is tested for its performance in simulating shallow landslide locations induced by a high intensity rain event in a small-scale landscape. Furthermore, the effect of high resolution digital elevation models on the performance was tested. The performance of the model was optimised by calibrating different parameter values. A satisfactory result was achieved with a high resolution (1 m) DEM. Landslides, however, were generally predicted lower on the slope than the mapped erosion scars. This discrepancy could be due to i) inaccuracies in the DEM or in other model input data such as soil strength properties; ii) relevant processes for this environmental context that are not included in the model; or iii) the limited validity of the infinite length assumption in the infinite slope stability model embedded in LAPSUS-LS. The trade-off between a correct prediction of landslides versus stable cells becomes increasingly worse with coarser resolutions, and model performance decreases mainly due to altered slope characteristics. The optimal parameter combinations differ per resolution. In this environmental context the 1 m resolution topography resembles actual topography most closely, and landslide locations are better distinguished from stable areas than at coarser resolutions. More gain in model performance could be achieved by adding landslide process complexity and catchment parameter heterogeneity.
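
    The infinite slope stability model mentioned above computes, per grid cell, a factor of safety. The sketch below uses the standard dimensionless infinite-slope form; the parameter names and default densities are illustrative, and the exact LAPSUS-LS formulation may differ.

```python
import math

def factor_of_safety(coh, slope_rad, wetness, phi_rad,
                     rho_w=1000.0, rho_s=2000.0):
    """Infinite-slope factor of safety (dimensionless form):
    FS = [C + cos(theta) * (1 - W * rho_w/rho_s) * tan(phi)] / sin(theta),
    with C a combined dimensionless cohesion, theta the slope angle,
    W the relative wetness (0 dry, 1 saturated), phi the friction angle.
    FS < 1 indicates predicted failure."""
    num = coh + math.cos(slope_rad) * (1.0 - wetness * rho_w / rho_s) \
        * math.tan(phi_rad)
    return num / math.sin(slope_rad)
```

    Because FS falls as the slope steepens or the soil wets up, small errors in the DEM-derived slope directly shift where failures are predicted, which is one of the discrepancy sources the abstract lists.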

  11. Understanding the sorption and biotransformation of organic micropollutants in innovative biological wastewater treatment technologies.

    PubMed

    Alvarino, T; Suarez, S; Lema, J; Omil, F

    2018-02-15

    New technologies for wastewater treatment have been developed in recent years based on the combination of biological reactors operating under different redox conditions. Their efficiency in the removal of organic micropollutants (OMPs) has not yet been clearly assessed. This review paper is focussed on understanding the sorption and biotransformation of a selected group of 17 OMPs, including pharmaceuticals, hormones and personal care products, during biological wastewater treatment processes. Apart from the role of "classical" operational parameters, new factors such as biomass conformation and particle size, the upward velocity applied, or the addition of adsorbents have been considered. It has been found that OMP removal by sorption depends not only on the physico-chemical characteristics of the compounds, but also on parameters such as biomass conformation and particle size, and on some operational conditions. Membrane biological reactors (MBRs) have been shown to enhance sorption and biotransformation of some OMPs. The same applies to technologies based on the direct addition of activated carbon in bioreactors. The OMP biotransformation degree and pathway are mainly driven by the redox potential and the primary substrate activity. The combination of different redox potentials in hybrid reactor systems can significantly enhance the overall OMP removal efficiency. Sorption and biotransformation can be synergistically promoted in biological reactors by the addition of activated carbon. The deeper knowledge of the main parameters influencing OMP removal provided by this review will allow the biological processes to be optimized in the future. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Testing the idea of privileged awareness of self-relevant information.

    PubMed

    Stein, Timo; Siebold, Alisha; van Zoest, Wieske

    2016-03-01

    Self-relevant information is prioritized in processing. Some have suggested the mechanism driving this advantage is akin to the automatic prioritization of physically salient stimuli in information processing (Humphreys & Sui, 2015). Here we investigate whether self-relevant information is prioritized for awareness under continuous flash suppression (CFS), as has been found for physical salience. Gabor patches with different orientations were first associated with the labels You or Other. Participants were more accurate in matching the self-relevant association, replicating previous findings of self-prioritization. However, breakthrough into awareness from CFS did not differ between self- and other-associated Gabors. These findings demonstrate that self-relevant information has no privileged access to awareness. Rather than modulating the initial visual processes that precede and lead to awareness, the advantage of self-relevant information may better be characterized as prioritization at later processing stages. (c) 2016 APA, all rights reserved.

  13. Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hock, Kiel; Earle, Keith

    2016-02-06

    In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes' Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis-Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. In conclusion, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature that affects the parameter uncertainty estimates, particularly for spectra with a low signal-to-noise ratio.
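
    A random-walk Metropolis-Hastings sampler of the kind described can be sketched in a few lines; the target below is a toy one-dimensional log-posterior, not the Pake doublet likelihood, and the step size is an arbitrary choice.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings sampler for a 1-D posterior.
    Returns the chain of states (accepted or repeated)."""
    rng = random.Random(seed)
    x, chain = x0, []
    lp = log_post(x)
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

    The spread of the post-burn-in chain is what yields the parameter uncertainty estimates the abstract refers to.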

  14. Model for amorphous aggregation processes

    NASA Astrophysics Data System (ADS)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

    The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes the experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  15. A study of forced convection boiling under reduced gravity

    NASA Technical Reports Server (NTRS)

    Merte, Herman, Jr.

    1992-01-01

    This report presents the results of activities conducted over the period 1/2/85-12/31/90, in which the study of forced convection boiling under reduced gravity was initiated. The study seeks to improve the understanding of the basic processes that constitute forced convection boiling by removing the buoyancy effects which may mask other phenomena. Specific objectives may also be expressed in terms of the following questions: (1) what effects, if any, will the removal of body forces to the lowest possible levels have on the forced convection boiling heat transfer processes in well-defined and meaningful circumstances? (this includes those effects and processes associated with the nucleation or onset of boiling during the transient increase in heater surface temperature, as well as the heat transfer and vapor bubble behaviors with established or steady-state conditions); and (2) if such effects are present, what are the boundaries of the relevant parameters such as heat flux, heater surface superheat, fluid velocity, bulk subcooling, and geometric/orientation relationships within which such effects will be produced?

  16. Analysis of unsteady wave processes in a rotating channel

    NASA Technical Reports Server (NTRS)

    Larosiliere, L. M.; Mawid, M.

    1993-01-01

    The impact of passage rotation on the gas dynamic wave processes is analyzed through a numerical simulation of ideal shock-tube flow in a closed rotating-channel. Initial conditions are prescribed by assuming homentropic solid-body rotation. Relevant parameters of the problem such as wheel Mach number, hub-to-tip radius ratio, length-to-tip radius ratio, diaphragm temperature ratio, and diaphragm pressure ratio are varied. The results suggest possible criteria for assessing the consequences of passage rotation on the wave processes, and they may therefore be applicable to pressure-exchange wave rotors. It is shown that for a fixed geometry and initial conditions, the contact interface acquires a distorted three-dimensional time-dependent orientation at non-zero wheel Mach numbers. At a fixed wheel Mach number, the level of distortion depends primarily on the density ratio across the interface as well as the hub-to-tip radius ratio. Rarefaction fronts, shocks, and contact interfaces are observed to propagate faster with increasing wheel Mach number.

  17. Evaluation of an interview process for admission into a school of pharmacy.

    PubMed

    Kelsch, Michael P; Friesner, Daniel L

    2012-03-12

    To evaluate the doctor of pharmacy (PharmD) admissions interview process at North Dakota State University (NDSU). Faculty pairs interviewed candidates using a standardized grading rubric to evaluate qualitative parameters or attributes such as ethics, relevant life and work experience, emotional maturity, commitment to patient care, leadership, and understanding of the pharmacy profession. Total interview scores, individual attribute domain scores, and the consistency and reliability of the interviewers were assessed. The total mean interview score for the candidate pool was 17.4 of 25 points. Mean scores for individual domains ranged from 2.3 to 3.0 on a Likert-scale of 0-4. Nine of the 11 faculty pairs showed no mean differences from their interview partner in total interview scores given. Evaluations by 8 of the 11 faculty pairs produced high interrater reliability. The current interview process is generally consistent and reliable; however, future improvements such as additional interviewer training and adoption of a multiple mini-interview format could be made.
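
    Interrater consistency for a faculty pair can be gauged in several ways. As a simple illustration (not necessarily the statistic used in the study), the Pearson correlation between two raters' scores over the same candidates:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' scores for the same
    candidates: a rough gauge of interrater consistency."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:
        raise ValueError("a rater with constant scores has no correlation")
    return cov / (sx * sy)
```

    Values near 1 indicate the pair rank candidates similarly; the paired-difference test on total scores mentioned in the abstract addresses the complementary question of whether one rater is systematically harsher.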

  18. [Hygiene requirements for the level of intellectual intensivity loads in foreign language learning].

    PubMed

    Ivashchenko, S N

    2013-01-01

    This article presents basic hygiene requirements for the conditions and nature of the intellectual loads placed on secondary-school students when perceiving information in a foreign language. Are conditions for the successful perception and assimilation of information more favorable when the learning process involves a single foreign language or several different ones? It was found that the perception and assimilation of educational information in foreign languages is associated with some degree of mental and emotional stress in students, and that the effectiveness of the learning process depends on the degree of this stress. A certain level of psychological and emotional stress usually has a stimulating effect on the students' central nervous system, whereas a different level, on the contrary, inhibits the functional activity of the relevant structures of the central nervous system and reduces the effectiveness of training.

  19. Investigations of biological processes in Austrian MBT plants.

    PubMed

    Tintner, J; Smidt, E; Böhm, K; Binner, E

    2010-10-01

    Mechanical biological treatment (MBT) of municipal solid waste (MSW) has become an important technology in waste management during the last decade. The paper compiles investigations of mechanical biological processes in Austrian MBT plants. Samples from all plants representing different stages of degradation were included in this study. The range of the relevant parameters characterizing the materials and their behavior, e.g. total organic carbon, total nitrogen, respiration activity and gas generation sum, was determined. The evolution of total carbon and nitrogen containing compounds was compared and related to process operation. The respiration activity decreases in most of the plants by about 90% of the initial values, whereas the ammonium release is still ongoing at the end of the biological treatment. If the biogenic waste fraction is not separated, it favors humification in MBT materials that is not observed to such an extent in MSW. The amount of organic carbon is about 15% dry matter at the end of the biological treatment. (c) 2010 Elsevier Ltd. All rights reserved.

  20. Evaluation of an Interview Process for Admission Into a School of Pharmacy

    PubMed Central

    Friesner, Daniel L.

    2012-01-01

    Objective. To evaluate the doctor of pharmacy (PharmD) admissions interview process at North Dakota State University (NDSU). Methods. Faculty pairs interviewed candidates using a standardized grading rubric to evaluate qualitative parameters or attributes such as ethics, relevant life and work experience, emotional maturity, commitment to patient care, leadership, and understanding of the pharmacy profession. Total interview scores, individual attribute domain scores, and the consistency and reliability of the interviewers were assessed. Results. The total mean interview score for the candidate pool was 17.4 of 25 points. Mean scores for individual domains ranged from 2.3 to 3.0 on a Likert-scale of 0-4. Nine of the 11 faculty pairs showed no mean differences from their interview partner in total interview scores given. Evaluations by 8 of the 11 faculty pairs produced high interrater reliability. Conclusions. The current interview process is generally consistent and reliable; however, future improvements such as additional interviewer training and adoption of a multiple mini-interview format could be made. PMID:22438594

  1. Timing of target discrimination in human frontal eye fields.

    PubMed

    O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent

    2004-01-01

    Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
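    The d' scores mentioned above combine hit and false-alarm rates into a single sensitivity index. A minimal sketch with hypothetical trial counts (the paper reports only that (d)TMS reduced d', not these numbers):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical trial counts: control (vertex) vs. FEF stimulation at 40-80 ms
d_control = d_prime(40, 10, 8, 42)
d_tms = d_prime(32, 18, 15, 35)  # impaired target discrimination
```

    The impairment reported in the study would appear as `d_tms < d_control`.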

  2. Analysis of unsteady wave processes in a rotating channel

    NASA Astrophysics Data System (ADS)

    Larosiliere, Louis M.; Mawid, M.

    1993-06-01

    The impact of passage rotation on the gas dynamic wave processes is analyzed through a numerical simulation of ideal shock-tube flow in a closed rotating-channel. Initial conditions are prescribed by assuming homentropic solid-body rotation. Relevant parameters of the problem such as wheel Mach number, hub-to-tip radius ratio, length-to-tip radius ratio, diaphragm temperature ratio, and diaphragm pressure ratio are varied. The results suggest possible criteria for assessing the consequences of passage rotation on the wave processes, and they may therefore be applicable to pressure-exchange wave rotors. It is shown that for a fixed geometry and initial conditions, the contact interface acquires a distorted three-dimensional time-dependent orientation at non-zero wheel Mach numbers. At a fixed wheel Mach number, the level of distortion depends primarily on the density ratio across the interface as well as the hub-to-tip radius ratio. Rarefaction fronts, shocks, and contact interfaces are observed to propagate faster with increasing wheel Mach number.

  3. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
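    The Nash-Sutcliffe coefficient used to compare the two FDC models can be computed directly from observed and simulated quantiles. A minimal sketch with hypothetical flow values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical flow-duration-curve quantiles (m^3/s), observed vs. predicted
obs = [120.0, 60.0, 35.0, 20.0, 9.0]
sim = [110.0, 65.0, 33.0, 22.0, 10.0]
nse = nash_sutcliffe(obs, sim)
```

    Values above 0.80, as reported for 75 % of the tested catchments, indicate that the predicted FDC explains most of the observed streamflow variability.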

  4. Empirical evaluation of cross-site reproducibility in radiomic features for characterizing prostate MRI

    NASA Astrophysics Data System (ADS)

    Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish

    2018-02-01

    The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant in the context of MRI data, where signal intensity values lack tissue-specific, quantitative meaning, as well as being dependent on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting; specifically, for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: Multivariate Coefficient of Variation and Instability Score. Our results demonstrated that Haralick features were the most reproducible between all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as performing highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but for certain parameter combinations alone. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
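    A cross-site reproducibility screen of this kind can be sketched with a plain per-feature coefficient of variation (the paper uses a multivariate variant and an Instability Score); the feature values below are hypothetical:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV of one feature's per-site means: std / |mean|; lower = more stable."""
    return stdev(values) / abs(mean(values))

# Hypothetical per-site mean values of two features across 4 sites
features = {
    "haralick_entropy": [4.1, 4.0, 4.2, 4.1],  # stable across sites
    "laws_E5L5":        [0.8, 2.1, 0.4, 1.6],  # strongly site-dependent
}

# Keep only features whose cross-site CV falls under a chosen threshold
reproducible = {name for name, vals in features.items()
                if coefficient_of_variation(vals) < 0.2}
```

    Only the features surviving such a screen would be passed on to a radiomics classifier.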

  5. Five easy equations for patient flow through an emergency department.

    PubMed

    Madsen, Thomas Lill; Kofoed-Enevoldsen, Allan

    2011-10-01

    Queue models are effective tools for framing management decisions and Danish hospitals could benefit from awareness of such models. Currently, as emergency departments (ED) are under reorganization, we deem it timely to empirically investigate the applicability of the standard "M/M/1" queue model in order to document its relevance. We compared actual versus theoretical distributions of hourly patient flow from 27,000 patient cases seen at Frederiksberg Hospital's ED. Formulating equations for arrivals and capacity, we wrote and tested a five-equation simulation model. The Poisson distribution fitted arrivals with an hour-of-the-day specific parameter. Treatment times exceeding 15 minutes were well-described by an exponential distribution. The ED can be modelled as a black box with an hourly capacity that can be estimated either from admissions per hour when the ED operates at full capacity (Poisson distribution) or from the linear dependency of waiting times on queue number. The results show that our ED capacity is surprisingly constant despite variations in staffing. These findings led to the formulation of a model giving a compact framework for assessing the behaviour of the ED under different assumptions about opening hours, capacity and workload. The M/M/1 model almost perfectly fits our data. Thus modelling and simulations have contributed to the management process.
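    The steady-state M/M/1 relations underlying such a model follow directly from the utilisation ρ = λ/μ. A minimal sketch with hypothetical hourly rates (not the Frederiksberg figures):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics; rates per hour, requires λ < μ."""
    rho = arrival_rate / service_rate          # utilisation
    L = rho / (1 - rho)                        # mean number in system
    W = 1 / (service_rate - arrival_rate)      # mean time in system (hours)
    Wq = rho / (service_rate - arrival_rate)   # mean wait before treatment
    return rho, L, W, Wq

# Hypothetical ED hour: 3 arrivals/hour against a capacity of 4 patients/hour
rho, L, W, Wq = mm1_metrics(3.0, 4.0)
```

    The sharp growth of `W` as λ approaches μ is what makes the fitted capacity parameter managerially important: small staffing changes near saturation dominate waiting times.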

  6. Estimation of internal organ motion-induced variance in radiation dose in non-gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhou, Sumin; Zhu, Xiaofeng; Zhang, Mutian; Zheng, Dandan; Lei, Yu; Li, Sicong; Bennion, Nathan; Verma, Vivek; Zhen, Weining; Enke, Charles

    2016-12-01

    In the delivery of non-gated radiotherapy (RT), owing to intra-fraction organ motion, a certain degree of RT dose uncertainty is present. Herein, we propose a novel mathematical algorithm to estimate the mean and variance of RT dose that is delivered without gating. These parameters are specific to individual internal organ motion, dependent on individual treatment plans, and relevant to the RT delivery process. This algorithm uses images from a patient’s 4D simulation study to model the actual patient internal organ motion during RT delivery. All necessary dose rate calculations are performed in fixed patient internal organ motion states. The analytical and deterministic formulae of mean and variance in dose from non-gated RT were derived directly via statistical averaging of the calculated dose rate over possible random internal organ motion initial phases, and did not require constructing relevant histograms. All results are expressed in dose rate Fourier transform coefficients for computational efficiency. Exact solutions are provided to simplified, yet still clinically relevant, cases. Results from a volumetric-modulated arc therapy (VMAT) patient case are also presented. The results obtained from our mathematical algorithm can aid clinical decisions by providing information regarding both mean and variance of radiation dose to non-gated patients prior to RT delivery.

  7. A review of induction and attachment times of wetting thin films between air bubbles and particles and its relevance in the separation of particles by flotation.

    PubMed

    Albijanic, Boris; Ozdemir, Orhan; Nguyen, Anh V; Bradshaw, Dee

    2010-08-11

    Bubble-particle attachment in water is critical to the separation of particles by flotation which is widely used in the recovery of valuable minerals, the deinking of wastepaper, the water treatment and the oil recovery from tar sands. It involves the thinning and rupture of wetting thin films, and the expansion and relaxation of the gas-liquid-solid contact lines. The time scale of the first two processes is referred to as the induction time, whereas the time scale of the attachment involving all the processes is called the attachment time. This paper reviews the experimental studies into the induction and attachment times between minerals and air bubbles, and between oil droplets and air bubbles. It also focuses on the experimental investigations and mathematical modelling of elementary processes of the wetting film thinning and rupture, and the three-phase contact line expansion relevant to flotation. It was confirmed that the time parameters, obtained by various authors, are sensitive enough to show changes in both flotation surface chemistry and physical properties of solid surfaces of pure minerals. These findings should be extended to other systems. It is proposed that measurements of the bubble-particle attachment can be used to interpret changes in flotation behaviour or, in conjunction with other factors, such as particle size and gas dispersion, to predict flotation performance. Copyright 2010 Elsevier B.V. All rights reserved.

  8. Aerosol physicochemical properties in relation to meteorology: Case studies in urban, marine, and arid settings

    NASA Astrophysics Data System (ADS)

    Wonaschuetz, Anna

    Atmospheric aerosols are a highly relevant component of the climate system affecting atmospheric radiative transfer and the hydrological cycle. As opposed to other key atmospheric constituents with climatic relevance, atmospheric aerosol particles are highly heterogeneous in time and space with respect to their size, concentration, chemical composition and physical properties. Many aspects of their life cycle are not understood, making them difficult to represent in climate models and hard to control as a pollutant. Aerosol-cloud interactions in particular are infamous as a major source of uncertainty in future climate predictions. Field measurements are an important source of information for the modeling community and can lead to a better understanding of chemical and microphysical processes. In this study, field data from urban, marine, and arid settings are analyzed and the impact of meteorological conditions on the evolution of aerosol particles while in the atmosphere is investigated. Particular attention is given to organic aerosols, which are a poorly understood component of atmospheric aerosols. Local wind characteristics, solar radiation, relative humidity and the presence or absence of clouds and fog are found to be crucial factors in the transport and chemical evolution of aerosol particles. Organic aerosols in particular are found to be heavily impacted by processes in the liquid phase (cloud droplets and aerosol water). The reported measurements serve to improve the process-level understanding of aerosol evolution in different environments and to inform the modeling community by providing realistic values for input parameters and validation of model calculations.

  9. Karlsruhe Database for Radioactive Wastes (KADABRA) - Accounting and Management System for Radioactive Waste Treatment - 12275

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himmerkus, Felix; Rittmeyer, Cornelia

    2012-07-01

    The data management system KADABRA was designed according to the purposes of the Central Decontamination Department (HDB) of the Wiederaufarbeitungsanlage Karlsruhe Rueckbau- und Entsorgungs-GmbH (WAK GmbH), which is specialized in the treatment and conditioning of radioactive waste. The layout considers the major treatment processes of the HDB as well as regulatory and legal requirements. KADABRA is designed as an SAG ADABAS application on an IBM System z mainframe. The main function of the system is the data management of all processes related to treatment, transfer and storage of radioactive material within HDB. KADABRA records the relevant data concerning radioactive residues, interim products and waste products as well as the production parameters relevant for final disposal. Analytical data from the laboratory and non-destructive assay systems, which describe the chemical and radiological properties of residues, production batches, interim products as well as final waste products, can be linked to the respective dataset for documentation and declaration. The system enables the operator to trace the radioactive material through processing and storage. Information on the actual status of the material as well as radiological data and storage position can be gained immediately on request. A variety of programs accessing the database allow the generation of individual reports on periodic or special request. KADABRA offers a high security standard and is constantly adapted to the recent requirements of the organization. (authors)

  10. Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.

    PubMed

    Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa

    2017-03-01

    Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar cases from databases that were previously diagnosed. However, this mechanism depends on extracting relevant image features in order to obtain high efficiency. The goal of this paper is to perform the selection of 3D image features of margin sharpness and texture that can be relevant to the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision compared to all 48 extracted features on similar nodule retrieval. Feature space dimensionality reduction of 83 % obtained higher retrieval performance and proved to be a computationally low-cost method of retrieving similar nodules for the diagnosis of lung cancer.
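    Assessing retrieval on the 10 most similar cases is a standard precision-at-k computation. A minimal sketch; the ranked labels below are hypothetical, not data from the study:

```python
def precision_at_k(retrieved_labels, query_label, k=10):
    """Fraction of the k most similar nodules sharing the query's class."""
    top_k = retrieved_labels[:k]
    return sum(1 for label in top_k if label == query_label) / len(top_k)

# Hypothetical ranked malignancy labels (True = malignant) returned for a
# malignant query nodule, most similar first
ranked = [True, True, False, True, True, True, False, True, True, True, False]
p10 = precision_at_k(ranked, True, k=10)  # 8 of the top 10 match the query
```

    Averaging this score over many query nodules gives the precision figure used to compare the 8-feature subset against all 48 features.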

  11. Identification of Absorption, Distribution, Metabolism, and Excretion (ADME) Genes Relevant to Steatosis Using a Differential Gene Expression Approach

    EPA Science Inventory

    Absorption, distribution, metabolism, and excretion (ADME) parameters represent important connections between exposure to chemicals and the activation of molecular initiating events of Adverse Outcome Pathways (AOPs) in cellular, tissue, and organ level targets. ADME parameters u...

  12. Using HEC-HMS: Application to Karkheh river basin

    USDA-ARS?s Scientific Manuscript database

    This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...

  13. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.

    PubMed

    Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2010-11-08

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis.

  14. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients

    PubMed Central

    Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2010-01-01

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. PMID:21258556

  15. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
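    One simple way to obtain only dimensionally consistent candidate equations is to propagate unit exponents through each candidate expression and reject mismatches. The sketch below is illustrative, not the authors' implementation; units are tracked as (length, time) exponent pairs:

```python
# Units as exponent vectors over (length, time): velocity = (1, -1), etc.
GRAVITY = (1, -2)        # g in m s^-2
DEPTH = (1, 0)           # h in m
VELOCITY = (1, -1)       # target units for a mean-velocity predictor

def mul(u, v):
    """Units of a product: exponents add."""
    return (u[0] + v[0], u[1] + v[1])

def sqrt_u(u):
    """Units of a square root: exponents halve (must stay integers)."""
    assert u[0] % 2 == 0 and u[1] % 2 == 0, "non-integer exponents"
    return (u[0] // 2, u[1] // 2)

# Candidate building block sqrt(g * h): a genetic-programming run could
# discard any candidate whose units fail this check
candidate_units = sqrt_u(mul(GRAVITY, DEPTH))
is_velocity = candidate_units == VELOCITY
```

    Filtering candidates this way removes the need for dimensional fitting coefficients, as the abstract notes.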

  16. Experimental investigation of dynamic impact of firearm with suppressor

    NASA Astrophysics Data System (ADS)

    Kilikevicius, Arturas; Skeivalas, Jonas; Jurevicius, Mindaugas; Turla, Vytautas; Kilikeviciene, Kristina; Bureika, Gintautas; Jakstas, Arunas

    2017-09-01

    The internal ballistics processes occur in the tube during firearm firing. They cause tremendous vibratory shock forces and robust sounds. The determination of these dynamic parameters is relevant in order to reasonably estimate the firearm ergonomic and noise reduction features. The objective of this study is to improve the reliability of the results of measuring a firearm suppressor's dynamic parameters. The analysis of indicator stability is based on an assessment of dynamic parameters and setting the correlation during experimental research. An examination of the spread of intensity of firearm with suppressor dynamic vibration and an analysis of its signals upon applying the theory of covariance functions are carried out in this paper. The results of measuring the intensity of vibrations in fixed points of a firearm and a shooter have been recorded on a time scale in the form of data arrays (matrices). The estimates of covariance functions between the arrays of digital results in measuring the intensity of firearm vibrations and the estimates of covariance functions of single arrays have been calculated upon changing the quantization interval on the time scale. Software Matlab 7 has been applied in the calculation. Finally, basic conclusions are given.
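    Covariance-function estimates of single arrays, as used above, reduce at lag zero to the signal variance; changing the quantization interval changes which samples enter the estimate. A minimal sketch with a hypothetical sampled vibration signal:

```python
def autocovariance(signal, lag):
    """Biased estimate of the autocovariance of one array at a given lag."""
    n = len(signal)
    m = sum(signal) / n
    return sum((signal[i] - m) * (signal[i + lag] - m)
               for i in range(n - lag)) / n

# Hypothetical vibration samples; doubling the quantization interval
# keeps every second sample and changes the estimate
fine = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
coarse = fine[1::2]

c0_fine = autocovariance(fine, 0)     # variance at the fine interval
c0_coarse = autocovariance(coarse, 0)  # variance after coarser sampling
```

    Cross-covariance between two arrays (e.g. firearm and shooter measurement points) follows the same pattern with two mean-removed signals in the sum.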

  17. Relativistic MHD simulations of collision-induced magnetic dissipation in poynting-flux-dominated jets/outflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Wei; Li, Hui; Zhang, Bing

    We perform 3D relativistic ideal MHD simulations to study the collisions between high-σ (Poynting-flux-dominated) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable Poynting-flux-dominated jet. We discover a significant electromagnetic field (EMF) energy dissipation at an Alfvenic rate with an efficiency around 35%. Detailed analyses show that this dissipation is mostly facilitated by the collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets as needed for several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. In conclusion, our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in Poynting-flux-dominated jets, such as the internal collision-induced magnetic reconnection and turbulence (ICMART) model for GRBs, and the reconnection-triggered mini-jets model for AGNs.

  18. Relativistic MHD simulations of collision-induced magnetic dissipation in poynting-flux-dominated jets/outflows

    DOE PAGES

    Deng, Wei; Li, Hui; Zhang, Bing; ...

    2015-05-29

    We perform 3D relativistic ideal MHD simulations to study the collisions between high-σ (Poynting-flux-dominated) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable Poynting-flux-dominated jet. We discover a significant electromagnetic field (EMF) energy dissipation at an Alfvenic rate with an efficiency around 35%. Detailed analyses show that this dissipation is mostly facilitated by the collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets as needed for several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. In conclusion, our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in Poynting-flux-dominated jets, such as the internal collision-induced magnetic reconnection and turbulence (ICMART) model for GRBs, and the reconnection-triggered mini-jets model for AGNs.

  19. Experimental modal analysis on fresh-frozen human hemipelvic bones employing a 3D laser vibrometer for the purpose of modal parameter identification.

    PubMed

    Neugebauer, R; Werner, M; Voigt, C; Steinke, H; Scholz, R; Scherer, S; Quickert, M

    2011-05-17

    To provide a close-to-reality simulation model, such as for improved surgery planning, this model has to be experimentally verified. The present article describes the use of a 3D laser vibrometer for determining modal parameters of human pelvic bones that can be used for verifying a finite elements model. Compared to previously used sensors, such as acceleration sensors or strain gauges, the laser vibrometric procedure used here is a non-contact and non-interacting measuring method that allows a high density of measuring points and measurement in a global coordinate system. Relevant modal parameters were extracted from the measured data and provided for verifying the model. The use of the 3D laser vibrometer allowed the establishment of a process chain for experimental examination of the pelvic bones that was optimized with respect to time and effort involved. The transfer functions determined feature good signal quality. Furthermore, a comparison of the results obtained from pairs of pelvic bones showed that repeatable measurements can be obtained with the method used. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Scaling rates of true polar wander in convecting planets and moons

    NASA Astrophysics Data System (ADS)

    Rose, Ian; Buffett, Bruce

    2017-12-01

    Mass redistribution in the convecting mantle of a planet causes perturbations in its moment of inertia tensor. Conservation of angular momentum dictates that these perturbations change the direction of the rotation vector of the planet, a process known as true polar wander (TPW). Although the existence of TPW on Earth is firmly established, its rate and magnitude over geologic time scales remain controversial. Here we present scaling analyses and numerical simulations of TPW due to mantle convection over a range of parameter space relevant to planetary interiors. For simple rotating convection, we identify a set of dimensionless parameters that fully characterize true polar wander. We use these parameters to define timescales for the growth of moment of inertia perturbations due to convection and for their relaxation due to true polar wander. These timescales, as well as the relative sizes of convective anomalies, control the rate and magnitude of TPW. This analysis also clarifies the nature of so called "inertial interchange" TPW events, and relates them to a broader class of events that enable large and often rapid TPW. We expect these events to have been more frequent in Earth's past.

  1. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    NASA Astrophysics Data System (ADS)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and analysis of real-size field experiments have supported the development of process-based rockfall simulation models. Availability of high resolution remote sensing data and high-performance computing now make it possible to implement them for operational applications, e.g. risk zoning and protection structure design. One key parameter regarding rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts on the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees,... as seen from the incoming rock) relatively to the average slope are estimated. The use of high resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock / ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of objects. Digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, the use of a DTM with resolution smaller than the block size might have little relevance while increasing computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D in order to derive guidelines for roughness estimation by field experts. First a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) where the roughness values have a critical effect on rock propagation on a regular hillside. 
Second, a more complex hillside is simulated by combining three components: a) a global trend (planar surface), b) local systematic components (sine waves), c) random roughness (Gaussian, zero-mean noise). The parameters for simulating these components are estimated for three typical scenarios of rockfall terrains: soft soil, fine scree and coarse scree, based on expert knowledge and available airborne and terrestrial laser scanning data. For each scenario, the reference terrain is created and used to compute input data for RockyFor3D simulations at different scales, i.e. DTMs with resolutions from 0.5 m to 20 m and associated roughness parameters. Subsequent analysis mainly focuses on the sensitivity of simulations both in terms of run-out envelope and kinetic energy distribution. Guidelines drawn from the results are expected to help experts handle the scale issue while integrating remote sensing data and field measurements of roughness in rockfall simulations.
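The three-component terrain construction described above (planar trend, sine waves, zero-mean Gaussian noise) can be sketched as follows; the function name, grid size, and parameter values are illustrative choices, not the study's calibrated scenario values:

```python
import numpy as np

def synthetic_hillside(n=50, cell=1.0, slope=0.6,
                       wave_amp=0.3, wave_len=8.0, noise_sd=0.05, seed=0):
    """Height grid = planar trend + sine waves + Gaussian random roughness."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(n) * cell, np.arange(n) * cell)
    trend = slope * x                                     # a) global planar trend
    waves = wave_amp * np.sin(2 * np.pi * x / wave_len)   # b) systematic sine component
    noise = rng.normal(0.0, noise_sd, (n, n))             # c) random zero-mean roughness
    return trend + waves + noise

dtm = synthetic_hillside()
# One simple roughness summary: height variation after removing the
# x-dependent trend and wave components (per-column mean).
roughness = float(np.std(dtm - np.mean(dtm, axis=0, keepdims=True)))
```

Resampling such a reference grid to coarser cell sizes then mimics the DTM resolutions (0.5 m to 20 m) compared in the study.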

  2. Crystal viscoplasticity model for the creep-fatigue interactions in single-crystal Ni-base superalloy CMSX-8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada Rodas, Ernesto A.; Neu, Richard W.

    A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material composed of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys over a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation in each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts the creep-fatigue, fatigue, and thermomechanical fatigue behavior of this single-crystal superalloy. Finally, a sensitivity study of the material parameters is performed to explore the effect on the deformation of changes in the material parameters relevant to the microstructure.
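The two-phase additive strain-rate idea can be illustrated with a minimal scalar sketch, assuming a simple power-law flow rule for each phase; the constants and exponents are hypothetical placeholders, not the paper's calibrated CMSX-8 parameters:

```python
def inelastic_strain_rate(stress, a_matrix=1e-10, n_matrix=5.0,
                          a_precip=1e-12, n_precip=3.0):
    """Total inelastic rate as the sum of two phase-specific power-law rates.

    Hypothetical constants: a_*, n_* stand in for temperature-dependent
    material parameters fitted per phase (gamma matrix vs gamma' precipitate).
    """
    rate_gamma = a_matrix * stress ** n_matrix    # gamma-matrix contribution
    rate_gamma_p = a_precip * stress ** n_precip  # gamma'-precipitate contribution
    return rate_gamma + rate_gamma_p
```

The additive split means either phase can dominate the total rate depending on stress level (and, in the full model, temperature), which is what makes the per-phase treatment matter across a wide temperature range.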

  3. Crystal viscoplasticity model for the creep-fatigue interactions in single-crystal Ni-base superalloy CMSX-8

    DOE PAGES

    Estrada Rodas, Ernesto A.; Neu, Richard W.

    2017-09-11

    A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material composed of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys over a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation in each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts the creep-fatigue, fatigue, and thermomechanical fatigue behavior of this single-crystal superalloy. Finally, a sensitivity study of the material parameters is performed to explore the effect on the deformation of changes in the material parameters relevant to the microstructure.

  4. Electron Impact Multiple Ionization Cross Sections for Solar Physics

    NASA Astrophysics Data System (ADS)

    Hahn, M.; Savin, D. W.; Mueller, A.

    2017-12-01

    We have compiled a set of electron-impact multiple ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature, as in solar flares or in nanoflare coronal heating. EIMI is also likely to be significant when the electron energy distribution is non-thermal, such as if the electrons follow a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross section data. Based on this comparison, we have interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for thousands of EIMI cross sections. We also highlight some outstanding issues that remain to be resolved.
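The interpolate/extrapolate step for unmeasured systems might look like the following sketch; the fit coefficients and their dependence on nuclear charge Z are invented for illustration, and the real tabulation fits full cross-section formulae rather than a single coefficient:

```python
import numpy as np

# Hypothetical fitted scale coefficients for measured ions, keyed by nuclear charge Z.
measured_z = np.array([6.0, 8.0, 10.0, 18.0])
measured_a = np.array([2.1, 1.8, 1.5, 0.9])   # illustrative values only

def estimate_parameter(z):
    """Interpolate within the measured range; extrapolate linearly at the ends."""
    if z <= measured_z[0]:
        slope = (measured_a[1] - measured_a[0]) / (measured_z[1] - measured_z[0])
        return float(measured_a[0] + slope * (z - measured_z[0]))
    if z >= measured_z[-1]:
        slope = (measured_a[-1] - measured_a[-2]) / (measured_z[-1] - measured_z[-2])
        return float(measured_a[-1] + slope * (z - measured_z[-1]))
    return float(np.interp(z, measured_z, measured_a))
```

A measured system returns its own fit value, while a system between or beyond measurements gets an estimated one, which is the spirit of extending the tabulation to thousands of unmeasured EIMI channels.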

  5. Toward Global Harmonization of Derived Cloud Products

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Baum, Bryan A.; Choi, Yong-Sang; Foster, Michael J.; Karlsson, Karl-Goeran; Heidinger, Andrew; Poulsen, Caroline; Pavolonis, Michael; Riedi, Jerome; Roebeling, Robert

    2017-01-01

    Formerly known as the Cloud Retrieval Evaluation Workshop (CREW; see the list of acronyms used in this paper below) group (Roebeling et al. 2013, 2015), the International Cloud Working Group (ICWG) was created and endorsed during the 42nd Meeting of CGMS. The CGMS-ICWG provides a forum for space agencies to seek coherent progress in science and applications, and also acts as a bridge between space agencies and the cloud remote sensing and applications community. The ICWG plans to serve as a forum to exchange and enhance knowledge on state-of-the-art cloud parameter retrieval algorithms, to stimulate support for training in the use of cloud parameters, and to encourage space agencies and the cloud remote sensing community to share knowledge. The ICWG plans to prepare recommendations to guide the direction of future research (for example, on observing severe weather events or on process studies) and to influence relevant programs of the WMO, WCRP, GCOS, and the space agencies.

  6. Bacterial diversity patterns of the intertidal biofilm in urban beaches of Río de la Plata.

    PubMed

    Piccini, C; García-Alonso, J

    2015-02-28

    Intertidal benthic ecosystems in estuaries are productive sites where microbial processes play critical roles in nutrient mineralization, primary production and the trophic web. In this groundwork study we analyzed the bacterial community of intertidal biofilms from Río de la Plata beaches with different anthropogenic impacts. Several environmental parameters were measured and bacterial assemblages were analyzed by 16S-rDNA pyrosequencing. The average number of OTUs found per sample was 527.3±122.5, with similar richness and diversity among samples. However, the sites with the highest and lowest salinity displayed higher bacterial diversity. Assemblages from a site near an oil refinery, which showed the lowest salinity and oxygen concentration, were clearly distinct from the rest. This separation was driven mainly by OTUs belonging to Thauera, a genus known for its ability to metabolize aromatic compounds. Our results suggest that intertidal bacterial assemblages are structured by major estuarine variables such as salinity, and that anthropogenic-induced environmental parameters might also be relevant. Copyright © 2014 Elsevier Ltd. All rights reserved.
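Richness and diversity comparisons of the kind reported here are commonly based on the Shannon index over OTU relative abundances; a minimal sketch with hypothetical count vectors:

```python
import math

def shannon_diversity(otu_counts):
    """Shannon index H' = -sum(p_i * ln p_i) over OTU relative abundances."""
    total = sum(otu_counts)
    return -sum((c / total) * math.log(c / total) for c in otu_counts if c > 0)

# With equal richness (4 OTUs each), the even community is more diverse
# than the skewed one; counts here are invented for illustration.
even = shannon_diversity([25, 25, 25, 25])   # maximal for 4 OTUs: ln(4)
skewed = shannon_diversity([97, 1, 1, 1])
```

This is why two samples with the same OTU richness can still differ in diversity, as the abstract distinguishes.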

  7. Organic contaminants in onsite wastewater treatment systems

    USGS Publications Warehouse

    Conn, K.E.; Siegrist, R.L.; Barber, L.B.; Brown, G.K.

    2007-01-01

    Wastewater from thirty onsite wastewater treatment systems was sampled during a reconnaissance field study to quantify bulk parameters and the occurrence of organic wastewater contaminants, including endocrine-disrupting compounds, in treatment systems representing a variety of wastewater sources, treatment processes, and receiving environments. Bulk parameters ranged in concentrations representative of the wide variety of wastewater sources (residential vs. non-residential). Organic contaminants such as sterols, surfactant metabolites, antimicrobial agents, stimulants, metal-chelating agents, and other consumer product chemicals, measured by gas chromatography/mass spectrometry, were detected frequently in onsite system wastewater. Wastewater composition was distinct between source types, likely due to differences in source water and chemical usage. Removal efficiencies varied by engineered treatment type and the physicochemical properties of the contaminant, resulting in discharge to the soil treatment unit at ecotoxicologically relevant concentrations. Organic wastewater contaminants were detected less frequently and at lower concentrations in onsite system receiving environments. Understanding the occurrence and fate of organic wastewater contaminants in onsite wastewater treatment systems will aid in minimizing risk to ecological and human health.

  8. Soft Expansion of Double-Real-Virtual Corrections to Higgs Production at N$^3$LO

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2015-05-15

    We present methods to compute higher orders in the threshold expansion for the one-loop production of a Higgs boson in association with two partons at hadron colliders. This process contributes to the N$^3$LO Higgs production cross section beyond the soft-virtual approximation. We use reverse unitarity to expand the phase-space integrals in the small kinematic parameters and to reduce the coefficients of the expansion to a small set of master integrals. We describe two methods for the calculation of the master integrals. The first was introduced for the calculation of the soft triple-real radiation relevant to N$^3$LO Higgs production. The second uses a particular factorization of the three-body phase-space measure and the knowledge of the scaling properties of the integral itself. Our result is presented as a Laurent expansion in the dimensional regulator, although some of the master integrals are computed to all orders in this parameter.

  9. Modeling Longitudinal Dynamics in the Fermilab Booster Synchrotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostiguy, Jean-Francois; Bhat, Chandra; Lebedev, Valeri

    2016-06-01

    The PIP-II project will replace the existing 400 MeV linac with a new, CW-capable, 800 MeV superconducting one. With respect to current operations, a 50% increase in beam intensity in the rapid-cycling Booster synchrotron is expected. Booster batches are combined in the Recycler ring; this process limits the allowed longitudinal emittance of the extracted Booster beam. To suppress eddy currents, the Booster has no beam pipe; the magnets are evacuated, exposing the beam to core laminations, and this has a substantial impact on the longitudinal impedance. Noticeable longitudinal emittance growth is already observed at transition crossing. Operation at higher intensity will likely necessitate mitigation measures. We describe systematic efforts to construct a predictive model for current operating conditions. A longitudinal-only code including a laminated-wall impedance model, space charge effects, and feedback loops is developed. Parameter validation is performed using detailed measurements of relevant beam, RF, and control parameters. An attempt is made to benchmark the code at operationally favorable machine settings.

  10. Highly integrated autonomous lab-on-a-chip device for on-line and in situ determination of environmental chemical parameters.

    PubMed

    Martinez-Cisneros, Cynthia; da Rocha, Zaira; Seabra, Antonio; Valdés, Francisco; Alonso-Chamarro, Julián

    2018-06-05

    The successful integration of sample pretreatment stages, sensors, actuators and electronics in microfluidic devices enables the attainment of complete micro total analysis systems, also known as lab-on-a-chip devices. In this work, we present a novel monolithic autonomous microanalyzer that integrates microfluidics, electronics, a highly sensitive photometric detection system and a sample pretreatment stage consisting of an embedded microcolumn, all in the same device, for on-line determination of relevant environmental parameters. The microcolumn can be filled/emptied with any resin or powder substrate whenever required, paving the way for its application to several analytical processes: separation, pre-concentration or ion exchange. To promote its autonomous operation, avoiding issues caused by bubbles in photometric detection systems, an efficient monolithic bubble removal structure was also integrated. To demonstrate its feasibility, the microanalyzer was successfully used to determine nitrate and nitrite under continuous flow conditions, providing real-time and continuous information.

  11. Dissociation coefficients of protein adsorption to nanoparticles as quantitative metrics for description of the protein corona: A comparison of experimental techniques and methodological relevance.

    PubMed

    Hühn, Jonas; Fedeli, Chiara; Zhang, Qian; Masood, Atif; Del Pino, Pablo; Khashab, Niveen M; Papini, Emanuele; Parak, Wolfgang J

    2016-06-01

    Protein adsorption to nanoparticles is described as a chemical reaction in which proteins attach to binding sites on the nanoparticle surface. This process is defined by a dissociation coefficient, which determines how many proteins are adsorbed per nanoparticle as a function of the protein concentration. Different techniques to experimentally determine dissociation coefficients of protein adsorption to nanoparticles are reviewed. Results of more than 130 experiments in which dissociation coefficients have been determined are compared. The data show that different methods, nanoparticle systems, and proteins can lead to significantly different dissociation coefficients. However, we observed a clear tendency toward smaller dissociation coefficients as the zeta potential of the nanoparticles shifts from negative towards positive values. The zeta potential is thus a key parameter influencing protein adsorption to the surface of nanoparticles. Our analysis highlights the importance of characterizing the parameters governing the protein-nanoparticle interaction for quantitative evaluation and objective literature comparison. Copyright © 2015 Elsevier Ltd. All rights reserved.
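Treating adsorption as a reaction with a dissociation coefficient leads to a Langmuir-type isotherm; a sketch with hypothetical values showing why a smaller dissociation coefficient means more adsorbed proteins at a given concentration:

```python
def proteins_adsorbed(c, n_max, k_d):
    """Langmuir-type isotherm: proteins adsorbed per nanoparticle.

    c: free protein concentration, n_max: binding sites per nanoparticle,
    k_d: dissociation coefficient (same units as c). All values hypothetical.
    """
    return n_max * c / (k_d + c)

# Smaller k_d = tighter binding: at the same concentration, far more
# proteins sit on the nanoparticle surface.
tight = proteins_adsorbed(c=1.0, n_max=100.0, k_d=0.1)
weak = proteins_adsorbed(c=1.0, n_max=100.0, k_d=10.0)
```

This is the sense in which the observed trend (less negative / more positive zeta potential, smaller dissociation coefficient) translates into denser protein coronas.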

  12. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    PubMed

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language, drawing on electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence of enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception.
It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch-relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch-relevant information.

  13. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2015-01-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language, drawing on electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence of enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception.
It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch-relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch-relevant information. PMID:25838636

  14. Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line

    PubMed Central

    Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling

    2014-01-01

    The design of monitoring and predictive alarm systems is necessary to cope successfully with icing on overhead power transmission lines. Given the characteristics of complexity, nonlinearity, and fitfulness in the line icing process, a model based on a multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorology parameters on the icing process have been analyzed. Phase-space reconstruction theory and a machine learning method were then applied to establish the prediction model, which fully utilizes the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorology factors. Relevant to the characteristic of fitfulness in line icing, simulations were carried out during the same icing process or a different one to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, this model demonstrates good prediction accuracy across different processes, provided the prediction length is less than two hours, and would be helpful for power grid departments when deciding to take action in advance to address potential icing disasters. PMID:25136653
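Phase-space reconstruction via delay embedding, the first step of the model above, can be sketched as follows; the scalar sine series and the linear one-step predictor are simplified stand-ins for the real multivariable icing data and machine-learning regressor:

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Phase-space reconstruction: stack tau-delayed copies of a scalar series."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Illustrative one-step-ahead linear predictor on the embedded vectors.
x = np.sin(np.linspace(0.0, 20.0, 200))
emb = delay_embed(x, dim=3, tau=2)
targets = x[(3 - 1) * 2 + 1 :]          # value one step after each embedded row
coef, *_ = np.linalg.lstsq(emb[:-1], targets, rcond=None)
pred = emb[:-1] @ coef
```

Each embedded row collects the recent history of the series, so the regression maps a point in the reconstructed phase space to the next value, which is the same structure the icing-load model builds from micrometeorology time series.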

  15. Modelling phosphorus transport and its response to climate change at upper stream of Poyang Lake-the largest fresh water lake in China

    NASA Astrophysics Data System (ADS)

    Jiang, Sanyuan; Zhang, Qi

    2017-04-01

    Phosphorus losses from excessive fertilizer application and improper land exploitation have been found to be a key driver of freshwater quality deterioration and eutrophication. Phosphorus transport from uplands to rivers is related to hydrological, soil erosion and sediment transport processes, which are affected by several physiographic and meteorological factors. The objective of this study was to investigate the spatiotemporal variation of phosphorus losses and their response to climate change in a typical upstream tributary (Le'An river) of Poyang Lake. To this end, a process-oriented hydrological and nutrient transport model, HYPE (Hydrological Predictions for the Environment), was set up for discharge and phosphorus transport simulation in the Le'An catchment. Parameter ESTimator (PEST) was combined with the HYPE model for parameter sensitivity analysis and optimisation. In runoff modelling, the potential evapotranspiration rate of the dominant land use (forest) is the most sensitive parameter; the parameters for surface runoff rate and percolation capacity of the red soil are also very sensitive. In phosphorus transport modelling, the exponent of the equation for soil erosion processes induced by surface runoff is the most sensitive parameter; the coefficient of adsorption/desorption processes for red soil is also very sensitive. Flow dynamics and water balance were simulated well at all sites for the whole period (1978-1986), with NSE≥0.80 and PBIAS≤14.53%. The optimized hydrological parameter set was transferable to the independent period (2009-2010), with NSE≥0.90 and a highest PBIAS of -7.44% in stream flow simulation. Seasonal dynamics and balance of stream water TP (total phosphorus) concentrations were captured satisfactorily, as indicated by NSE≥0.53 and a highest PBIAS of 16.67%. On an annual scale, most phosphorus is transported via surface runoff during heavy storm flow events, which may account for about 70% of annual TP loads.
Based on an analysis of future climate change under three different emission scenarios (RCP 2.6, RCP 4.5 and RCP 8.5), no considerable change in average annual rainfall is expected in 2020-2035, while an increasing occurrence frequency and intensity of extreme rainfall events is predicted. The validated HYPE model was run on the three emission scenarios. An overall increase in TP loads was found for the future, with the largest increase of annual TP loads under the high-emission scenario (RCP 8.5). The outcomes of this study (i) verified the transferability of the HYPE model to a humid subtropical and heterogeneous catchment; (ii) revealed the sensitive hydrological and phosphorus transport processes and the relevant parameters; and (iii) implied more TP losses in the future in response to increasing extreme rainfall events.
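The NSE and PBIAS goodness-of-fit metrics quoted above have standard definitions; a minimal sketch (note that the sign convention for PBIAS varies between tools, so check before comparing values across studies):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations (1 = perfect)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this convention, positive means the model underestimates."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Hypothetical observed and simulated flow values, for illustration only.
obs = [2.0, 3.0, 5.0, 4.0]
sim = [2.2, 2.9, 4.6, 4.1]
```

NSE≥0.80 with small |PBIAS|, as reported for the calibration period, means the model explains most of the observed variance without a large systematic volume error.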

  16. Flame analysis using image processing techniques

    NASA Astrophysics Data System (ADS)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques using fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experiments were carried out on a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermoacoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. The power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
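A periodogram-style PSD of a flame-intensity trace, of the kind used above for vibration analysis, can be sketched as follows; the 12 Hz test signal and sampling rate are illustrative, not the paper's burner data:

```python
import numpy as np

def power_spectral_density(signal, fs):
    """Single-sided periodogram of a signal sampled at fs Hz."""
    signal = np.asarray(signal, float) - np.mean(signal)  # remove DC offset
    spectrum = np.fft.rfft(signal)
    psd = (np.abs(spectrum) ** 2) / (fs * len(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, psd

# A 12 Hz flicker component shows up as the dominant PSD peak.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
freqs, psd = power_spectral_density(np.sin(2 * np.pi * 12.0 * t), fs)
peak_hz = float(freqs[np.argmax(psd)])
```

In a flame application, the signal would be a per-frame luminous intensity extracted from the infrared images, and the location and sharpness of the dominant peak serve as stability indicators.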

  17. A process economic assessment of hydrocarbon biofuels production using chemoautotrophic organisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, NE; Myers, JA; Tuerk, AL

    An economic analysis of an ARPA-e Electrofuels (http://arpa-e.energy.gov/?q=arpa-e-programs/electrofuels) process is presented, utilizing metabolically engineered Rhodobacter capsulatus or Ralstonia eutropha to produce the C30+ hydrocarbon fuel, botryococcene, from hydrogen, carbon dioxide, and oxygen. The analysis is based on an Aspen Plus® bioreactor model taking into account experimentally determined Rba. capsulatus and Rls. eutropha growth and maintenance requirements, reactor residence time, correlations for the gas-liquid mass-transfer coefficient, gas composition, and specific cellular fuel productivity. Based on reactor simulation results encompassing technically relevant parameter ranges, the capital and operating costs of the process were estimated for a 5000 bbl-fuel/day plant and used to predict fuel cost. Under the assumptions used in this analysis and current crude oil prices, the Levelized Cost of Electricity (LCOE) required for economic feasibility must be less than 2¢/kWh. While not feasible under current market prices and costs, this work identifies key variables impacting process cost and discusses potential alternative paths toward economic feasibility. © 2014 Elsevier Ltd. All rights reserved.
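The LCOE threshold can be read through the standard levelized-cost formula; only the capital-recovery annualization below is standard, and all plant numbers are hypothetical, not the study's cost estimates:

```python
def levelized_cost_of_electricity(capex, fixed_om, annual_kwh, rate, years):
    """LCOE = (annualized capital + fixed O&M) / annual energy delivered."""
    # Capital recovery factor converts upfront capex to an equivalent annual payment.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + fixed_om) / annual_kwh

# Hypothetical generator: $100M capex, $2M/yr O&M, 500 GWh/yr, 8% over 20 years.
lcoe = levelized_cost_of_electricity(capex=1e8, fixed_om=2e6,
                                     annual_kwh=5e8, rate=0.08, years=20)
feasible = lcoe < 0.02   # compare against a 2 cents/kWh feasibility bound
```

Framing the result as a maximum tolerable LCOE makes clear why electricity cost, rather than the bioreactor alone, is a dominant lever for electrofuel economics.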

  18. Multivariate data analysis on historical IPV production data for better process understanding and future improvements.

    PubMed

    Thomassen, Yvonne E; van Sprang, Eric N M; van der Pol, Leo A; Bakker, Wilfried A M

    2010-09-01

    Historical manufacturing data can potentially harbor a wealth of information for process optimization and enhancement of efficiency and robustness. To extract useful information, multivariate data analysis (MVDA) using projection methods is often applied. In this contribution, the results obtained from applying MVDA to data from inactivated polio vaccine (IPV) production runs are described. Data from over 50 batches at two different production scales (700-L and 1,500-L) were available. The explorative analysis performed on single unit operations indicated consistent manufacturing. Known outliers (e.g., rejected batches) were identified using principal component analysis (PCA). The source of operational variation was pinpointed to variation in inputs such as media. Other relevant process parameters were in control and, using these manufacturing data, could not be correlated to product quality attributes. The knowledge gained about the IPV production process, not only from the MVDA but also from digitizing the available historical data, has proven useful for troubleshooting, understanding the limitations of the available data, and identifying opportunities for improvement. © 2010 Wiley Periodicals, Inc.
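PCA-based outlier screening of batch records can be sketched with a plain SVD; the data below are synthetic, and ranking batches by score distance is a simplification of the formal Hotelling-T² limits often used in MVDA software:

```python
import numpy as np

def pca_scores(batch_data, n_components=2):
    """Project mean-centered batch records onto their leading principal axes."""
    x = np.asarray(batch_data, float)
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n_components].T

# Synthetic example: 50 batches x 6 process parameters, one aberrant batch.
rng = np.random.default_rng(1)
runs = rng.normal(0.0, 1.0, (50, 6))
runs[0] += 8.0                       # stand-in for a rejected batch

scores = pca_scores(runs)
# Batches far from the center of score space are candidate outliers.
dist = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
outlier = int(np.argmax(dist))
```

On real run data the score plot compresses dozens of correlated process parameters into two axes, which is how known rejected batches become visually separable.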

  19. Development of a process for efficient use of CO2 from flue gases in the production of photosynthetic microorganisms.

    PubMed

    González-López, C V; Acién Fernández, F G; Fernández-Sevilla, J M; Sánchez Fernández, J F; Molina Grima, E

    2012-07-01

    A new methodology to efficiently use flue gases as a CO2 source in the production of photosynthetic microorganisms is proposed. The CO2 is absorbed in an aqueous phase that is then regenerated by microalgae. Carbonated solutions could absorb up to 80% of the CO2 from the diluted gas, reaching total inorganic carbon (TIC) concentrations of up to 2.0 g/L. The pH of the solution was maintained at 8.0-10.0 by the bicarbonate/carbonate buffer, so it is compatible with biological regeneration. The absorption process was modeled and the kinetic parameters were determined. Anabaena sp. was shown to tolerate the pH (8.0-10.0) and TIC (up to 2.0 g/L) conditions imposed by the absorption step. Experiments on the regeneration of the liquid phase demonstrated the feasibility of the overall process, converting CO2 into organic matter. The developed process avoids heating to regenerate the liquid while maximizing the efficiency of CO2 use, which is relevant to achieving the commercial production of biofuels from microalgae. Copyright © 2012 Wiley Periodicals, Inc.

  20. Characterization and control of fungal morphology for improved production performance in biotechnology.

    PubMed

    Krull, Rainer; Wucherpfennig, Thomas; Esfandabadi, Manely Eslahpazir; Walisko, Robert; Melzer, Guido; Hempel, Dietmar C; Kampen, Ingo; Kwade, Arno; Wittmann, Christoph

    2013-01-20

    Filamentous fungi have been widely applied in industrial biotechnology for many decades. In submerged culture processes, they typically exhibit a complex morphological life cycle that is related to production performance, a link that is of high interest for process optimization. The fungal forms can vary from dense spherical pellets to viscous mycelia. The resulting morphology has been shown to be influenced strongly by process parameters, including power input through stirring and aeration, mass transfer characteristics, pH value, osmolality and the presence of solid micro-particles. The surface properties of fungal spores and hyphae also play a role. Because of their high industrial relevance, the past years have seen a substantial development of tools and techniques to characterize the growth of fungi and obtain quantitative estimates of their morphological properties. Based on the novel insights available from such studies, more recent work has aimed at the precise control of morphology, i.e., morphology engineering, to produce superior bioprocesses with filamentous fungi. Copyright © 2012 Elsevier B.V. All rights reserved.
