Sample records for multi-step direct model

  1. [Research progress of multi-model medical image fusion and recognition].

    PubMed

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition have a wide range of applications, such as lesion localization, cancer staging and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. First, the problem of multi-model medical image fusion and recognition is discussed, and its advantages and key steps are outlined. Second, three fusion strategies are reviewed from an algorithmic point of view, and four fusion recognition structures are discussed. Third, difficulties, challenges and possible future research directions are discussed.

  2. Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback

    NASA Astrophysics Data System (ADS)

    Zhang, Wenle; Liu, Jianchang

    2016-04-01

    This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of the network several steps ahead and adding this information to the consensus protocol, the asymptotic convergence factor is shown to improve by a power of q + 1 compared with the routine consensus. The difficult problem of selecting the optimal control gain is addressed by introducing a variable called the convergence step. In addition, ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, ultra-fast consensus with respect to a reference model and robust consensus are discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
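
    A minimal numpy sketch of why prediction speeds convergence, under the assumption (ours, for illustration) that the routine protocol iterates x(k+1) = Px(k) with P = I - eps*L, and that a q-step predictive protocol acts like applying P a total of q + 1 times per update. The graph, gain and initial states below are hypothetical, not the paper's examples:

    ```python
    import numpy as np

    # Hypothetical directed chain 0 -> 1 -> 2 -> 3 (contains a spanning tree)
    L = np.array([[ 0,  0,  0, 0],
                  [-1,  1,  0, 0],
                  [ 0, -1,  1, 0],
                  [ 0,  0, -1, 1]], dtype=float)

    eps = 0.5                        # illustrative control gain
    P = np.eye(4) - eps * L          # routine consensus iteration matrix
    q = 2                            # number of predicted steps

    x_routine = np.array([1.0, 3.0, -2.0, 0.5])
    x_pred = x_routine.copy()
    for k in range(10):
        x_routine = P @ x_routine                            # routine protocol
        x_pred = np.linalg.matrix_power(P, q + 1) @ x_pred   # predictive protocol
        # disagreement (max - min) shrinks roughly (q+1) times faster with prediction
        print(k, np.ptp(x_routine), np.ptp(x_pred))
    ```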

  3. Multi-Species Fluxes for the Parallel Quiet Direct Simulation (QDS) Method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Lim, C.-W.; Jermy, M. C.; Krumdieck, S. P.; Smith, M. R.; Lin, Y.-J.; Wu, J.-S.

    2011-05-01

    Fluxes of multiple species are implemented in the Quiet Direct Simulation (QDS) scheme for gas flows. Each molecular species streams independently, and all species are brought to local equilibrium at the end of each time step. The multi-species scheme is compared to a DSMC simulation on a test case of Mach 20 flow of a xenon/helium mixture over a forward-facing step. Depletion of the heavier species in the bow shock and the near-wall layer is seen. The multi-species QDS code is then used to model the flow in a pulsed-pressure chemical vapour deposition reactor set up for carbon film deposition. The injected gas is a mixture of methane and hydrogen. The temporal development of the spatial distribution of methane over the substrate is tracked.

  4. Technical note: Equivalent genomic models with a residual polygenic effect.

    PubMed

    Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R

    2016-03-01

    Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or a single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was also verified for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers with the genes or causal mutations responsible for the genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for this incomplete linkage disequilibrium. In this study, we first prove that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals when a residual polygenic effect is included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model into its components, the direct genomic value and the residual polygenic effect. Third, we prove that the equivalence of these 2 genomic models with a residual polygenic effect also holds for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal predictions for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for the estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  5. Distributed-observer-based cooperative control for synchronization of linear discrete-time multi-agent systems.

    PubMed

    Liang, Hongjing; Zhang, Huaguang; Wang, Zhanshan

    2015-11-01

    This paper considers output synchronization of discrete-time multi-agent systems with directed communication topologies. The directed communication graph contains a spanning tree with the exosystem as its root. Distributed observer-based consensus protocols are proposed, based on the relative outputs of neighboring agents, and a multi-step algorithm is presented to construct the observer-based protocols. The synchronization problem is solved in light of the discrete-time algebraic Riccati equation and the internal model principle. Finally, a numerical simulation is provided to verify the effectiveness of the theoretical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Coupling of a 3D Finite Element Model of Cardiac Ventricular Mechanics to Lumped Systems Models of the Systemic and Pulmonic Circulation

    PubMed Central

    Kerckhoffs, Roy C. P.; Neal, Maxwell L.; Gu, Quan; Bassingthwaighte, James B.; Omens, Jeff H.; McCulloch, Andrew D.

    2010-01-01

    In this study we present a novel, robust method to couple finite element (FE) models of cardiac mechanics to systems models of the circulation (CIRC), independent of cardiac phase. For each time step through a cardiac cycle, left and right ventricular pressures were calculated using ventricular compliances from the FE and CIRC models. These pressures served as boundary conditions in the FE and CIRC models. In succeeding steps, pressures were updated to minimize the cavity volume error (FE minus CIRC volume) using Newton iterations. Coupling was achieved when a predefined criterion for the volume error was satisfied. Initial conditions for the multi-scale model were obtained by replacing the FE model with a varying elastance model, which takes direct ventricular interactions into account. Applying the coupling, a novel multi-scale model of the canine cardiovascular system was developed. Global hemodynamics and regional mechanics were calculated for multiple beats in two separate simulations, with a left ventricular ischemic region and with pulmonary artery constriction, respectively. After the interventions, global hemodynamics changed due to direct and indirect ventricular interactions, in agreement with previously published experimental results. The coupling method allows for simulations of multiple cardiac cycles for normal physiology and pathophysiology, encompassing levels from cell to system. PMID:17111210
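
    A toy sketch of the coupling loop described above: Newton iterations drive the FE-minus-CIRC volume error to zero by updating the two ventricular pressures. The linear stand-in models and all numbers are hypothetical placeholders for the real FE and circulation model evaluations:

    ```python
    import numpy as np

    # Hypothetical stand-ins: each returns cavity volumes (LV, RV) for given
    # cavity pressures. In the real coupling these are full model evaluations.
    def fe_volumes(p):
        return np.array([60.0 + 2.0 * p[0], 55.0 + 3.0 * p[1]])

    def circ_volumes(p):
        return np.array([80.0 - 1.5 * p[0], 70.0 - 1.0 * p[1]])

    def volume_error(p):
        return fe_volumes(p) - circ_volumes(p)   # coupling residual (FE minus CIRC)

    p = np.array([5.0, 2.0])                     # initial pressure guess
    for _ in range(20):
        r = volume_error(p)
        if np.max(np.abs(r)) < 1e-6:             # predefined coupling criterion
            break
        # finite-difference Jacobian of the residual w.r.t. the pressures
        J = np.zeros((2, 2))
        for j in range(2):
            dp = np.zeros(2); dp[j] = 1e-4
            J[:, j] = (volume_error(p + dp) - r) / 1e-4
        p = p - np.linalg.solve(J, r)            # Newton update of the pressures
    print("coupled pressures:", p)
    ```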

  7. Numerical parametric studies of spray combustion instability

    NASA Technical Reports Server (NTRS)

    Pindera, M. Z.

    1993-01-01

    A coupled numerical algorithm has been developed for studies of combustion instabilities in spray-driven liquid rocket engines. The model couples gas and liquid phase physics using the method of fractional steps. Also introduced is a novel, efficient methodology for accounting for spray formation through direct solution of the liquid phase equations. Preliminary parametric studies show marked sensitivity of spray penetration and geometry to droplet diameter, treatment of the liquid core, and acoustic interactions. Less sensitivity was shown to the combustion model type, although more rigorous (multi-step) formulations may be needed for the differences to become apparent.

  8. Multi-step-ahead crude oil price forecasting using a hybrid grey wave model

    NASA Astrophysics Data System (ADS)

    Chen, Yanhui; Zhang, Chuan; He, Kaijian; Zheng, Aibing

    2018-07-01

    Crude oil is crucial to the operation and economic well-being of modern society, and large swings in the crude oil price cause panic in the global economy. Many factors influence the crude oil price, and its prediction remains a difficult research problem widely discussed among researchers. Based on research on the Heterogeneous Market Hypothesis and on the relationships between the crude oil price and macroeconomic factors, exchange markets and stock markets, this paper proposes a hybrid grey wave forecasting model, combined with Random Walk (RW)/ARMA, to forecast the crude oil price multiple steps ahead. More specifically, we use the grey wave forecasting model to capture the periodic characteristics of the crude oil price and ARMA/RW to simulate the daily random movements. The innovation also comes from using information in the time series graph to forecast the crude oil price, since grey wave forecasting is a graphical prediction method. The empirical results demonstrate that, based on daily crude oil price data, the hybrid grey wave forecasting model performs well in 15- to 20-step-ahead prediction and consistently dominates ARMA and Random Walk in correct direction prediction.

  9. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.

  10. Emergent Stratification in Solid Tumors Selects for Reduced Cohesion of Tumor Cells: A Multi-Cell, Virtual-Tissue Model of Tumor Evolution Using CompuCell3D.

    PubMed

    Swat, Maciej H; Thomas, Gilberto L; Shirinifard, Abbas; Clendenon, Sherry G; Glazier, James A

    2015-01-01

    Tumor cells and structure both evolve due to heritable variation of cell behaviors and selection over periods of weeks to years (somatic evolution). Micro-environmental factors exert selection pressures on tumor-cell behaviors, which influence both the rate and direction of evolution of specific behaviors, especially the development of tumor-cell aggression and resistance to chemotherapies. In this paper, we present, step-by-step, the development of a multi-cell, virtual-tissue model of tumor somatic evolution, simulated using the open-source CompuCell3D modeling environment. Our model includes essential cell behaviors, microenvironmental components and their interactions. Our model provides a platform for exploring selection pressures leading to the evolution of tumor-cell aggression, showing that emergent stratification into regions with different cell survival rates drives the evolution of less cohesive cells with lower levels of cadherins and higher levels of integrins. Such reduced cohesivity is a key hallmark in the progression of many types of solid tumors.

  11. Emergent Stratification in Solid Tumors Selects for Reduced Cohesion of Tumor Cells: A Multi-Cell, Virtual-Tissue Model of Tumor Evolution Using CompuCell3D

    PubMed Central

    Swat, Maciej H.; Thomas, Gilberto L.; Shirinifard, Abbas; Clendenon, Sherry G.; Glazier, James A.

    2015-01-01

    Tumor cells and structure both evolve due to heritable variation of cell behaviors and selection over periods of weeks to years (somatic evolution). Micro-environmental factors exert selection pressures on tumor-cell behaviors, which influence both the rate and direction of evolution of specific behaviors, especially the development of tumor-cell aggression and resistance to chemotherapies. In this paper, we present, step-by-step, the development of a multi-cell, virtual-tissue model of tumor somatic evolution, simulated using the open-source CompuCell3D modeling environment. Our model includes essential cell behaviors, microenvironmental components and their interactions. Our model provides a platform for exploring selection pressures leading to the evolution of tumor-cell aggression, showing that emergent stratification into regions with different cell survival rates drives the evolution of less cohesive cells with lower levels of cadherins and higher levels of integrins. Such reduced cohesivity is a key hallmark in the progression of many types of solid tumors. PMID:26083246

  12. Numerical simulation of machining distortions on a forged aerospace component following one-step and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Prete, Antonio Del; Franchi, Rodolfo; Antermite, Fabrizio; Donatiello, Iolanda

    2018-05-01

    Residual stresses appear in a component as a consequence of thermo-mechanical processes (e.g., ring rolling), casting and heat treatments. When machining these kinds of components, distortions arise due to the redistribution of the residual stresses induced by the foregoing process history inside the material. If distortions are excessive, they can lead to a large number of scrap parts. Since dimensional accuracy directly affects engine efficiency, dimensional control of aerospace components is a non-trivial issue. In this paper, the problem of distortions of large thin-walled aeroengine components in nickel superalloys has been addressed. In order to estimate distortions on inner diameters after internal turning operations, a 3D Finite Element Method (FEM) analysis has been developed on a real industrial test case. The whole process history has been taken into account by developing FEM models of the ring rolling process and heat treatments. Three different strategies of the ring rolling process have been studied, and the combination of related parameters yielding the best dimensional accuracy has been found. Furthermore, grain size evolution and recrystallization phenomena during the manufacturing process have been numerically investigated using a semi-empirical Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The volume subtractions have been simulated by boolean trimming: both one-step and multi-step analyses have been performed. The multi-step procedure made it possible to choose the best material removal sequence in order to reduce machining distortions.

  13. Description of bioremediation of soils using the model of a multistep system of microorganisms

    NASA Astrophysics Data System (ADS)

    Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.

    2018-01-01

    The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system feeds on the metabolic products of the previous step. Six different models of the multi-step system are considered. The model coefficients were determined by minimizing the residual between calculated and experimental data, using an original algorithm based on the Levenberg-Marquardt method combined with the Monte Carlo method for finding the initial approximation.
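
    A hedged sketch of the fitting procedure named above: a Monte Carlo search supplies the initial approximation, which Levenberg-Marquardt then refines. The two-step reaction chain, rate laws and synthetic data below are illustrative assumptions, not the paper's six models:

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    # Hypothetical two-step chain: step-1 microbes (x1) consume pollutant s,
    # step-2 microbes (x2) feed on the products of step 1; k holds rate constants.
    def simulate(k, t, y0):
        def rhs(y, _t):
            s, x1, x2 = y
            return [-k[0] * s * x1,
                     k[1] * s * x1 - k[2] * x1,
                     k[2] * x1 - k[3] * x2]
        return odeint(rhs, y0, t)

    t = np.linspace(0.0, 10.0, 25)
    y0 = [1.0, 0.1, 0.05]
    k_true = np.array([0.6, 0.4, 0.2, 0.3])
    data = simulate(k_true, t, y0) + 0.01 * np.random.default_rng(1).normal(size=(25, 3))

    residuals = lambda k: (simulate(k, t, y0) - data).ravel()

    # Monte Carlo search for the initial approximation ...
    rng = np.random.default_rng(0)
    guesses = rng.uniform(0.01, 1.0, size=(200, 4))
    k0 = min(guesses, key=lambda k: np.sum(residuals(k) ** 2))

    # ... followed by Levenberg-Marquardt refinement
    fit = least_squares(residuals, k0, method="lm")
    print("fitted rate constants:", fit.x)
    ```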

  14. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    PubMed

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often the task involves predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each prediction horizon as an independent model; they generally perform poorly in practical applications (a sketch of both baselines follows below). In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to the various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
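
    For context, a small numpy sketch of the two baseline strategies the abstract contrasts with VLM models: (a) iterative reuse of a one-step AR model, where errors feed back, and (b) an independent direct AR model per horizon. The lag order, horizons and synthetic series are arbitrary choices for illustration:

    ```python
    import numpy as np

    # Plain least-squares AR fit predicting `horizon` steps past the lag window.
    def fit_ar(series, lags, horizon=1):
        X = np.column_stack([series[i:len(series) - lags + i - horizon + 1]
                             for i in range(lags)])
        y = series[lags + horizon - 1:]
        return np.linalg.lstsq(X, y, rcond=None)[0]

    rng = np.random.default_rng(0)
    s = np.sin(np.arange(300) * 0.2) + 0.1 * rng.normal(size=300)
    lags, H = 5, 10

    # (a) iterative: recurse the one-step model H times
    w1 = fit_ar(s, lags, horizon=1)
    window, iterative = list(s[-lags:]), []
    for _ in range(H):
        nxt = np.dot(w1, window[-lags:])
        iterative.append(nxt)
        window.append(nxt)              # predictions feed back, errors accumulate

    # (b) independent: one direct model per horizon h
    independent = [np.dot(fit_ar(s, lags, horizon=h), s[-lags:])
                   for h in range(1, H + 1)]
    print(iterative, independent, sep="\n")
    ```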

  15. A genetically optimized kinetic model for ethanol electro-oxidation on Pt-based binary catalysts used in direct ethanol fuel cells

    NASA Astrophysics Data System (ADS)

    Sánchez-Monreal, Juan; García-Salaberri, Pablo A.; Vera, Marcos

    2017-09-01

    A one-dimensional model is proposed for the anode of a liquid-feed direct ethanol fuel cell. The complex kinetics of the ethanol electro-oxidation reaction is described using a multi-step reaction mechanism that considers free and adsorbed intermediate species on Pt-based binary catalysts. The adsorbed species are modeled using coverage factors to account for the blockage of the active reaction sites on the catalyst surface. The reaction rates are described by Butler-Volmer equations that are coupled to a one-dimensional mass transport model, which incorporates the effect of ethanol and acetaldehyde crossover. The proposed kinetic model circumvents the acetaldehyde bottleneck effect observed in previous studies by incorporating CH3CHOHads among the adsorbed intermediates. A multi-objective genetic algorithm is used to determine the reaction constants from anode polarization and product selectivity data obtained from the literature. By adjusting the reaction constants using the methodology developed here, different catalyst layers could be modeled and their selectivities successfully reproduced.
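
    To make one such rate expression concrete, here is a minimal sketch of a single adsorption step whose Butler-Volmer rate is throttled by a coverage factor, as the abstract describes. The rate constant, transfer coefficient and operating point are illustrative assumptions, not the paper's fitted values:

    ```python
    import numpy as np

    F, R, T = 96485.0, 8.314, 298.15   # Faraday constant, gas constant, temperature

    # Rate of one hypothetical surface step: proportional to ethanol concentration
    # and the free-site fraction (1 - theta), driven by a Butler-Volmer term in
    # the overpotential eta. Constants are illustrative only.
    def step_rate(eta, c_ethanol, theta, k0=1e-6, alpha=0.5, n=1):
        free_sites = 1.0 - theta                        # coverage factor
        forward = np.exp(alpha * n * F * eta / (R * T))
        backward = np.exp(-(1.0 - alpha) * n * F * eta / (R * T))
        return k0 * c_ethanol * free_sites * (forward - backward)

    print(step_rate(eta=0.3, c_ethanol=1.0, theta=0.4))
    ```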

  16. Objective biofidelity rating of a numerical human occupant model in frontal to lateral impact.

    PubMed

    de Lange, Ronald; van Rooij, Lex; Mooi, Herman; Wismans, Jac

    2005-11-01

    Both hardware crash dummies and mathematical human models have been developed largely using the same biomechanical data, and for both, biofidelity is a main requirement. Since numerical modeling is not bound by hardware crash dummy design constraints, it allows more detailed modeling of the human body and offers biofidelity in multiple directions. In this study the multi-directional biofidelity of the MADYMO human occupant model is assessed, with a view to protecting occupants under various impact conditions. To evaluate the model's biofidelity, generally accepted requirements were used for frontal and lateral impact: tests proposed by EEVC and NHTSA, and tests specified by ISO TR9790, respectively. A subset of the specified experiments was simulated with the human model. For lateral impact, the results were objectively rated according to the ISO protocol. Since no rating protocol was available for frontal impact, the ISO rating scheme for lateral impact was used for frontal impact as far as possible. As a result, two scores express the overall model biofidelity for frontal and lateral impact, while individual ratings provide insight into the quality at the body-segment level. The results were compared with those published for the THOR and WorldSID dummies, showing that the mathematical model exhibits a high level of multi-directional biofidelity. In addition, the performance of the human model in the NBDL 11G oblique test indicates valid behavior of the model in intermediate directions as well. A new aspect of this study is the objective assessment of the multi-directional biofidelity of a mathematical human model according to accepted requirements. Although hardware dummies may always be used in regulations, it is expected that virtual testing with human models will serve to extrapolate outside the hardware test environment. This study was a first step towards simulating a wider range of impact conditions, such as angled impact and rollover.

  17. (n,γ) Experiments on tin isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baramsai, B.; Mitchell, G. E.; Walker, C. L.

    2013-04-19

    Neutron capture experiments on highly enriched 117,119Sn isotopes were performed with the DANCE detector array located at the Los Alamos Neutron Science Center. The DANCE detector provides detailed information about the multi-step γ-ray cascade following neutron capture. Analysis of the experimental data provides important information to improve understanding of the neutron capture reaction, including a test of the statistical model, the assignment of spins and parities of neutron resonances, and information concerning the Photon Strength Function (PSF) and Level Density (LD) below the neutron separation energy. Preliminary results for the (n,γ) reaction on 117,119Sn are presented. Resonance spins of the odd-A tin isotopes were almost completely unknown. Resonance spins and parities have been assigned via analysis of the multi-step γ-ray spectra and directional correlations.

  18. Multistep cascade annihilations of dark matter and the Galactic Center excess

    DOE PAGES

    Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.

    2015-05-26

    If dark matter is embedded in a non-trivial dark sector, it may annihilate and decay to lighter dark-sector states which subsequently decay to the Standard Model. Such scenarios - with annihilation followed by cascading dark-sector decays - can explain the apparent excess GeV gamma-rays identified in the central Milky Way, while evading bounds from dark matter direct detection experiments. Each 'step' in the cascade will modify the observable signatures of dark matter annihilation and decay, shifting the resulting photons and other final state particles to lower energies and broadening their spectra. We explore, in a model-independent way, the effect of multi-step dark-sector cascades on the preferred regions of parameter space to explain the GeV excess. We find that the broadening effects of multi-step cascades can admit final states dominated by particles that would usually produce too sharply peaked photon spectra; in general, if the cascades are hierarchical (each particle decays to substantially lighter particles), the preferred mass range for the dark matter is in all cases 20-150 GeV. Decay chains that have nearly-degenerate steps, where the products are close to half the mass of the progenitor, can admit much higher DM masses. We map out the region of mass/cross-section parameter space where cascades (degenerate, hierarchical or a combination) can fit the signal, for a range of final states. In the current paper, we study multi-step cascades in the context of explaining the GeV excess, but many aspects of our results are general and can be extended to other applications.

  19. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  20. Impact of user influence on information multi-step communication in a micro-blog

    NASA Astrophysics Data System (ADS)

    Wu, Yue; Hu, Yong; He, Xiao-Hai; Deng, Ken

    2014-06-01

    User influence is generally considered one of the most critical factors affecting information cascade spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on multi-step information communication in a micro-blog. The steps of information communication are divided into first-step and non-first-step, and user influence is classified into five dimensions. Actual data from the Sina micro-blog are collected to construct the model using a structural equation approach based on the Partial Least Squares (PLS) technique. Our experimental results indicate that the number of fans and their authority significantly impact first-step communication. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to non-first-step communication, but authority is found to have much less relation to it.
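
    The record names Partial Least Squares as the estimation technique; the sketch below uses scikit-learn's PLSRegression as a rough stand-in for a PLS path model, with five synthetic influence dimensions and a synthetic first-step outcome (all data and effect sizes are invented for illustration):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    # columns stand in for: number of fans, authority, leader rank,
    # global centrality, weight of friends
    X = rng.normal(size=(500, 5))
    first_step = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2]
                  + 0.1 * rng.normal(size=500))      # first-step communication

    pls = PLSRegression(n_components=2)
    pls.fit(X, first_step)
    print("path-like coefficients:", pls.coef_.ravel())
    ```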

  1. Modeling human faces with multi-image photogrammetry

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2002-03-01

    Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and an automated process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded-light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility to measure dynamic events such as the speech of a person.
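
    A compact sketch of the forward-intersection step (linear DLT triangulation): each calibrated view contributes two linear equations, and the 3-D point is the least-squares solution. The camera matrices in the demo are hypothetical, not the paper's calibration results:

    ```python
    import numpy as np

    # Given 3x4 projection matrices P_i and matched image points (u_i, v_i),
    # solve for the 3-D point by linear least squares (SVD null vector).
    def forward_intersection(projections, points2d):
        rows = []
        for P, (u, v) in zip(projections, points2d):
            rows.append(u * P[2] - P[0])   # each view adds two equations
            rows.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        X = vt[-1]
        return X[:3] / X[3]                # dehomogenize

    # Hypothetical demo: two cameras observing the point (1, 2, 10)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    pts = [(0.1, 0.2), (0.0, 0.2)]
    print(forward_intersection([P1, P2], pts))   # ~ [1, 2, 10]
    ```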

  2. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    NASA Astrophysics Data System (ADS)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Highlights: A discrete wavelet transform was applied to decompose the ANN and ANFIS inputs. A novel WNF approach with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead using multi-variate inputs. Forecasting accuracy for peak values and at longer lead times was significantly improved.
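
    A minimal sketch of the wavelet pre-processing step mentioned in the highlights, assuming the common practice of reconstructing one sub-series per decomposition band and feeding each to the data-driven model (the series, wavelet and level are arbitrary; model training itself is omitted):

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    flow = np.cumsum(rng.normal(size=512)) + 50.0     # synthetic daily flow series

    coeffs = pywt.wavedec(flow, "db4", level=3)       # [cA3, cD3, cD2, cD1]

    # reconstruct one full-length sub-series per band by zeroing the others
    subseries = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subseries.append(pywt.waverec(kept, "db4")[:len(flow)])
    # each sub-series becomes one input to the ANN/ANFIS model
    print(len(subseries), subseries[0].shape)
    ```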

  3. Evaluating the compatibility of multi-functional and intensive urban land uses

    NASA Astrophysics Data System (ADS)

    Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M.

    2007-12-01

    This research is aimed at developing a model for assessing land use compatibility in densely built-up urban areas. A new model was developed by combining a suite of existing methods and tools: geographical information systems, Delphi methods and spatial decision support tools, namely multi-criteria evaluation analysis, the analytical hierarchy process and the ordered weighted average method. The developed model can calculate land use compatibility in both horizontal and vertical directions. Furthermore, the compatibility between the use of each floor in a building and its neighboring land uses can be evaluated. The method was tested in a built-up urban area located in Tehran, the capital city of Iran. The results show that the model is robust in clarifying different levels of physical compatibility between neighboring land uses. This paper describes the various steps and processes of developing the proposed land use compatibility evaluation model (CEM).

  4. ATR architecture for multisensor fusion

    NASA Astrophysics Data System (ADS)

    Hamilton, Mark K.; Kipp, Teresa A.

    1996-06-01

    The work of the U.S. Army Research Laboratory (ARL) in the area of algorithms for the identification of static military targets in single-frame electro-optical (EO) imagery has demonstrated great potential for platform-based automatic target identification (ATI). Here, the term identification means being able to tell the difference between two military vehicles, e.g., the M60 from the T72. ARL's work includes not only single-sensor forward-looking infrared (FLIR) ATI algorithms, but also multi-sensor ATI algorithms. We briefly discuss ARL's hybrid model-based/data-learning strategy for ATI, which represents a significant step forward in ATI algorithm design. For example, in the case of single-sensor FLIR it allows the human algorithm designer to build directly into the algorithm knowledge that can be adequately modeled at this time, such as the target geometry, which directly translates into the target silhouette in the FLIR realm. In addition, it allows structure that is not currently well understood (i.e., adequately modeled) to be incorporated through automated data-learning algorithms, which in a FLIR directly translates into an internal thermal target structure signature. This paper shows the direct applicability of this strategy to both the single-sensor FLIR and the multi-sensor FLIR and laser radar cases.

  5. Integration of topological modification within the modeling of multi-physics systems: Application to a Pogo-stick

    NASA Astrophysics Data System (ADS)

    Abdeljabbar Kharrat, Nourhene; Plateaux, Régis; Miladi Chaabane, Mariem; Choley, Jean-Yves; Karra, Chafik; Haddar, Mohamed

    2018-05-01

    The present work tackles the modeling of multi-physics systems using a topological approach, proceeding with a new methodology that applies a topological modification to the structure of systems. A comparison with Magos' methodology is then made; their common ground is the use of connectivity within systems. The comparison and analysis of the different types of modeling show the importance of the topological methodology through the integration of the topological modification into the topological structure of a multi-physics system. In order to validate this methodology, the case of a Pogo-stick is studied. The first step consists in generating a topological graph of the system. The connectivity step then takes the contact with the ground into account. In the last step of this research, the MGS language (Modeling of General System) is used to model the system through equations. Finally, the results are compared to those obtained with MODELICA. This proposed methodology may therefore be generalized to model multi-physics systems that can be considered as sets of local elements.

  6. Characterization and multi-step transketolase-ω-transaminase bioconversions in an immobilized enzyme microreactor (IEMR) with packed tube.

    PubMed

    Halim, Amanatuzzakiah Abdul; Szita, Nicolas; Baganz, Frank

    2013-12-01

    The concept of de novo metabolic engineering through novel synthetic pathways offers new directions for multi-step enzymatic synthesis of complex molecules. This has been complemented by recent progress in performing enzymatic reactions using immobilized enzyme microreactors (IEMR). This work is concerned with the construction of de novo designed enzyme pathways in a microreactor synthesizing chiral molecules. An interesting compound, commonly used as a building block in several pharmaceutical syntheses, is a single diastereoisomer of 2-amino-1,3,4-butanetriol (ABT). This chiral amino alcohol can be synthesized from simple achiral substrates using two enzymes, transketolase (TK) and transaminase (TAm). Here we describe the development of an IEMR using His6-tagged TK and TAm immobilized onto Ni-NTA agarose beads and packed into tubes to enable multi-step enzyme reactions. The kinetic parameters of both enzymes were first determined using single IEMRs evaluated with a kinetic model developed for packed bed reactors. The Km(app) for both enzymes appeared to be flow rate dependent, while the turnover number kcat was reduced 3-fold compared to solution-phase TK and TAm reactions. For the multi-step enzyme reaction, single IEMRs were cascaded in series, whereby the first enzyme, TK, catalyzed a model reaction of lithium hydroxypyruvate (HPA) and glycolaldehyde (GA) to L-erythrulose (ERY), and the second IEMR unit with immobilized TAm converted ERY into ABT using (S)-α-methylbenzylamine (MBA) as the amine donor. With an initial substrate mixture of 60 mM HPA and GA (each) and 6 mM MBA, the coupled reaction reached approximately 83% conversion in 20 min at the lowest flow rate. The ability to synthesize a chiral pharmaceutical intermediate, ABT, in a relatively short time demonstrates that this IEMR system is a powerful tool for the construction and evaluation of de novo pathways as well as for the determination of enzyme kinetics. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
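
    To make the packed-bed kinetics concrete, here is a hedged plug-flow sketch of a single IEMR: a substrate balance dS/dV = -r(S)/Q with Michaelis-Menten kinetics integrated along the bed. All parameter values are illustrative, not the fitted TK/TAm constants from the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    Q = 5.0e-3            # volumetric flow rate, mL/min (illustrative)
    V = 0.1               # packed-bed volume, mL
    vmax, Km = 2.0, 8.0   # mM/min and mM (apparent, flow-rate dependent)

    # substrate balance along the bed: dS/dV = -vmax*S/(Km+S)/Q
    def dS_dV(v, S):
        return -vmax * S[0] / (Km + S[0]) / Q

    S_in = 60.0           # inlet substrate concentration, mM
    sol = solve_ivp(dS_dV, (0.0, V), [S_in])
    S_out = sol.y[0, -1]
    print(f"conversion: {100 * (1 - S_out / S_in):.1f}%")
    ```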

  7. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de

    2016-02-15

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn’s Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  8. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    NASA Astrophysics Data System (ADS)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
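
    The proposed sampler is a random-walk Metropolis algorithm; the sketch below shows the idea for a single vibrational mode with a Boltzmann-weighted simple-harmonic-oscillator target. The characteristic temperature, gas temperature and proposal width are illustrative, and the real code samples multiple modes per time step:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # relative population of SHO level n at temperature T (illustrative values)
    def target(n, theta_vib=3371.0, T=5000.0):
        return np.exp(-n * theta_vib / T) if n >= 0 else 0.0

    n = 0                                          # current quantum level
    samples = []
    for _ in range(10_000):
        proposal = n + rng.integers(-3, 4)         # symmetric random-walk step
        if rng.random() < target(proposal) / target(n):
            n = proposal                           # Metropolis accept
        samples.append(n)
    print("mean vibrational level:", np.mean(samples))
    ```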

  9. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    PubMed

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3-5 million cases of severe illness and 250 000-500 000 deaths annually. We urgently need an accurate multi-step-ahead time-series forecasting model to help hospitals perform dynamic assignment of beds to influenza patients over the annually varying influenza season, and to aid pharmaceutical companies in formulating flexible vaccine manufacturing plans for the yearly changing influenza vaccine. In this study, we utilised four different multi-step prediction algorithms in the long short-term memory (LSTM) framework. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy. The mean absolute percentage errors from two- to 13-step-ahead prediction of the US influenza-like illness rates were all <15%, averaging 12.930%. To the best of our knowledge, this is the first time that LSTM has been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and thereby help prevent and control influenza worldwide.
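
    A rough Keras sketch of the "multiple single-output" strategy the abstract reports as most accurate: one network per forecast horizon, each predicting a single future value. The layer count (smaller than the paper's six-layer structure), window length, horizons and synthetic ILI series are assumptions for illustration:

    ```python
    import numpy as np
    import tensorflow as tf

    def make_model(window):
        # compact stand-in for the paper's deeper LSTM stack
        return tf.keras.Sequential([
            tf.keras.Input(shape=(window, 1)),
            tf.keras.layers.LSTM(32, return_sequences=True),
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1),
        ])

    def windows(series, window, horizon):
        X = np.stack([series[i:i + window]
                      for i in range(len(series) - window - horizon + 1)])
        y = series[window + horizon - 1:]
        return X[..., None], y

    ili = np.abs(np.sin(np.arange(300) * 2 * np.pi / 52)) * 8 + 2  # synthetic ILI %
    models = {}
    for h in (2, 6, 13):                   # a few of the 2- to 13-step horizons
        X, y = windows(ili, window=26, horizon=h)
        m = make_model(26)
        m.compile(optimizer="adam", loss="mae")
        m.fit(X, y, epochs=5, verbose=0)   # one single-output model per horizon
        models[h] = m
    ```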

  10. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    NASA Astrophysics Data System (ADS)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. Coupling these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, which are associated with its 1-D radiation scheme. The small spatial scale of the CRM (~4 km), compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.

  11. Directivity pattern of the sound radiated from axisymmetric stepped plates.

    PubMed

    He, Xiping; Yan, Xiuli; Li, Na

    2016-08-01

    For the purpose of optimal design and efficient use of this kind of stepped plate radiator in air, an approach is developed in this contribution for calculating the directivity pattern of the sound radiated from a stepped plate in flexural vibration with a free edge, based on the Kirchhoff-Love hypothesis and the Rayleigh integral principle. Experimental tests of the directivity pattern were carried out for a fabricated flat plate and for fabricated plates with one-step and two-step radiators. The measured directivity patterns are similar in shape to those obtained with the proposed analytic approach. The agreement between the calculated directivity pattern of a stepped plate and that of its corresponding theoretical piston shows that the former radiator is equivalent to the latter, and that the diffraction field generated by the unbaffled upper surface may be small. It is also shown that the directivity pattern of a stepped radiator is independent of the metallic material but depends on the thickness of the base plate and the resonant frequency: the thicker the base plate, the more directive the radiation. The analytic approach proposed in this work may be adopted for other plates with multiple steps.
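
    Since the stepped plate is shown to be equivalent to a theoretical piston, its far-field main lobe can be previewed with the classic baffled-piston directivity D(θ) = |2·J1(ka·sin θ)/(ka·sin θ)|; the radius and frequency below are illustrative values, not those of the fabricated plates:

    ```python
    import numpy as np
    from scipy.special import j1

    c = 343.0                     # speed of sound in air, m/s
    f, a = 30e3, 0.05             # illustrative resonant frequency (Hz), radius (m)
    k = 2 * np.pi * f / c

    theta = np.linspace(1e-6, np.pi / 2, 500)
    x = k * a * np.sin(theta)
    D = np.abs(2 * j1(x) / x)     # piston directivity, D(0) = 1

    # half-power (-3 dB) beamwidth of the main lobe
    print("beamwidth ~", 2 * np.degrees(theta[D >= 0.708][-1]), "deg")
    ```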

  12. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to reduce the heavy time consumption of simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup of 4-5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
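
    A toy sketch of asynchronous adaptive stepping in the spirit described above: each cell carries its own ODE and step size, and an event heap visits fast-changing cells more often than quiescent ones. The per-cell dynamics and step-size rule are invented for illustration:

    ```python
    import heapq
    import numpy as np

    rng = np.random.default_rng(0)
    state = rng.uniform(0.0, 1.0, size=100)    # one ODE variable per cell
    rate = rng.uniform(0.01, 1.0, size=100)    # cell-specific decay rates

    heap = [(0.0, i) for i in range(100)]      # (next update time, cell index)
    heapq.heapify(heap)
    while heap:
        t, i = heapq.heappop(heap)
        if t >= 10.0:
            continue                           # this cell has reached t_end
        # adaptive local step: small when |dy/dt| is large, capped otherwise
        dt = min(0.5, 0.05 / max(rate[i] * state[i], 1e-9))
        state[i] += dt * (-rate[i] * state[i]) # explicit Euler for dy/dt = -r*y
        heapq.heappush(heap, (t + dt, i))
    print("mean final state:", state.mean())
    ```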

  13. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding values for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age in the Nelore Brazil Program. For the random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as a fixed effect and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in the random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and maternal genetic and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding values obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
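
    To illustrate the machinery shared by both model families, here is a small numpy sketch of how a random regression model turns a coefficient covariance matrix into covariances between arbitrary ages via G = Φ K Φ'. The order-4 coefficient matrix and age scaling are hypothetical, not the estimates from this study:

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legval

    # design matrix of Legendre polynomials evaluated at ages scaled to [-1, 1]
    def legendre_design(ages, order, a_min=0.0, a_max=8.0):
        z = 2.0 * (np.asarray(ages) - a_min) / (a_max - a_min) - 1.0
        return np.column_stack([legval(z, [0] * k + [1]) for k in range(order)])

    K = np.diag([40.0, 15.0, 6.0, 2.0])       # hypothetical coefficient (co)variances
    ages = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]     # birth to mature age, years
    Phi = legendre_design(ages, order=4)
    G = Phi @ K @ Phi.T                       # direct genetic covariances among ages
    print(np.round(G, 1))
    ```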

  14. Improved perovskite phototransistor prepared using multi-step annealing method

    NASA Astrophysics Data System (ADS)

    Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan

    2018-02-01

    Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films often suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, restricting their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of a perovskite photodetector. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of perovskites, which determine the device properties of the phototransistor. Perovskite films treated with the multi-step annealing method tend to form highly uniform, well-crystallized and high-surface-coverage films, which exhibit stronger ultraviolet-visible absorption and photoluminescence spectra compared to perovskites prepared by the conventional one-step annealing process. The perovskite photodetector treated by the one-step direct annealing method shows field-effect mobilities of 0.121 (0.062) cm2 V-1 s-1 for holes (electrons), which increase to 1.01 (0.54) cm2 V-1 s-1 when treated with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of annealing methods on the perovskite phototransistor rather than on optimizing its parameters. These findings show that multi-step annealing is a feasible route to high-performance perovskite-based photodetectors.

  15. Continuous In Vitro Evolution of a Ribozyme that Catalyzes Three Successive Nucleotidyl Addition Reactions

    NASA Technical Reports Server (NTRS)

    McGinness, Kathleen E.; Wright, Martin C.; Joyce, Gerald F.

    2002-01-01

    Variants of the class I ligase ribozyme, which catalyzes joining of the 3' end of a template-bound oligonucleotide to its own 5' end, have been made to evolve in a continuous manner by a simple serial transfer procedure that can be carried out indefinitely. This process was expanded to allow the evolution of ribozymes that catalyze three successive nucleotidyl addition reactions: two template-directed mononucleotide additions followed by RNA ligation. During the development of this behavior, a population of ribozymes was maintained against an overall dilution of more than 10^406. The resulting ribozymes were capable of catalyzing the three-step reaction pathway, with nucleotide addition occurring in either the 5'→3' or the 3'→5' direction. This purely chemical system provides a functional model of a multi-step reaction pathway that is undergoing Darwinian evolution.

  16. Complex supramolecular interfacial tessellation through convergent multi-step reaction of a dissymmetric simple organic precursor

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Qi; Paszkiewicz, Mateusz; Du, Ping; Zhang, Liding; Lin, Tao; Chen, Zhi; Klyatskaya, Svetlana; Ruben, Mario; Seitsonen, Ari P.; Barth, Johannes V.; Klappenberger, Florian

    2018-03-01

    Interfacial supramolecular self-assembly represents a powerful tool for constructing regular and quasicrystalline materials. In particular, complex two-dimensional molecular tessellations, such as semi-regular Archimedean tilings with regular polygons, promise unique properties related to their nontrivial structures. However, their formation is challenging, because current methods are largely limited to the direct assembly of precursors, that is, where structure formation relies on molecular interactions without using chemical transformations. Here, we have chosen ethynyl-iodophenanthrene (which features dissymmetry in both geometry and reactivity) as a single starting precursor to generate the rare semi-regular (3.4.6.4) Archimedean tiling with long-range order on an atomically flat substrate through a multi-step reaction. Intriguingly, the individual chemical transformations converge to form a symmetric alkynyl-Ag-alkynyl complex as the new tecton in high yields. Using a combination of microscopy and X-ray spectroscopy tools, as well as computational modelling, we show that in situ generated catalytic Ag complexes mediate the tecton conversion.

  17. How to use multi-criteria decision analysis methods for reimbursement decision-making in healthcare: a step-by-step guide.

    PubMed

    Diaby, Vakaramoko; Goeree, Ron

    2014-02-01

    In recent years, the quest for more comprehensiveness, structure and transparency in reimbursement decision-making in healthcare has prompted research into alternative decision-making frameworks. In this environment, multi-criteria decision analysis (MCDA) is emerging as a valuable tool to support healthcare decision-making. In this paper, we present the main MCDA decision support methods (elementary methods, value-based measurement models, goal programming models and outranking models) using a case study approach. For each family of methods, an example of how an MCDA model would operate in a real decision-making context is presented from a critical perspective, highlighting the parameter settings, the selection of the appropriate evaluation model, and the role of sensitivity and robustness analyses. This study aims to provide a step-by-step guide on how to use MCDA methods for reimbursement decision-making in healthcare.
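
    A minimal sketch of the simplest family named above, a value-based measurement model (simple additive weighting), including the kind of one-way sensitivity analysis the guide emphasizes. The criteria, weights and performance matrix are invented for illustration:

    ```python
    import numpy as np

    criteria = ["effectiveness", "safety", "cost_savings", "evidence_quality"]
    weights = np.array([0.4, 0.3, 0.2, 0.1])       # elicited from decision makers

    # performance matrix: rows = candidate treatments, columns = criteria (0-1 scale)
    performance = np.array([[0.8, 0.6, 0.3, 0.7],
                            [0.5, 0.9, 0.6, 0.5],
                            [0.6, 0.7, 0.9, 0.4]])

    scores = performance @ weights                  # simple additive weighting
    ranking = np.argsort(scores)[::-1]
    print("overall scores:", np.round(scores, 3), "best option:", ranking[0])

    # one-way sensitivity analysis: raise one weight, renormalize, re-score
    w = weights.copy(); w[0] = 0.6; w /= w.sum()
    print("scores with higher effectiveness weight:", np.round(performance @ w, 3))
    ```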

  18. Gravitational waves from the first order electroweak phase transition in the Z3 symmetric singlet scalar model

    NASA Astrophysics Data System (ADS)

    Matsui, Toshinori

    2018-01-01

    Among various scenarios for the baryon asymmetry of the Universe, electroweak baryogenesis is directly connected with the physics of the Higgs sector. We discuss spectra of gravitational waves originating from the strongly first-order phase transition at electroweak symmetry breaking, which is required for a successful electroweak baryogenesis scenario. In the Z3 symmetric singlet scalar model, significant gravitational waves are produced by the multi-step phase transition. We show that the model can be tested by measuring the characteristic spectra of the gravitational waves at future interferometers such as LISA and DECIGO.

  19. BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods

    NASA Astrophysics Data System (ADS)

    Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.

    2017-12-01

    Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with rainfall rates well above the design levels of urban drainage systems, leading to inundation of streets and buildings. A projected increase in the frequency and intensity of heavy rainfall events in many areas, together with ongoing urbanization, may increase pluvial flood losses in the future. Efficient risk assessment of and adaptation to pluvial floods requires a quantification of the flood risk. Few loss models have been developed specifically for pluvial floods; these models usually use simple water-level or rainfall loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and content damage to private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In a first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In a second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian Networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. Loss predictions are made through Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. With the ability to cope with incomplete information and use expert knowledge, as well as inherently providing quantitative uncertainty information, loss models based on BNs are shown to be superior to deterministic approaches for pluvial flood risk assessment.

  20. How Different kinds of Communication and the Mass Media Affect Tourism.

    DTIC Science & Technology

    1984-12-01

    C. Criticism of the Two-Step Flow Model ... 3. The Multi-Step Flow Model or Theory ... 4. One-Step Flow Model or ... Criticism of the Two-Step Flow Model: Researchers have identified deficiencies in the two-step flow model. McNelly, for instance, sees mass ... evidence of the relative importance of communication on the diffusion flow. Rogers has criticized the theory on the grounds that neither its ...

  1. Investigation of obstacle effect to improve conjugate heat transfer in backward facing step channel using fast simulation of incompressible flow

    NASA Astrophysics Data System (ADS)

    Nouri-Borujerdi, Ali; Moazezi, Arash

    2018-01-01

    The current study investigates the conjugate heat transfer characteristics of laminar flow in a backward-facing step channel. All of the channel walls are insulated except the lower thick wall, which is held at a constant temperature. The upper wall includes an insulated obstacle perpendicular to the flow direction. The effects of obstacle height and location on the fluid flow and heat transfer are numerically explored for Reynolds numbers in the range 10 ≤ Re ≤ 300. The incompressible Navier-Stokes and thermal energy equations are solved simultaneously in the fluid region by an upwind compact finite difference scheme based on flux-difference splitting in conjunction with the artificial compressibility method. In the thick wall, the energy equation reduces to the Laplace equation. A multi-block approach is used to perform parallel computing and reduce the CPU time; each block is modeled separately, sharing boundary conditions with its neighbors. The program was written in FORTRAN with the OpenMP API. The results showed that the multi-block parallel computing method is a simple, robust scheme with high performance and high-order accuracy. Moreover, the results demonstrated that increasing the Reynolds number and the obstacle height, as well as decreasing the horizontal distance between the obstacle and the step, improves the heat transfer.

  2. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background; the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and weighted centroid calculation are simple but have large errors, especially at low SNR. The Gaussian fitting method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the candidate centroid area and, within this narrowed area, interpolates the pixels for finer segmentation. It then exploits the symmetry of the stellar energy distribution to locate the centroid: assume the current pixel is the star centroid, compute the difference between the sums of the energy on the two sides of the pixel along a symmetric direction (here the transverse and longitudinal directions) over an equal step length (chosen according to the conditions; this paper uses a step length of 9), and take the position where the minimum difference appears as the centroid position in that direction; the other direction is treated the same way. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it calculates centroids well under low-SNR conditions. The method was also applied to a star map taken at a fixed observation site during daytime in the near-infrared band; comparing the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
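
    A pixel-level sketch of the transverse/longitudinal energy-balance search is given below. The sub-pixel accuracy reported in the paper comes from first interpolating the narrowed region, which is omitted here, and the function and parameter names are our own illustrative choices.

      import numpy as np

      def min_energy_diff_centroid(img, x0, y0, half=9, search=3):
          """Refine a coarse centroid guess (x0, y0): the centroid is taken as
          the position where the energy summed on the two sides balances, i.e.
          where the energy difference is minimal (step length = `half`)."""
          best_x, best_dx = x0, np.inf
          for x in range(x0 - search, x0 + search + 1):
              # energy left vs. right of candidate column x (transverse direction)
              d = abs(img[y0 - half:y0 + half + 1, x - half:x].sum()
                      - img[y0 - half:y0 + half + 1, x + 1:x + half + 1].sum())
              if d < best_dx:
                  best_dx, best_x = d, x
          best_y, best_dy = y0, np.inf
          for y in range(y0 - search, y0 + search + 1):
              # energy above vs. below candidate row y (longitudinal direction)
              d = abs(img[y - half:y, best_x - half:best_x + half + 1].sum()
                      - img[y + 1:y + half + 1, best_x - half:best_x + half + 1].sum())
              if d < best_dy:
                  best_dy, best_y = d, y
          return best_x, best_y

      # Synthetic Gaussian star centred at (20, 24) on a weak noisy background.
      yy, xx = np.mgrid[0:48, 0:48]
      img = np.exp(-((xx - 20) ** 2 + (yy - 24) ** 2) / 4.0)
      img += 0.01 * np.random.default_rng(0).random((48, 48))
      print(min_energy_diff_centroid(img, 18, 22))   # -> (20, 24)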

  3. Behavioral modeling and digital compensation of nonlinearity in DFB lasers for multi-band directly modulated radio-over-fiber systems

    NASA Astrophysics Data System (ADS)

    Li, Jianqiang; Yin, Chunjing; Chen, Hao; Yin, Feifei; Dai, Yitang; Xu, Kun

    2014-11-01

    The envisioned C-RAN concept in the wireless communication sector relies on distributed antenna systems (DAS), which consist of a central unit (CU), multiple remote antenna units (RAUs) and the fronthaul links between them. As legacy and emerging wireless communication standards will coexist for a long time, the fronthaul links are preferred to carry multi-band, multi-standard wireless signals. Directly modulated radio-over-fiber (ROF) links can serve as a low-cost option for fronthaul connections conveying multi-band wireless signals. However, directly modulated ROF systems often suffer from the inherent nonlinearities of directly modulated lasers. Unlike ROF systems working in single-band mode, the modulation nonlinearities in multi-band ROF systems can result in both in-band and cross-band nonlinear distortions. In order to address this issue, we have recently investigated the multi-band nonlinear behavior of directly modulated DFB lasers based on a multi-dimensional memory polynomial model. Based on this model, an efficient multi-dimensional baseband digital predistortion technique was developed and experimentally demonstrated for linearization of multi-band directly modulated ROF systems.
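
    The single-band building block that the multi-dimensional model generalizes is the memory polynomial; a minimal sketch is below, with arbitrary illustrative coefficients rather than parameters identified from a real laser. A predistorter would apply the inverse of such a model before the laser.

      import numpy as np

      def memory_polynomial(x, coeffs):
          """y(n) = sum_{k=1..K} sum_{m=0..M} a[k,m] * x(n-m) * |x(n-m)|^(k-1)
          for a complex baseband input x; coeffs has shape (K, M+1)."""
          K, M1 = coeffs.shape
          y = np.zeros_like(x, dtype=complex)
          for m in range(M1):
              xm = np.roll(x, m)       # delayed input x(n-m)
              xm[:m] = 0               # zero-pad instead of wrapping around
              for k in range(1, K + 1):
                  y += coeffs[k - 1, m] * xm * np.abs(xm) ** (k - 1)
          return y

      # Mildly nonlinear "laser" with one memory tap (illustrative values):
      a = np.array([[1.00 + 0.00j, 0.05 + 0.00j],
                    [0.00 + 0.00j, 0.00 + 0.00j],
                    [-0.08 + 0.02j, 0.01 + 0.00j]])
      x = 0.8 * np.exp(2j * np.pi * 0.01 * np.arange(256))   # baseband test tone
      y = memory_polynomial(x, a)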

  4. Application of an Evolution Strategy in Planetary Ephemeris Optimization

    NASA Astrophysics Data System (ADS)

    Mai, E.

    2016-12-01

    Classical planetary ephemeris construction comprises three major steps, which are performed iteratively: simultaneous numerical integration of the coupled equations of motion of a multi-body system (propagator step), reduction of thousands of observations (reduction step), and optimization of various selected model parameters (adjustment step). This traditional approach is challenged by ongoing refinements in force modeling, e.g., the inclusion of many more significant minor bodies, and by an ever-growing number of planetary observations, e.g., vast amounts of spacecraft tracking data. To master the high computational burden and to circumvent the need to invert huge normal equation matrices, we propose an alternative ephemeris construction method. The main idea is to solve the overall optimization problem by a straightforward direct evaluation of the whole set of mathematical formulas involved, rather than to solve it as an inverse problem with all its tacit mathematical assumptions and numerical difficulties. We replace the usual gradient search by a stochastic search, namely an evolution strategy, which is also well suited to exploiting parallel computing capabilities. Furthermore, this new approach enables multi-criteria optimization and time-varying optima. This issue will become important in the future once ephemeris construction is just one part of even larger optimization problems, e.g., the combined and consistent determination of the physical state (orbit, size, shape, rotation, gravity, ...) of celestial bodies (planets, satellites, asteroids, or comets), and if one seeks near-real-time solutions. Here we outline the general idea and discuss first results. As an example, we present a simultaneous optimization of highly correlated asteroidal ring model parameters (total mass and heliocentric radius), based on simulations.
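
    For readers unfamiliar with the optimizer family, a minimal (mu, lambda) evolution strategy is sketched below on a toy two-parameter objective standing in for the ring-model fit; the objective, parameter values and step-size schedule are all illustrative assumptions, not the paper's setup.

      import numpy as np

      def evolution_strategy(objective, x0, sigma=0.1, mu=5, lam=20, iters=100, seed=0):
          """Simple (mu, lambda)-ES: sample lam offspring around the parent mean,
          keep the best mu, recombine by averaging. Returns the final mean."""
          rng = np.random.default_rng(seed)
          mean = np.asarray(x0, dtype=float)
          for _ in range(iters):
              offspring = mean + sigma * rng.standard_normal((lam, mean.size))
              fitness = np.array([objective(o) for o in offspring])
              mean = offspring[np.argsort(fitness)[:mu]].mean(axis=0)
              sigma *= 0.98  # simple deterministic step-size decay
          return mean

      # Toy stand-in for the ring-model fit: true (mass, radius) = (0.3, 2.8).
      target = np.array([0.3, 2.8])
      obj = lambda p: np.sum((p - target) ** 2)
      print(evolution_strategy(obj, x0=[1.0, 1.0], sigma=0.3))  # -> close to [0.3, 2.8]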

  5. Model-independent indirect detection constraints on hidden sector dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.

    2016-06-10

    If dark matter inhabits an expanded “hidden sector”, annihilations may proceed through sequential decays or multi-body final states. We map out the potential signals and current constraints on such a framework in indirect searches, using a model-independent setup based on multi-step hierarchical cascade decays. While remaining agnostic to the details of the hidden sector model, our framework captures the generic broadening of the spectrum of secondary particles (photons, neutrinos, e⁺e⁻ and p̄) relative to the case of direct annihilation to Standard Model particles. We explore how indirect constraints on dark matter annihilation limit the parameter space for such cascade/multi-particle decays. We investigate limits from the cosmic microwave background by Planck, the Fermi measurement of photons from dwarf galaxies, and positron data from AMS-02. The presence of a hidden sector can change the constraints on the dark matter by up to an order of magnitude in either direction (although the effect can be much smaller). We find that generally the bound from the Fermi dwarfs is most constraining for annihilations to photon-rich final states, while AMS-02 is most constraining for electron and muon final states; however, in certain instances the CMB bounds overtake both, due to their approximate independence of the details of the hidden sector cascade. We provide the full set of cascade spectra considered here as publicly available code with examples at http://web.mit.edu/lns/research/CascadeSpectra.html.

  7. A review on machine learning principles for multi-view biological data integration.

    PubMed

    Li, Yifeng; Wu, Fang-Xiang; Ngom, Alioune

    2018-03-01

    Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in strong need of integrative machine learning models for better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review of omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction and association. We show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views into a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have the potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanisms of biological systems.

  8. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  9. Estimation of Soil Moisture with L-band Multi-polarization Radar

    NASA Technical Reports Server (NTRS)

    Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.

    2004-01-01

    Through analyses of a model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band, multi-polarization, 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components: the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Second, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.

  10. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single-model predictions are generally better than any single member model's predictions, even the best calibrated single-model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
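
    The two simplest schemes compared here reduce to a few lines; the sketch below applies SMA and a least-squares-weighted WAM to synthetic "model" outputs, which stand in for the DMIP simulations.

      import numpy as np

      rng = np.random.default_rng(1)
      obs = rng.gamma(2.0, 5.0, 200)                                 # synthetic observed flows
      models = np.column_stack([obs * 0.8 + rng.normal(0, 2, 200),   # biased model 1
                                obs * 1.3 + rng.normal(0, 3, 200),   # biased model 2
                                obs + rng.normal(0, 1, 200)])        # model 3

      sma = models.mean(axis=1)              # Simple Multi-model Average

      # Weighted Average Method: least-squares weights on a training split
      # (which also acts as a linear bias correction of the members).
      train = slice(0, 100)
      w, *_ = np.linalg.lstsq(models[train], obs[train], rcond=None)
      wam = models @ w

      rmse = lambda p: np.sqrt(np.mean((p - obs) ** 2))
      print(f"best single: {min(rmse(models[:, j]) for j in range(3)):.2f}")
      print(f"SMA: {rmse(sma):.2f}  WAM: {rmse(wam):.2f}")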

  11. Multi-step production of a diphoton resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrescu, Bogdan A.; Fox, Patrick J.; Kearney, John

    2017-04-24

    Assuming that the mass peak at 750 GeV reported by the ATLAS and CMS Collaborations is due to a spin-0 particle that decays into two photons, we present two weakly-coupled renormalizable models that lead to different production mechanisms. In one model, a scalar particle produced through gluon fusion decays into the diphoton particle and a light, long-lived pseudoscalar. In another model, a $Z'$ boson produced from the annihilation of a strange-antistrange quark pair undergoes a cascade decay that leads to the diphoton particle and two sterile neutrinos. We show that various kinematic distributions may differentiate these models from the canonical model where the diphoton particle is directly produced in gluon fusion.

  12. Model predictive control design for polytopic uncertain systems by synthesising multi-step prediction scenarios

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue

    2018-01-01

    Common objectives of model predictive control (MPC) design are a large initial feasible region, a low online computational burden, and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. By replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improving the existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.

  13. Upper Mantle Shear Wave Structure Beneath North America From Multi-mode Surface Wave Tomography

    NASA Astrophysics Data System (ADS)

    Yoshizawa, K.; Ekström, G.

    2008-12-01

    The upper mantle structure beneath the North American continent has been investigated from measurements of multi-mode phase speeds of Love and Rayleigh waves. To estimate fundamental-mode and higher-mode phase speeds of surface waves from a single seismogram at regional distances, we have employed a method of nonlinear waveform fitting based on a direct model-parameter search using the neighbourhood algorithm (Yoshizawa & Kennett, 2002). The method of the waveform analysis has been fully automated by employing empirical quantitative measures for evaluating the accuracy/reliability of estimated multi-mode phase dispersion curves, and thus it is helpful in processing the dramatically increasing numbers of seismic data from the latest regional networks such as USArray. As a first step toward modeling the regional anisotropic shear-wave velocity structure of the North American upper mantle with extended vertical resolution, we have applied the method to long-period three-component records of seismic stations in North America, which mostly comprise the GSN and US regional networks as well as the permanent and transportable USArray stations distributed by the IRIS DMC. Preliminary multi-mode phase-speed models show large-scale patterns of isotropic heterogeneity, such as a strong velocity contrast between the western and central/eastern United States, which are consistent with the recent global and regional models (e.g., Marone, et al. 2007; Nettles & Dziewonski, 2008). We will also discuss radial anisotropy of shear wave speed beneath North America from multi-mode dispersion measurements of Love and Rayleigh waves.

  14. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually flattened into a one-dimensional array, causing issues such as the curse of dimensionality and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients acts as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
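
    The two-directional step (operating on matrices instead of flattened vectors) can be sketched as follows; the input matrices are random stand-ins for the time-invariant multi-scale matrices, and the function name is our own.

      import numpy as np

      def two_directional_2dpca(mats, r_rows, r_cols):
          """(2D)^2-PCA: learn row- and column-projection bases from a set of
          equally sized matrices and return the compressed feature matrices."""
          A = np.asarray(mats, dtype=float)
          C = A - A.mean(axis=0)
          # Column-direction covariance (classic 2DPCA): mean of C_i^T C_i
          G_col = np.einsum('nij,nik->jk', C, C) / len(A)
          # Row-direction covariance (alternative 2DPCA): mean of C_i C_i^T
          G_row = np.einsum('nij,nkj->ik', C, C) / len(A)
          X = np.linalg.eigh(G_col)[1][:, ::-1][:, :r_cols]   # right projection
          Z = np.linalg.eigh(G_row)[1][:, ::-1][:, :r_rows]   # left projection
          return np.array([Z.T @ a @ X for a in A]), Z, X

      # 60 random 16x8 "multi-scale matrices" compressed to 4x3 features each.
      feats, Z, X = two_directional_2dpca(np.random.rand(60, 16, 8), 4, 3)
      print(feats.shape)   # (60, 4, 3)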

  15. Self-Regulated Strategy Development Instruction for Teaching Multi-Step Equations to Middle School Students Struggling in Math

    ERIC Educational Resources Information Center

    Cuenca-Carlino, Yojanna; Freeman-Green, Shaqwana; Stephenson, Grant W.; Hauth, Clara

    2016-01-01

    Six middle school students identified as having a specific learning disability or at risk for mathematical difficulties were taught how to solve multi-step equations by using the self-regulated strategy development (SRSD) model of instruction. A multiple-probe-across-pairs design was used to evaluate instructional effects. Instruction was provided…

  16. Variation of nanopore diameter along porous anodic alumina channels by multi-step anodization.

    PubMed

    Lee, Kwang Hong; Lim, Xin Yuan; Wai, Kah Wing; Romanato, Filippo; Wong, Chee Cheong

    2011-02-01

    In order to form tapered nanocapillaries, we investigated a method to vary the nanopore diameter along porous anodic alumina (PAA) channels using multi-step anodization. By anodizing the aluminum either in a single acid (H3PO4) or in multiple acids (H2SO4, oxalic acid and H3PO4) with increasing or decreasing voltage, the diameter of the nanopore along the PAA channel can be varied systematically according to the applied voltages. The pore size along the channel can be enlarged or shrunk in the range of 20 nm to 200 nm. Structural engineering of the template along the film-growth direction can be achieved by deliberately designing a suitable voltage and electrolyte together with the anodization time.

  17. Multi-catalysis cascade reactions based on the methoxycarbonylketene platform: diversity-oriented synthesis of functionalized non-symmetrical malonates for agrochemicals and pharmaceuticals.

    PubMed

    Ramachary, Dhevalapally B; Venkaiah, Chintalapudi; Reddy, Y Vijayendar; Kishor, Mamillapalli

    2009-05-21

    In this paper we describe new multi-catalysis cascade (MCC) reactions for the one-pot synthesis of highly functionalized non-symmetrical malonates. These metal-free reactions are either five-step (olefination/hydrogenation/alkylation/ketenization/esterification) or six-step (olefination/hydrogenation/alkylation/ketenization/esterification/alkylation), and employ aldehydes/ketones, Meldrum's acid, 1,4-dihydropyridine/o-phenylenediamine, diazomethane, alcohols and active ethylene/acetylenes, and involve iminium-, self-, self-, self- and base-catalysis, respectively. Many of the products have direct application in agricultural and pharmaceutical chemistry.

  18. Hydrodynamic model for expansion and collisional relaxation of x-ray laser-excited multi-component nanoplasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saxena, Vikrant, E-mail: vikrant.saxena@desy.de; Hamburg Center for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg; Ziaja, Beata, E-mail: ziaja@mail.desy.de

    The irradiation of an atomic cluster with a femtosecond x-ray free-electron laser pulse results in the formation of a nanoplasma. This typically occurs within a few hundred femtoseconds. By this time the x-ray pulse is over, and the direct photoinduced processes no longer contribute. All created electrons within the nanoplasma are thermalized. The nanoplasma thus formed is a mixture of atoms, electrons, and ions of various charges. While expanding, it undergoes electron impact ionization and three-body recombination. Here we present a hydrodynamic model to describe the dynamics of such multi-component nanoplasmas. The model equations are derived by taking the moments of the corresponding Boltzmann kinetic equations. We include the equations obtained, together with the source terms due to electron impact ionization and three-body recombination, in our hydrodynamic solver. Model predictions for a test case, an expanding spherical Ar nanoplasma, are obtained. With this model, we complete the two-step approach to simulating x-ray-created nanoplasmas, enabling computationally efficient simulations of their picosecond dynamics. Moreover, the hydrodynamic framework including collisional processes can easily be extended with other source terms and then applied to follow the relaxation of any finite non-isothermal multi-component nanoplasma whose components have relaxed into local thermodynamic equilibrium.

  19. Electron correlations and pre-collision in the re-collision picture of high harmonic generation

    NASA Astrophysics Data System (ADS)

    Mašín, Zdeněk; Harvey, Alex G.; Spanner, Michael; Patchkovskii, Serguei; Ivanov, Misha; Smirnova, Olga

    2018-07-01

    We discuss the seminal three-step model and the re-collision picture in the context of high harmonic generation in molecules. In particular, we stress the importance of multi-electron correlation during the first and the third of the three steps of the process: (1) the strong-field ionization and (3) the recombination. We point out how an accurate account of multi-electron correlations during the third recombination step allows one to gauge the importance of pre-collision: the term coined by Eberly (n.d. private communication) to describe unusual pathways during the first, ionization, step.

  20. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate-model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper for developing low-computational-cost transfer trajectories between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant roles to invariant manifolds or low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization offers clear advantages over direct multi-objective optimization methods.

  1. Multiresponse kinetic modelling of Maillard reaction and caramelisation in a heated glucose/wheat flour system.

    PubMed

    Kocadağlı, Tolgahan; Gökmen, Vural

    2016-11-15

    The study describes the kinetics of the formation and degradation of α-dicarbonyl compounds in a glucose/wheat flour system heated under low-moisture conditions. Changes in the concentrations of glucose, fructose, individual free amino acids, lysine and arginine residues, glucosone, 1-deoxyglucosone, 3-deoxyglucosone, 3,4-dideoxyglucosone, 5-hydroxymethyl-2-furfural, glyoxal, methylglyoxal and diacetyl were determined to build a multiresponse kinetic model for the isomerisation and degradation reactions of glucose. Degradation of the Amadori product mainly produced 1-deoxyglucosone. Formation of 3-deoxyglucosone proceeded directly from glucose and also from Amadori product degradation. Glyoxal formation was predominantly from glucosone, while methylglyoxal and diacetyl originated from 1-deoxyglucosone. Formation of 5-hydroxymethyl-2-furfural from fructose was found to be a key step. Multiresponse kinetic modelling of the Maillard reaction and caramelisation simultaneously indicated the quantitatively predominant parallel and consecutive pathways and the rate-limiting steps by estimating the reaction rate constants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. On the Development of Multi-Step Inverse FEM with Shell Model

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Du, R.

    2005-08-01

    The inverse or one-step finite element approach is increasingly used in the sheet metal stamping industry to predict strain distribution and the initial blank shape in the preliminary design stage. Based on the existing theory, there are two types of methods: one is based on the principle of virtual work and the other on the principle of extreme work. Much research has been conducted to improve the accuracy of simulation results. For example, based on the virtual work principle, Batoz et al. developed a new method using triangular DKT shell elements in which the bending and unbending effects are considered. Based on the principle of extreme work, Majlessi et al. proposed the multi-step inverse approach with membrane elements and applied it to an axisymmetric part. Lee et al. presented an axisymmetric shell element model to solve a similar problem. In this paper, a new multi-step inverse method is introduced with no limitation on the workpiece shape. It is a shell element model based on the virtual work principle. The new method is validated by comparison with a commercial software system (PAMSTAMP®). The comparison results indicate that the accuracy is good.

  3. Step-response of a torsional device with multiple discontinuous non-linearities: Formulation of a vibratory experiment

    NASA Astrophysics Data System (ADS)

    Krak, Michael D.; Dreyer, Jason T.; Singh, Rajendra

    2016-03-01

    A vehicle clutch damper is intentionally designed to contain multiple discontinuous non-linearities, such as multi-staged springs, clearances, pre-loads, and multi-staged friction elements. The main purpose of this practical torsional device is to transmit a wide range of torque while isolating torsional vibration between an engine and transmission. Improved understanding of the dynamic behavior of the device could be facilitated by laboratory measurement, and thus a refined vibratory experiment is proposed. The experiment is conceptually described as a single degree of freedom non-linear torsional system that is excited by an external step torque. The single torsional inertia (consisting of a shaft and torsion arm) is coupled to ground through parallel production clutch dampers, which are characterized by quasi-static measurements provided by the manufacturer. Other experimental objectives address physical dimensions, system actuation, flexural modes, instrumentation, and signal processing issues. Typical measurements show that the step response of the device is characterized by three distinct non-linear regimes (double-sided impact, single-sided impact, and no-impact). Each regime is directly related to the non-linear features of the device and can be described by peak angular acceleration values. Predictions of a simplified single degree of freedom non-linear model verify that the experiment performs well and as designed. Accordingly, the benchmark measurements could be utilized to validate non-linear models and simulation codes, as well as characterize dynamic parameters of the device including its dissipative properties.

  4. A new paper-based platform technology for point-of-care diagnostics.

    PubMed

    Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad

    2014-10-21

    Currently, lateral flow immunoassays (LFIAs) are unable to perform complex multi-step immunodetection tests because they cannot introduce multiple reagents to the detection area autonomously and in a controlled manner. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure forming the fluidic network. Unlike existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and the timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device that autonomously directs three different fluids to flow sequentially over the detection area. As a proof of concept, a three-step alkaline phosphatase-based enzyme-linked immunosorbent assay (ELISA) protocol with rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fm was obtained.

  5. SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, J; Gao, H

    2015-06-15

    Purpose: This work investigates a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first, reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination; then, quantitatively reconstruct the optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. In the novel limited-view multi-source scheme proposed here, the optical coefficients must be reconstructed directly from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable from the incomplete acoustic data. In this work, based on a coupled photoacoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotations of both the optical sources and the ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  6. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    PubMed

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward, as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, with the option of using a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents the patient's history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create two common visualizations of the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.

  7. Flow control using audio tones in resonant microfluidic networks: towards cell-phone controlled lab-on-a-chip devices.

    PubMed

    Phillips, Reid H; Jain, Rahil; Browning, Yoni; Shah, Rachana; Kauffman, Peter; Dinh, Doan; Lutz, Barry R

    2016-08-16

    Fluid control remains a challenge in the development of portable lab-on-a-chip devices. Here, we show that microfluidic networks driven by single-frequency audio tones create resonant oscillating flow that is predicted by equivalent electrical circuit models. We fabricated microfluidic devices with fluidic resistors (R), inductors (L), and capacitors (C) to create RLC networks with band-pass resonance in the audible frequency range available on portable audio devices. Microfluidic devices were fabricated from laser-cut adhesive plastic, and a "buzzer" was glued to a diaphragm (capacitor) to integrate the actuator on the device. The AC flow-rate magnitude was measured by imaging the oscillation of bead tracers to allow direct comparison to the RLC circuit model across the frequency range. We present a systematic build-up from single-channel systems to multi-channel (3-channel) networks, and show that RLC circuit models predict complex frequency-dependent interactions within multi-channel networks. Finally, we show that adding flow-rectifying valves to the network creates pumps that can be driven by amplified and non-amplified audio tones from common audio devices (iPod and iPhone). This work shows that RLC circuit models predict resonant flow responses in multi-channel fluidic networks as a step towards microfluidic devices controlled by audio tones.
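
    For a single series RLC branch, the equivalent-circuit prediction of the band-pass behaviour reduces to a few lines; the element values below are arbitrary placeholders chosen to put the resonance in the audible range, not the device's measured parameters.

      import numpy as np

      # Hypothetical lumped fluidic elements (SI units): R [Pa*s/m^3] (channel
      # resistance), L [Pa*s^2/m^3] (fluid inertia), C [m^3/Pa] (compliance).
      R, L, C = 1.0e9, 5.0e5, 2.0e-13

      f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # band-pass resonant frequency [Hz]
      Q = np.sqrt(L / C) / R                    # quality factor of the branch

      # AC flow-rate magnitude |Q(f)| = |P| / |Z(f)| for a unit pressure drive.
      f = np.linspace(100, 2000, 5)
      Z = R + 1j * 2 * np.pi * f * L + 1 / (1j * 2 * np.pi * f * C)
      print(f"f0 = {f0:.0f} Hz, Q = {Q:.1f}")
      print(np.abs(1.0 / Z))   # flow response peaks near f0 (~500 Hz here)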

  8. Evaluation of Long-Term Cloud-Resolving Model Simulations Using Satellite Radiance Observations and Multi-Frequency Satellite Simulators

    NASA Technical Reports Server (NTRS)

    Matsui, Toshihisa; Zeng, Xiping; Tao, Wei-Kuo; Masunaga, Hirohiko; Olson, William S.; Lang, Stephen

    2008-01-01

    This paper proposes a methodology known as the Tropical Rainfall Measuring Mission (TRMM) Triple-Sensor Three-step Evaluation Framework (T3EF) for the systematic evaluation of precipitating cloud types and microphysics in a cloud-resolving model (CRM). T3EF utilizes multi-frequency satellite simulators and novel statistics of multi-frequency radiance and backscattering signals observed from the TRMM satellite. Specifically, T3EF compares CRM and satellite observations in the form of combined probability distributions of precipitation radar (PR) reflectivity, polarization-corrected microwave brightness temperature (Tb), and infrared Tb to evaluate the candidate CRM. T3EF is used to evaluate the Goddard Cumulus Ensemble (GCE) model for cases involving the South China Sea Monsoon Experiment (SCSMEX) and the Kwajalein Experiment (KWAJEX). This evaluation reveals that the GCE properly captures the satellite-measured frequencies of different precipitating cloud types in the SCSMEX case but underestimates the frequencies of deep convective and deep stratiform types in the KWAJEX case. Moreover, the GCE tends to simulate excessively large and abundant frozen condensates in deep convective clouds, as inferred from the overestimated GCE-simulated radar reflectivities and microwave Tb depressions. Unveiling the detailed errors in the GCE's performance provides the best direction for model improvements.

  9. TEACH-M: A pilot study evaluating an instructional sequence for persons with impaired memory and executive functions.

    PubMed

    Ehlhardt, L A; Sohlberg, M M; Glang, A; Albin, R

    2005-08-10

    The purpose of this pilot study was to evaluate an instructional package that facilitates learning and retention of multi-step procedures for persons with severe memory and executive function impairments resulting from traumatic brain injury. The study used a multiple baseline across participants design. Four participants, two males and two females, ranging in age from 36-58 years, were taught a 7-step e-mail task. The instructional package (TEACH-M) was the experimental intervention and the number of correct e-mail steps learned was the dependent variable. Treatment effects were replicated across the four participants and maintained at 30 days post-treatment. Generalization and social validity data further supported the treatment programme. The results suggest that individuals with severe cognitive impairments are capable of learning new skills. Directions for future research include application of the instructional package to other multi-step procedures.

  10. Multi-contrast MRI registration of carotid arteries based on cross-sectional images and lumen boundaries

    NASA Astrophysics Data System (ADS)

    Wu, Yu-Xia; Zhang, Xi; Xu, Xiao-Pan; Liu, Yang; Zhang, Guo-Peng; Li, Bao-Juan; Chen, Hui-Jun; Lu, Hong-Bing

    2017-02-01

    Ischemic stroke is strongly correlated with carotid atherosclerosis and is mostly caused by vulnerable plaques. Analyzing the components of plaques is therefore particularly important for the detection of vulnerable plaques. Recently, plaque analysis based on multi-contrast magnetic resonance imaging has attracted great attention. Although multi-contrast MR imaging has potential for enhanced demonstration of the carotid wall, its performance is hampered by the misalignment of different imaging sequences. In this study, a coarse-to-fine registration strategy based on cross-sectional images and wall boundaries is proposed to solve the problem. It includes two steps: a rigid step using iterative closest points to register the centerlines of the carotid artery extracted from multi-contrast MR images, and a non-rigid step using thin plate splines to register the lumen boundaries of the carotid artery. In the rigid step, the centerline is extracted by tracking the cross-sectional images along the vessel direction calculated from the Hessian matrix. In the non-rigid step, a shape context descriptor is introduced to find corresponding points on two similar boundaries. In addition, the deterministic annealing technique is used to find a globally optimized solution. The proposed strategy was evaluated on newly developed three-dimensional, fast and high-resolution multi-contrast black-blood MR imaging. Quantitative validation indicated that after registration, the overlap of the two boundaries from different sequences is 95%, and their mean surface distance is 0.12 mm. In conclusion, the proposed algorithm has effectively improved the accuracy of registration for further component analysis of carotid plaques.
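
    The rigid (centerline) step is standard ICP; a compact sketch with brute-force correspondences and a closed-form Kabsch alignment is given below as an illustration of the general technique, not the authors' implementation.

      import numpy as np

      def icp_rigid(src, dst, iters=30):
          """Iterative closest point for two 3D point sets (N x 3 and M x 3):
          alternate nearest-neighbour matching with a closed-form SVD fit.
          Returns (R, t) such that src @ R.T + t approximates dst."""
          src = np.asarray(src, float).copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              # nearest dst point for each src point (brute force)
              d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
              matched = dst[d2.argmin(axis=1)]
              # closed-form rigid fit (Kabsch), with reflection correction
              mu_s, mu_d = src.mean(0), matched.mean(0)
              U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
              D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
              R = Vt.T @ D @ U.T
              t = mu_d - R @ mu_s
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total

      # Example: register a rotated, shifted copy of a helical "centerline".
      s = np.linspace(0, 4 * np.pi, 200)
      dst = np.column_stack([np.cos(s), np.sin(s), 0.1 * s])
      ang = 0.2
      R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                         [np.sin(ang),  np.cos(ang), 0],
                         [0, 0, 1]])
      src = (dst - [0.1, 0.2, 0.0]) @ R_true     # mis-aligned copy
      R_est, t_est = icp_rigid(src, dst)
      print(np.abs(src @ R_est.T + t_est - dst).max())   # should be small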

  11. A colored petri nets based workload evaluation model and its validation through Multi-Attribute Task Battery-II.

    PubMed

    Wang, Peng; Fang, Weining; Guo, Beiyuan

    2017-04-01

    This paper proposes a colored Petri net based workload evaluation model. A formal interpretation of workload is first introduced, based on the mapping of Petri net components to task elements. A Petri net based description of Multiple Resources theory is given by approaching it from a new angle. A new application of the VACP rating scales, named the V/A-C-P unit, and a definition of colored transitions are proposed to build a model of the task process. The calculation of workload has four main steps: determine the tokens' initial positions and values; calculate the weights of the directed arcs on the basis of the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out based on the Multi-Attribute Task Battery-II software. Our results show that there is a strong correlation between the model values and NASA Task Load Index scores (r=0.9513). In addition, this method can also distinguish behavior characteristics between different people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Long-term memory-based control of attention in multi-step tasks requires working memory: evidence from domain-specific interference

    PubMed Central

    Foerster, Rebecca M.; Carbone, Elena; Schneider, Werner X.

    2014-01-01

    Evidence for long-term memory (LTM)-based control of attention has been found during the execution of highly practiced multi-step tasks. However, does LTM control attention directly, or are working memory (WM) processes involved? In the present study, this question was investigated with a dual-task paradigm. Participants executed either a highly practiced visuospatial sensorimotor task (speed stacking) or a verbal task (high-speed poem reciting) while maintaining visuospatial or verbal information in WM. Results revealed unidirectional and domain-specific interference. Neither speed stacking nor high-speed poem reciting was influenced by WM retention. Stacking disrupted the retention of visuospatial locations but did not modify memory performance for verbal material (letters). Reciting reduced the retention of verbal material substantially, whereas it affected memory performance for visuospatial locations to a smaller degree. We suggest that the selection of task-relevant information from LTM for the execution of overlearned multi-step tasks recruits domain-specific WM. PMID:24847304

  13. A Fuzzy Goal Programming for a Multi-Depot Distribution Problem

    NASA Astrophysics Data System (ADS)

    Nunkaew, Wuttinan; Phruksaphanrat, Busaba

    2010-10-01

    A fuzzy goal programming model for solving the Multi-Depot Distribution Problem (MDDP) is proposed in this research. The proposed model is applied in the first step of the Assignment First-Routing Second (AFRS) approach. In practice, a basic transportation model is first used to solve this kind of problem in the assignment step; the Vehicle Routing Problem (VRP) model is then used to compute the delivery cost in the routing step. However, the basic transportation model considers only the depot-to-customer relationship. The customer-to-customer relationship should also be considered, since it arises in the routing step. Both relationships are handled here using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set on the total transportation cost and the second on a satisfactory level of the overall independence value. A case study is used to demonstrate the effectiveness of the proposed model. Results from the proposed model are compared with the basic transportation model previously used in the company. The proposed model can reduce the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions is more realistic than using crisp goals. Furthermore, the flexibility to adjust goals and an acceptable satisfaction level for the decision maker can be increased, and the optimal solution can still be obtained.

  14. Approximate analytical modeling of leptospirosis infection

    NASA Astrophysics Data System (ADS)

    Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani

    2017-11-01

    Leptospirosis is an infectious disease carried by rodents that can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, is used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; Nichols III, A L

    The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to the reduced calibration effort required. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with the ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with the ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% for a Prout-Tompkins cookoff model.

  16. Adaptive MPC based on MIMO ARX-Laguerre model.

    PubMed

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

    This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is obtained by projecting the ARX model onto Laguerre bases. The resulting model, termed MIMO ARX-Laguerre, has a convenient recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned in each iteration by online identification algorithms for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
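
    The idea of multi-step-ahead prediction can be illustrated for a plain SISO ARX model; the MIMO ARX-Laguerre machinery is more involved, so this sketch (with made-up coefficients) only shows how h-step predictions are chained by substituting earlier predictions for the unknown future outputs, as a predictive-control cost function requires.

      import numpy as np

      def arx_predict_multistep(a, b, y_hist, u_future, horizon):
          """h-step-ahead prediction for an ARX model
          y(t) = sum_i a[i]*y(t-1-i) + sum_j b[j]*u(t-1-j).
          y_hist ends at y(t-1); u_future[k] is u(t-1+k), e.g. a candidate
          input sequence; preds[h] approximates y(t+h). Input terms that fall
          before the supplied window are taken as zero for simplicity."""
          y = list(y_hist)
          u = list(u_future)
          preds = []
          for h in range(horizon):
              yp = sum(ai * y[-1 - i] for i, ai in enumerate(a)) \
                 + sum(bj * u[h - j] for j, bj in enumerate(b) if 0 <= h - j < len(u))
              preds.append(yp)
              y.append(yp)    # feed the prediction back for the next step
          return np.array(preds)

      # Second-order example: y(t) = 1.2 y(t-1) - 0.36 y(t-2) + 0.5 u(t-1)
      print(arx_predict_multistep(a=[1.2, -0.36], b=[0.5],
                                  y_hist=[0.0, 0.1], u_future=[1.0] * 5, horizon=5))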

  17. SpineCreator: a Graphical User Interface for the Creation of Layered Neural Models.

    PubMed

    Cope, A J; Richmond, P; James, S S; Gurney, K; Allerton, D J

    2017-01-01

    There is a growing requirement in computational neuroscience for tools that permit collaborative model building, model sharing, and the combining of existing models into a larger system (multi-scale model integration), and that are able to simulate models using a variety of simulation engines and hardware platforms. Layered XML model specification formats solve many of these problems; however, they are difficult to write and visualise without tools. Here we describe a new graphical software tool, SpineCreator, which facilitates the creation and visualisation of layered models of point spiking neurons or rate-coded neurons without requiring any programming. We demonstrate the tool through the reproduction and visualisation of published models, and show simulation results using code generation interfaced directly into SpineCreator. As a unique application for the graphical creation of neural networks, SpineCreator represents an important step forward for neuronal modelling.

  18. Three-dimensional photonic crystals created by single-step multi-directional plasma etching.

    PubMed

    Suzuki, Katsuyoshi; Kitano, Keisuke; Ishizaki, Kenji; Noda, Susumu

    2014-07-14

    We fabricate 3D photonic nanostructures by simultaneous multi-directional plasma etching. This simple and flexible method is enabled by controlling the ion sheath in reactive-ion-etching equipment. We realize 3D photonic crystals on single-crystalline silicon wafers and show high reflectance (> 95%) and low transmittance (< -15 dB) at optical communication wavelengths, suggesting the formation of a complete photonic bandgap. Moreover, our method readily yields Si-based 3D photonic crystals that show the photonic bandgap effect in a shorter wavelength range around 0.6 μm, where still finer structures are required.

  19. Quasi-multi-pulse voltage source converter design with two control degrees of freedom

    NASA Astrophysics Data System (ADS)

    Vural, A. M.; Bayindir, K. C.

    2015-05-01

    In this article, the design details of a quasi-multi-pulse voltage source converter (VSC) switched at line frequency of 50 Hz are given in a step-by-step process. The proposed converter is comprised of four 12-pulse converter units, which is suitable for the simulation of single-/multi-converter flexible alternating current transmission system devices as well as high voltage direct current systems operating at the transmission level. The magnetic interface of the converter is originally designed with given all parameters for 100 MVA operation. The so-called two-angle control method is adopted to control the voltage magnitude and the phase angle of the converter independently. PSCAD simulation results verify both four-quadrant converter operation and closed-loop control of the converter operated as static synchronous compensator (STATCOM).

  20. Implementing the Indiana Model. Indiana Leadership Consortium: Equity through Change.

    ERIC Educational Resources Information Center

    Indiana Leadership Consortium.

    This guide, which was developed as a part of a multi-year, statewide effort to institutionalize gender equity in various educational settings throughout Indiana, presents a step-by-step process model for achieving gender equity in the state's secondary- and postsecondary-level vocational programs through coalition building and implementation of a…

  1. Multi-Sided Markets for Transforming Healthcare Service Delivery.

    PubMed

    Kuziemsky, Craig; Vimarlund, Vivian

    2018-01-01

    Changes in healthcare delivery needs have necessitated the design of new models for connecting providers and consumers of services. While healthcare delivery has traditionally been a push market, multi-sided markets offer the potential for transitioning to a pull market for service delivery. However, there is a need to better understand the business model for multi-sided markets as a first step to using them in healthcare. This paper addresses that need and describes a multi-sided market evaluation framework. Our framework identifies patient, governance and service delivery as three levels of brokerage consideration for evaluating multi-sided markets in healthcare.

  2. Inverting the planning gradient: adjustment of grasps to late segments of multi-step object manipulations.

    PubMed

    Mathew, Hanna; Kunde, Wilfried; Herbort, Oliver

    2017-05-01

    When someone grasps an object, the grasp depends on the intended object manipulation and usually facilitates it. If several object manipulation steps are planned, the first step has been reported to primarily determine the grasp selection. We address whether the grasp can be aligned to the second step if the second step's requirements exceed those of the first step. Participants grasped and rotated a dial first by a small extent and then by various extents in the opposite direction, without releasing the dial. On average, when the requirements of the first and the second step were similar, participants mostly aligned the grasp to the first step. When the requirements of the second step were considerably higher, participants aligned the grasp to the second step, even though the first step still had a considerable impact. Participants employed two different strategies. One subgroup initially aligned the grasp to the first step and then ceased adjusting the grasp to either step. Another group also initially aligned the grasp to the first step and then switched to aligning it primarily to the second step. The data suggest that participants are more likely to switch to the latter strategy when they had experienced more awkward arm postures. In summary, grasp selections for multi-step object manipulations can be aligned to the second object manipulation step, if the requirements of this step clearly exceed those of the first step and if participants have some experience with the task.

  3. Multi-step production of a diphoton resonance

    NASA Astrophysics Data System (ADS)

    Dobrescu, Bogdan A.; Fox, Patrick J.; Kearney, John

    2017-06-01

    Among the questions that would be raised by the observation of a new resonance at the LHC, particularly pressing are those concerning the production mechanism: What is the initial state? Is the resonance produced independently or in association with other particles? Here we present two weakly-coupled renormalizable models for production of a diphoton resonance that differ in both their initial and final states. In one model, a scalar particle produced through gluon fusion decays into a diphoton particle and a light, long-lived pseudoscalar. In the other model, a Z′ boson produced from the annihilation of a strange-antistrange quark pair undergoes a cascade decay that leads to a diphoton particle and two sterile neutrinos. Various kinematic distributions may differentiate these models from the canonical model, where a diphoton particle is directly produced in gluon fusion.

  4. Quantitative Comparisons of a Coarse-Grid LES with Experimental Data for Backward-Facing Step Flow

    NASA Astrophysics Data System (ADS)

    McDonough, J. M.

    1999-11-01

    A novel approach to LES employing an additive decomposition of both solutions and governing equations (similar to the ``multi-level'' approaches of Dubois et al., Dynamic Multilevel Methods and the Simulation of Turbulence, Cambridge University Press, 1999) is presented; its main structural features are the absence of filtering of the governing equations (instead, solutions are filtered to remove aliasing due to under-resolution) and direct modeling of subgrid-scale primitive variables (rather than modeling their correlations) in the manner proposed by Hylin and McDonough (Int. J. Fluid Mech. Res. 26, 228-256, 1999). A 2-D implementation of this formalism is applied to the backward-facing step flow studied experimentally by Driver and Seegmiller (AIAA J. 23, 163-171, 1985) and Driver et al. (AIAA J. 25, 914-919, 1987), and run on grids sufficiently coarse to permit easy extension to 3-D, industrially realistic problems. Comparisons of computed and experimental mean quantities (velocity profiles, turbulence kinetic energy, reattachment lengths, etc.) and effects of grid refinement will be presented.

  5. A multi-directional and multi-scale roughness filter to detect lineament segments on digital elevation models - analyzing spatial objects in R

    NASA Astrophysics Data System (ADS)

    Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke

    2016-04-01

    Automated lineament analysis on remotely sensed data requires two general process steps: the identification of neighboring pixels showing high contrast, and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments, defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Any deviation from a flat plane, subject to a roughness threshold, is considered relief roughness. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and differently orientated straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance between the edge cells. Thus multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative values of the roughness parameter represent concave relief structures such as valleys; positive values represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter, highlighting maximum and minimum roughness peaks and representing the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences. We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds to the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
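
    A minimal NumPy sketch of the roughness measure as we read it follows (single scale, four directions; the authors' tool is in R and we have not seen its code, so the array layout, the synthetic DEM and the parameter names are our assumptions):

    ```python
    import numpy as np

    def roughness(dem, half, cellsize):
        """Max/min roughness over four directions (E-W, N-S, two diagonals).

        Roughness = (center elevation - straight line between the two opposite
        edge cells of the neighborhood) / horizontal distance of the edge cells.
        """
        h, w = dem.shape
        ys, xs = np.mgrid[0:h, 0:w]
        vals = []
        for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
            y1, x1 = ys - dy * half, xs - dx * half     # one edge cell
            y2, x2 = ys + dy * half, xs + dx * half     # opposite edge cell
            ok = (y1 >= 0) & (y1 < h) & (y2 >= 0) & (y2 < h) & \
                 (x1 >= 0) & (x1 < w) & (x2 >= 0) & (x2 < w)
            line = np.full(dem.shape, np.nan)
            line[ok] = 0.5 * (dem[y1[ok], x1[ok]] + dem[y2[ok], x2[ok]])
            dist = 2 * half * cellsize * (np.sqrt(2) if dx and dy else 1.0)
            vals.append((dem - line) / dist)   # >0 convex (ridge), <0 concave (valley)
        stack = np.stack(vals)
        # border cells where no direction fits stay NaN
        return np.nanmax(stack, axis=0), np.nanmin(stack, axis=0)

    dem = np.random.default_rng(0).random((60, 60)).cumsum(axis=0)  # synthetic surface
    rmax, rmin = roughness(dem, half=3, cellsize=10.0)
    ```

    Scanning `half` over several neighborhood sizes gives the multi-scale variant described in the abstract.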

  6. Ascending Stairway Modeling: A First Step Toward Autonomous Multi-Floor Exploration

    DTIC Science & Technology

    2012-10-01

    Many robotics platforms are capable of ascending stairways, but all existing approaches for autonomous stair climbing use stairway detection as a... the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor building. Our proposed solution to this problem is... over several steps. However, many ground robots are not capable of traversing tight spiral stairs, and so we do not focus on these types. The stairway is...

  7. An online-coupled NWP/ACT model with conserved Lagrangian levels

    NASA Astrophysics Data System (ADS)

    Sørensen, B.; Kaas, E.; Lauritzen, P. H.

    2012-04-01

    Numerical weather and climate modelling is under constant development. Semi-implicit semi-Lagrangian (SISL) models have proven to be numerically efficient in both short-range weather forecasts and climate models, due to their ability to use long time steps. Chemical/aerosol feedback mechanisms are becoming more and more relevant in NWP as well as climate models, since biogenic and anthropogenic emissions can have a direct effect on the dynamics and radiative properties of the atmosphere. To include chemical feedback mechanisms in NWP models, on-line coupling is crucial. In 3D semi-Lagrangian schemes with quasi-Lagrangian vertical coordinates, the Lagrangian levels are remapped to Eulerian model levels at each time step. This remapping tends to smooth sharp gradients and creates unphysical numerical diffusion in the vertical distribution. A semi-Lagrangian advection method is introduced that combines an inherently mass-conserving 2D semi-Lagrangian scheme with a SISL scheme employing both hybrid vertical coordinates and a fully Lagrangian vertical coordinate. This minimizes vertical diffusion and thus potentially improves the simulation of the vertical profiles of moisture, clouds, and chemical constituents. Since the Lagrangian levels suffer from the traditional Lagrangian limitations caused by convergence and divergence of the flow, remappings to the Eulerian model levels are generally still required, but they need only be applied after a number of time steps, unless dynamic remapping methods are used. For this, several different remapping methods have been implemented. The combined scheme is mass conserving, consistent, and multi-tracer efficient.
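
    For readers unfamiliar with the scheme family, here is a one-dimensional toy of a single semi-Lagrangian advection step (our illustration only; the constant wind, periodic domain and linear interpolation are assumptions, not the paper's configuration):

    ```python
    import numpy as np

    def sl_step(q, u, dt, dx):
        """One semi-Lagrangian advection step for field q, constant wind u, periodic grid."""
        n = q.size
        x = np.arange(n) * dx
        x_dep = (x - u * dt) % (n * dx)        # departure points traced back along the wind
        i = np.floor(x_dep / dx).astype(int)
        w = x_dep / dx - i                     # linear interpolation weight
        return (1 - w) * q[i] + w * q[(i + 1) % n]

    q = np.exp(-0.5 * ((np.arange(200) - 50.0) / 8.0) ** 2)   # Gaussian tracer blob
    for _ in range(100):
        q = sl_step(q, u=1.0, dt=2.5, dx=1.0)  # Courant number 2.5: still stable
    ```

    The interpolation at the departure point is exactly the kind of remapping diffusion the abstract seeks to minimize in the vertical; the scheme nonetheless remains stable at Courant numbers above one, which is what permits the long SISL time steps.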

  8. Influence of Natural Convection and Thermal Radiation Multi-Component Transport in MOCVD Reactors

    NASA Technical Reports Server (NTRS)

    Lowry, S.; Krishnan, A.; Clark, I.

    1999-01-01

    The influence of Grashof and Reynolds numbers in Metal Organic Chemical Vapor Deposition (MOCVD) reactors is being investigated in a combined empirical/numerical study. As part of that research, the deposition of indium phosphide in an MOCVD reactor is modeled using the computational code CFD-ACE. The model includes the effects of convection, conduction, and radiation, as well as multi-component diffusion and multi-step surface/gas-phase chemistry. The results of the prediction are compared with experimental data for a commercial reactor and analyzed with respect to the model accuracy.

  9. High frequency copolymer ultrasonic transducer array of size-effective elements

    NASA Astrophysics Data System (ADS)

    Decharat, Adit; Wagle, Sanat; Habib, Anowarul; Jacobsen, Svein; Melandsø, Frank

    2018-02-01

    A layer-by-layer deposition method for producing dual-layer ultrasonic transducers from piezoelectric copolymers has been developed. The method uses a combination of customized and standard processing to obtain 2D array transducers with the electrical connection of the individual elements routed directly to the rear of the substrate. A numerical model was implemented to study the basic parameters affecting the transducer characteristics. Key elements of the array were characterized and evaluated, demonstrating its viability for 2D imaging. Signal reproducibility of the prototype array was studied by characterizing the variations of the center frequency (≈42 MHz) and bandwidth (≈25 MHz) of the acoustic signal. Object identification was also tested and parameterized by acoustic-field beamwidth as well as proper scan step size. Simple tests illustrating the benefit of multi-element scanning in lowering the inspection time were conducted. Structural imaging of a test structure underneath multi-layered media (a glass plate and distilled water) was also performed. The prototype presented in this work is an important step towards realizing an inexpensive, compact array of individually operated copolymer transducers that can serve in a fast/volumetric high-frequency (HF) ultrasonic scanning platform.

  10. Multi-Level Sequential Pattern Mining Based on Prime Encoding

    NASA Astrophysics Data System (ADS)

    Lianglei, Sun; Yun, Li; Jiang, Yin

    Encoding serves not only to express the hierarchical relationship but also to facilitate the identification of relationships between different levels, which directly affects the efficiency of algorithms for mining multi-level sequential patterns. In this paper, we prove that a single division operation can decide the parent-child relationship between different levels when prime encoding is used, and we present the PMSM and CROSS-PMSM algorithms, based on prime encoding, for mining multi-level and cross-level sequential patterns, respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from the sequence database.
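
    The divisibility trick is easy to demonstrate. Below is a hedged reconstruction of the encoding idea (our toy, not the PMSM/CROSS-PMSM implementation; the example hierarchy is invented):

    ```python
    def primes():
        """Naive prime generator, fine for small hierarchies."""
        found = []
        n = 2
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def encode(hierarchy, root):
        """hierarchy: dict node -> children. Returns dict node -> integer code."""
        gen, codes = primes(), {root: 2}
        next(gen)                 # the prime 2 is used by the root
        stack = [root]
        while stack:
            node = stack.pop()
            for child in hierarchy.get(node, []):
                codes[child] = codes[node] * next(gen)  # parent's code times a fresh prime
                stack.append(child)
        return codes

    h = {"food": ["drink", "bread"], "drink": ["milk", "beer"]}
    codes = encode(h, "food")
    assert codes["milk"] % codes["food"] == 0    # ancestor: one division decides it
    assert codes["milk"] % codes["bread"] != 0   # not an ancestor
    ```

    Because every code is a product of distinct primes along the path from the root, `codes[y] % codes[x] == 0` holds exactly when x is an ancestor of y, so checking a level relationship costs one division.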

  11. Macro-fingerprint analysis-through-separation of licorice based on FT-IR and 2DCOS-IR

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Wang, Ping; Xu, Changhua; Yang, Yan; Li, Jin; Chen, Tao; Li, Zheng; Cui, Weili; Zhou, Qun; Sun, Suqin; Li, Huifen

    2014-07-01

    In this paper, a step-by-step analysis-through-separation method guided by a multi-step IR macro-fingerprint (FT-IR integrated with second-derivative IR (SD-IR) and 2DCOS-IR) was developed for comprehensively characterizing the hierarchical chemical fingerprints of licorice, from the entirety down to single active components. The chemical profile variation rules of three parts (flavonoids, saponins and saccharides) in the separation process were holistically revealed, and the number of matching peaks and the correlation coefficients with standards of pure compounds increased along the extraction directions. The findings were supported by UPLC results and by a verification experiment on the aqueous separation process. It has been demonstrated that the developed multi-step IR macro-fingerprint analysis-through-separation approach can be a rapid, effective and integrated method, not only for objectively providing a comprehensive chemical characterization of licorice and all its separated parts, but also for rapidly revealing the global enrichment trend of the active components in the licorice separation process.

  12. Scheduling and Pricing for Expected Ramp Capability in Real-Time Power Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ela, Erik; O'Malley, Mark

    2016-05-01

    Higher variable renewable generation penetrations are occurring throughout the world on different power systems. These resources increase the variability and uncertainty on the system, which must be accommodated by an increase in the flexibility of the system resources in order to maintain reliability. Many scheduling strategies have been discussed and introduced to ensure that this flexibility is available at multiple timescales. To meet variability, that is, the expected changes in system conditions, two recent strategies have been introduced: time-coupled multi-period market clearing models and the incorporation of ramp capability constraints. To appropriately evaluate these methods, it is important to assess both efficiency and reliability. But it is also important to assess the incentive structure to ensure that resources asked to perform in different ways have the proper incentives to follow these directions, a step often ignored in simulation studies. We find that there are advantages and disadvantages to both approaches. We also find that the look-ahead horizon length in multi-period market models can impact incentives. This paper proposes scheduling and pricing methods that ensure expected ramps are met reliably, efficiently, and with associated prices based on true marginal costs that incentivize resources to do as directed by the market. Case studies show improvements of the new method.

  13. Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Peng, Wei; Liu, Song

    2017-10-01

    Neighbor discovery is an important step in Wireless Local Area Networks (WLANs), and the use of multi-beam directional antennas can greatly improve network performance. However, most neighbor discovery algorithms in WLANs based on multi-beam directional antennas work effectively only in synchronous systems, not in asynchronous ones, and collisions at the AP remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: asynchronous hierarchical scanning (AHS) and asynchronous directional scanning (ADS). Both are based on a three-way handshaking mechanism. AHS and ADS reduce collisions at the AP, achieving good performance in a hierarchical and a directional way, respectively. Finally, the performance of AHS and ADS is tested on OMNeT++, and we analyze the application scenarios and the factors that affect the performance of these algorithms. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable when most of the neighboring nodes are far from the AP.

  14. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

    For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision-making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and for municipal and industrial (M&I) use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, its constraints and objectives, in any time step is conditional: it changes based on the values of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance of any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short-term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is underway on a wrapper that will allow a genetic algorithm to improve the form of the rule (conditions, constraints, and short-term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long-term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, together with rules that set those objectives based on current conditions while accounting for uncertainty, at least implicitly. The author asserts that real-world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real-world examples.
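
    To ground the per-time-step idea, here is a deliberately tiny single-step allocation posed as a linear program with SciPy (not the OASIS/MILP formulation; the weights, demands and reservoir numbers are invented, and integer variables are omitted):

    ```python
    from scipy.optimize import linprog

    # One time step of a toy allocation "rule": all numbers are assumptions.
    inflow, storage0 = 80.0, 500.0            # Mm3 this step
    demand_city, demand_farm = 60.0, 90.0     # delivery caps (Mm3)
    w_city, w_farm, w_store = 5.0, 2.0, 0.1   # surrogate objective weights

    # Decision vector x = [city delivery, farm delivery, end-of-step storage].
    c = [-w_city, -w_farm, -w_store]          # maximize weighted benefits
    A_eq = [[1.0, 1.0, 1.0]]                  # mass balance: deliveries + carryover
    b_eq = [storage0 + inflow]
    bounds = [(0, demand_city), (0, demand_farm), (0, 600.0)]  # 600 = reservoir capacity

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    city, farm, carryover = res.x             # -> 60, 90, 430 for these numbers
    ```

    An operating rule in the sense above would then be whatever logic resets the weights, bounds and constraints, conditional on the current state, before the next step's solve.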

  15. Improved multi-level capability in Si3N4-based resistive switching memory using continuous gradual reset switching

    NASA Astrophysics Data System (ADS)

    Kim, Sungjun; Park, Byung-Gook

    2017-01-01

    In this letter, we compare three different types of reset switching behavior in a bipolar resistive random-access memory (RRAM) system housed in a Ni/Si3N4/Si structure. The abrupt, step-like gradual and continuous gradual reset transitions are largely determined by the low-resistance state (LRS). For abrupt reset switching, the large conducting path shows ohmic behavior or weakly nonlinear current-voltage (I-V) characteristics in the LRS. For gradual switching, including both the step-like and continuous reset types, trap-assisted direct tunneling is dominant in the low-voltage regime, while trap-assisted Fowler-Nordheim tunneling is dominant in the high-voltage regime, causing nonlinear I-V characteristics. More importantly, we evaluate the multi-level capabilities of the two gradual switching types, step-like and continuous reset, using identical and incremental voltage conditions. Finer control of the conductance level, with good uniformity, is achieved in continuous gradual reset switching compared to step-like gradual reset switching. For continuous reset switching, a single conducting path, which initially has a tunneling gap, gradually responds to pulses of even, identical amplitudes, while for step-like reset switching, the multiple conducting paths respond only to incremental pulses to obtain effective multi-level states.

  16. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, the ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate markedly from the multi-Gaussian results even when the ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones, depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent, although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.

  17. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to the underlying cellular microstructure. A large range of these diffusion microstructure models has been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy for estimating its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges for the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would be competitive in both run time and fit quality. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run-time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects from each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step using simpler models from an earlier optimization step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes, and makes available, standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and for combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
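
    The cascade idea is easy to mimic with SciPy's Powell implementation. The following is a minimal sketch under assumed synthetic data and toy one- and two-compartment models, not the authors' GPU toolbox or the NODDI/CHARMED models:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    b = np.array([0.0, 1000.0, 2000.0, 3000.0])           # b-values (s/mm^2), assumed
    signal = 0.8 * np.exp(-b * 0.7e-3) + 0.2              # synthetic two-compartment data

    def simple_sse(p):                                    # step 1: mono-exponential model
        s0, d = p
        return np.sum((signal - s0 * np.exp(-b * d)) ** 2)

    def complex_sse(p):                                   # step 2: extra "dot" fraction f
        s0, d, f = p
        return np.sum((signal - s0 * ((1 - f) * np.exp(-b * d) + f)) ** 2)

    step1 = minimize(simple_sse, x0=[1.0, 1e-3], method="Powell")
    s0, d = step1.x
    step2 = minimize(complex_sse, x0=[s0, d, 0.1], method="Powell")   # cascade init
    print(step2.x)                                        # close to [1.0, 7e-4, 0.2]
    ```

    Seeding the complex model with the simpler model's estimates is the "fitting cascade" the note finds beneficial for run time and precision.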

  18. 48 CFR 15.202 - Advisory multi-step process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204)... participate in the acquisition. This process should not be used for multi-step acquisitions where it would...

  19. Numerical study of the direct pressure effect of acoustic waves in planar premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, H.; Jimenez, C.

    Recently, the unsteady response of 1-D premixed flames to acoustic pressure waves, for frequencies below and above the inverse of the flame transit time, was investigated experimentally using OH chemiluminescence (Wangher, 2008). The authors compared the frequency dependence of the measured response to the prediction of an analytical model proposed by Clavin et al. (1990), derived from the standard flame model (one-step Arrhenius kinetics), and to a similar model proposed by McIntosh (1991). Discrepancies between the experimental results and the models led to the conclusion that the standard model does not provide an adequate description of the unsteady response of real flames and that it is necessary to investigate more realistic chemical models. Here we follow exactly this suggestion and perform numerical studies of the response of lean methane flames using different reaction mechanisms. We find that the global flame response obtained with both detailed chemistry (GRI3.0) and a reduced multi-step model by Peters (1996) lies slightly above the predictions of the analytical model, but is close to the experimental results. We additionally used an irreversible one-step Arrhenius reaction model and show the effect of the pressure dependence of the global reaction rate on the flame response. Our results suggest, first, that the current models have to be extended to capture the amplitude and phase results of the detailed mechanisms, and second, that the correlation between the heat release and the measured OH* chemiluminescence should be studied more deeply. (author)

  20. Multi-scale modelling of non-uniform consolidation of uncured toughened unidirectional prepregs

    NASA Astrophysics Data System (ADS)

    Sorba, G.; Binetruy, C.; Syerko, E.; Leygue, A.; Comas-Cardona, S.; Belnoue, J. P.-H.; Nixon-Pearson, O. J.; Ivanov, D. S.; Hallett, S. R.; Advani, S. G.

    2018-05-01

    Consolidation is a crucial step in the manufacturing of composite parts with prepregs because its role is to eliminate inter- and intra-ply gaps and porosity. Some thermoset prepreg systems are toughened with thermoplastic particles. Depending on their size, thermoplastic particles can be either located between plies or distributed within the inter-fibre regions. When subjected to transverse compaction, resin will bleed out of low-viscosity unidirectional prepregs along the fibre direction, whereas transverse squeeze flow is expected to dominate for higher-viscosity prepregs. Recent experimental work showed that the consolidation of uncured toughened prepregs involves complex flow and deformation mechanisms in which both bleeding and squeeze flow patterns are observed [1]. Micrographs of compacted and cured samples confirm these features, as shown in Fig. 1. A phenomenological model was proposed [2] in which bleeding flow and squeeze flow are combined, together with a criterion for the transition from shear flow to resin bleeding. However, the micrographs also reveal a resin-rich layer between plies, which may contribute to the complex flow mechanisms during the consolidation process. In an effort to provide additional insight into these complex mechanisms, this work focuses on the 3D numerical modelling of the compaction of uncured toughened prepregs in the cross-ply configuration described in [1]. A transversely isotropic fluid model is used to describe the flow behaviour of the plies, coupled with inter-ply flow of an isotropic resin. The multi-scale flow model used is based on [3, 4]. A numerical parametric study is carried out in which the resin viscosity, permeability and inter-ply thickness are varied to identify the role of the important variables. The squeezing flow and the bleeding flow are compared over a range of process parameters to investigate the coupling and competition between the two flow mechanisms. Figure 4 shows the predicted displacement of the sample edge with the multi-scale compaction model after one time step [3]. The ply distortion and resin flow observed in Fig. 1 are qualitatively retrieved by the computational model.

  1. Evaluation of accuracy in implant site preparation performed in single- or multi-step drilling procedures.

    PubMed

    Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus

    2018-06-01

    Dental implant failure and insufficient osseointegration are proven consequences of mechanical and thermal damage during surgery. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multi-step drilling sequence. The accuracy of the drill holes was precisely analyzed, and the influence of the handlers' level of expertise and of additional drill-template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling actions in a stepwise drilling procedure compared with a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge and three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step rather than the multi-step technique. Single-step drilling protocols were shown to produce more accurate results than multi-step procedures. The outcome of either protocol can be further improved by the use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby enables a reduction of hands-on time and side effects during surgery and leads to a more predictable clinical diameter.

  2. Multi-step process for concentrating magnetic particles in waste sludges

    DOEpatents

    Watson, John L.

    1990-01-01

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed of.

  3. Multi-step process for concentrating magnetic particles in waste sludges

    DOEpatents

    Watson, J.L.

    1990-07-10

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed of. 7 figs.

  4. Application of the Multi-Doorway Continuum Shell Model to the Magnetic Dipole Strength Distribution in 58Ni

    NASA Astrophysics Data System (ADS)

    Spangenberger, H.; Beck, F.; Richter, A.

    The usual continuum shell model is extended so as to include a statistical treatment of multi-doorway processes. The total configuration space of the nuclear reaction problem is subdivided into the primary doorway states, which are coupled by the initial excitation to the nuclear ground state, and the secondary doorway states, which represent the complicated nature of multi-step reactions. While the primary doorway states and their excitations can be described by standard shell-model methods, the secondary doorway states and their couplings are evaluated within the exciton model, which gives the coupling widths between the various fine-structure subspaces. This coupling is determined by a statistical factor related to the exciton model and a dynamical factor given by the interaction matrix elements of the interacting excitons. The whole structure defines the multi-doorway continuum shell model. In this work it is applied to the highly fragmented magnetic dipole strength in 58Ni observed in high-resolution electron scattering.

  5. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate estimation, inference, and forecasting for functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge-regression-shrinkage type of global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection within a mixed-model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different degrees of smoothness for different functional coefficients, enabled by assigning different penalties λ accordingly.
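
    The ridge-type character of the P-spline fit is visible in a few lines of NumPy/SciPy. This is our own minimal sketch for a single smooth function, not the paper's functional-coefficient estimator; the basis order, knot count and λ are arbitrary choices, and `BSpline.design_matrix` needs SciPy 1.8 or newer.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

    k, segments = 3, 20                                    # cubic basis over 20 segments
    knots = np.r_[[0.0] * k, np.linspace(0.0, 1.0, segments + 1), [1.0] * k]
    B = BSpline.design_matrix(x, knots, k).toarray()       # (200, n_basis) basis matrix
    n_basis = B.shape[1]
    D = np.diff(np.eye(n_basis), n=2, axis=0)              # 2nd-order difference penalty

    lam = 1.0                                              # smoothing parameter (CV/GCV/REML)
    beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    y_hat = B @ beta                                       # shrunken, ridge-type fit
    ```

    Assigning a separate λ per coefficient function, as the abstract notes, simply means one penalty block per basis in the stacked system.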

  6. Large-scale modeling on the fate and transport of polycyclic aromatic hydrocarbons (PAHs) in multimedia over China

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.

    2017-12-01

    In recent decades, with rapid economic growth, industrial development and urbanization, pollution by polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, monitoring of PAHs across multiple environmental compartments, and of the corresponding multi-interface migration processes, is still limited, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to account for fugacity and transient contamination processes. This coupled dynamic contaminant model can evaluate detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (county to country) and temporal (days to years) scales. The model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution and considers the response characteristics of typical environmental media to a complex underlying surface. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAH loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and the Pearl River Delta. This model can provide a good scientific basis for a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. In a next step, the dynamic contaminant model will be integrated with a continental-scale hydrological and water resources model (the Community Water Model, CWatM) to quantify a more accurate representation of, and the feedbacks between, the hydrological cycle and water quality over even larger geographical domains. Keywords: PAHs; Community multi-scale air quality model; Multimedia fate model; Land use

  7. Testing the methodology for dosimetry audit of heterogeneity corrections and small MLC-shaped fields: Results of IAEA multi-center studies

    PubMed Central

    Izewska, Joanna; Wesolowska, Paulina; Azangwe, Godfrey; Followill, David S.; Thwaites, David I.; Arib, Mehenna; Stefanic, Amalia; Viegas, Claudio; Suming, Luo; Ekendahl, Daniela; Bulski, Wojciech; Georg, Dietmar

    2016-01-01

    The International Atomic Energy Agency (IAEA) has a long tradition of supporting the development of methodologies for national networks providing quality audits in radiotherapy. A series of co-ordinated research projects (CRPs) has been conducted by the IAEA since 1995, assisting national external audit groups in developing national audit programs. The CRP 'Development of Quality Audits for Radiotherapy Dosimetry for Complex Treatment Techniques' was conducted in 2009-2012 as an extension of previously developed audit programs. Material and methods. The CRP work described in this paper focused on developing and testing two steps of dosimetry audit: verification of heterogeneity corrections, and treatment planning system (TPS) modeling of small MLC fields, which are important for the initial stages of complex radiation treatments, such as IMRT. The project involved the development of a new solid slab phantom with heterogeneities containing special measurement inserts for thermoluminescent dosimeters (TLD) and radiochromic films. The phantom and the audit methodology have been developed at the IAEA and tested in multi-center studies involving the CRP participants. Results. The results of multi-center testing of the methodology for the two steps of dosimetry audit show that the design of the audit procedures is adequate and the methodology is feasible for meeting the audit objectives. A total of 97% of TLD results in heterogeneity situations obtained in the study were within 3%, and all results were within 5% agreement with the TPS-predicted doses. In contrast, only 64% of small beam profiles were within 3 mm agreement between the TPS-calculated and film-measured doses. Film dosimetry results have highlighted some limitations in TPS modeling of small beam profiles in the direction of MLC leaf movement. Discussion. Through multi-center testing, any challenges or difficulties in the proposed audit methodology were identified, and the methodology was improved. Using the experience of these studies, the participants could incorporate the auditing procedures into their national programs. PMID:26934916

  8. Testing the methodology for dosimetry audit of heterogeneity corrections and small MLC-shaped fields: Results of IAEA multi-center studies.

    PubMed

    Izewska, Joanna; Wesolowska, Paulina; Azangwe, Godfrey; Followill, David S; Thwaites, David I; Arib, Mehenna; Stefanic, Amalia; Viegas, Claudio; Suming, Luo; Ekendahl, Daniela; Bulski, Wojciech; Georg, Dietmar

    2016-07-01

    The International Atomic Energy Agency (IAEA) has a long tradition of supporting the development of methodologies for national networks providing quality audits in radiotherapy. A series of co-ordinated research projects (CRPs) has been conducted by the IAEA since 1995, assisting national external audit groups in developing national audit programs. The CRP 'Development of Quality Audits for Radiotherapy Dosimetry for Complex Treatment Techniques' was conducted in 2009-2012 as an extension of previously developed audit programs. The CRP work described in this paper focused on developing and testing two steps of dosimetry audit: verification of heterogeneity corrections, and treatment planning system (TPS) modeling of small MLC fields, which are important for the initial stages of complex radiation treatments, such as IMRT. The project involved the development of a new solid slab phantom with heterogeneities containing special measurement inserts for thermoluminescent dosimeters (TLD) and radiochromic films. The phantom and the audit methodology have been developed at the IAEA and tested in multi-center studies involving the CRP participants. The results of multi-center testing of the methodology for the two steps of dosimetry audit show that the design of the audit procedures is adequate and the methodology is feasible for meeting the audit objectives. A total of 97% of TLD results in heterogeneity situations obtained in the study were within 3%, and all results were within 5% agreement with the TPS-predicted doses. In contrast, only 64% of small beam profiles were within 3 mm agreement between the TPS-calculated and film-measured doses. Film dosimetry results have highlighted some limitations in TPS modeling of small beam profiles in the direction of MLC leaf movement. Through multi-center testing, any challenges or difficulties in the proposed audit methodology were identified, and the methodology was improved. Using the experience of these studies, the participants could incorporate the auditing procedures into their national programs.

  9. Building dynamic population graph for accurate correspondence detection.

    PubMed

    Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang

    2015-12-01

    In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects by propagating all manually placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism that automatically detects wrong correspondences and then updates the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method, using a dynamic graph construction approach, achieves much higher accuracy and robustness than state-of-the-art pair-wise correspondence detection methods, as well as a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
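
    Schematically, the forward/backward loop can be written as below; this is our paraphrase of the pipeline, and `similarity`, `match` and `plausible` are hypothetical stand-ins for the paper's shape similarity, sparse point matching and error-detection components.

    ```python
    def propagate(model_landmarks, subjects, similarity, match, plausible):
        """Grow correspondences through a dynamically expanding graph of models."""
        landmarks = dict(model_landmarks)        # node -> landmarks; grows over rounds
        pending = set(subjects)
        while pending:
            progressed = set()
            for s in pending:
                # forward step: link s to its most similar node already in the graph
                ref = max(landmarks, key=lambda m: similarity(s, m))
                cand = match(landmarks[ref], s)  # multi-model correspondence detection
                if plausible(cand):              # backward step: reject wrong matches
                    landmarks[s] = cand          # s joins the model set for later rounds
                    progressed.add(s)
            if not progressed:
                break                            # graph cannot be expanded further
            pending -= progressed
        return landmarks
    ```

    The key design choice mirrored here is that successfully matched subjects become models themselves, so hard cases are eventually reached through anatomically intermediate neighbors rather than forced through one distant pre-selected model.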

  10. Stepped Care Versus Direct Face-to-Face Cognitive Behavior Therapy for Social Anxiety Disorder and Panic Disorder: A Randomized Effectiveness Trial.

    PubMed

    Nordgreen, Tine; Haug, Thomas; Öst, Lars-Göran; Andersson, Gerhard; Carlbring, Per; Kvale, Gerd; Tangen, Tone; Heiervang, Einar; Havik, Odd E

    2016-03-01

    The aim of this study was to assess the effectiveness of a cognitive behavioral therapy (CBT) stepped care model (psychoeducation, guided Internet treatment, and face-to-face CBT) compared with direct face-to-face (FtF) CBT. Patients with panic disorder or social anxiety disorder were randomized to either stepped care (n=85) or direct FtF CBT (n=88). Recovery was defined as meeting two of the following three criteria: loss of diagnosis, below cut-off for self-reported symptoms, and functional improvement. No significant differences in intention-to-treat recovery rates were identified between stepped care (40.0%) and direct FtF CBT (43.2%). The majority of the patients who recovered in the stepped care did so at the less therapist-demanding steps (26/34, 76.5%). Moderate to large within-groups effect sizes were identified at posttreatment and 1-year follow-up. The attrition rates were high: 41.2% in the stepped care condition and 27.3% in the direct FtF CBT condition. These findings indicate that the outcome of a stepped care model for anxiety disorders is comparable to that of direct FtF CBT. The rates of improvement at the two less therapist-demanding steps indicate that stepped care models might be useful for increasing patients' access to evidence-based psychological treatments for anxiety disorders. However, attrition in the stepped care condition was high, and research regarding the factors that can improve adherence should be prioritized. Copyright © 2015. Published by Elsevier Ltd.

  11. Lotus-on-chip: computer-aided design and 3D direct laser writing of bioinspired surfaces for controlling the wettability of materials and devices.

    PubMed

    Lantada, Andrés Díaz; Hengsbach, Stefan; Bade, Klaus

    2017-10-16

    In this study we present the combination of a math-based design strategy with direct laser writing as a high-precision technology for promoting solid free-form fabrication of multi-scale biomimetic surfaces. The results show remarkable control of surface topography and wettability properties. Several examples of surfaces inspired by the lotus leaf, which to our knowledge are obtained for the first time following a computer-aided design with this degree of precision, are presented. Design and manufacturing strategies towards microfluidic systems, whose fluid-driving capabilities are obtained simply by a design-controlled wettability of their surfaces, are also discussed and illustrated by means of proofs of concept. In our experience, the synergies between the presented computer-aided design strategy and the capabilities of direct laser writing, supported by innovative writing strategies that increase final part size while maintaining high precision, constitute a relevant step forward towards materials and devices with design-controlled multi-scale and micro-structured surfaces for advanced functionalities. To our knowledge, the surface geometry of the lotus leaf, which has relevant industrial applications thanks to its hydrophobic and self-cleaning behavior, has not previously been modeled and manufactured additively with the degree of precision that we present here.

  12. Multi-objective optimization of solid waste flows: environmentally sustainable strategies for municipalities.

    PubMed

    Minciardi, Riccardo; Paolucci, Massimo; Robba, Michela; Sacile, Roberto

    2008-11-01

    An approach to sustainable municipal solid waste (MSW) management is presented, with the aim of supporting decisions on the optimal flows of solid waste sent to landfill, recycling and different types of treatment plants, whose sizes are also decision variables. This problem is modeled with a non-linear, multi-objective formulation. Specifically, four objectives to be minimized are taken into account, related to economic costs, unrecycled waste, sanitary landfill disposal and environmental impact (incinerator emissions). An interactive reference point procedure has been developed to support decision making; such methods are considered appropriate for multi-objective decision problems in environmental applications. In addition, interactive methods are generally preferred by decision makers, as they can be directly involved in the various steps of the decision process. Some results deriving from the application of the proposed procedure are presented. The application of the procedure is exemplified by considering the interaction with two different decision makers who are assumed to be in charge of planning the MSW system in the municipality of Genova (Italy).
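
    For intuition, one iteration of a reference-point scheme can be posed as a small epigraph linear program; the sketch below uses invented numbers and linearized objectives, not the paper's non-linear Genova model.

    ```python
    from scipy.optimize import linprog

    total = 100.0                                   # kt/yr of waste to allocate (assumed)
    # x = [landfill, recycle, incinerate, t]; three linearized objectives
    cost       = [10.0, 30.0, 50.0]                 # EUR/t
    unrecycled = [1.0,  0.0,  1.0]                  # t unrecycled per t
    emissions  = [0.1,  0.0,  0.8]                  # impact units per t
    z = [2500.0, 40.0, 30.0]                        # decision maker's reference point
    w = [1/1000.0, 1/10.0, 1/10.0]                  # scaling weights

    A_ub, b_ub = [], []
    for coeff, zi, wi in zip([cost, unrecycled, emissions], z, w):
        A_ub.append([wi * c for c in coeff] + [-1.0])   # w_i (f_i(x) - z_i) <= t
        b_ub.append(wi * zi)

    res = linprog(c=[0, 0, 0, 1.0],                 # minimize worst weighted deviation t
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1, 1, 1, 0]], b_eq=[total],
                  bounds=[(0, None)] * 3 + [(None, None)])
    landfill, recycle, incinerate, t = res.x
    ```

    In the interactive procedure, the decision maker inspects the resulting flows, moves the reference point z, and the problem is solved again until a satisfactory compromise is reached.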

  13. Stakeholder conceptualisation of multi-level HIV and AIDS determinants in a Black epicentre.

    PubMed

    Brawner, Bridgette M; Reason, Janaiya L; Hanlon, Kelsey; Guthrie, Barbara; Schensul, Jean J

    2017-09-01

    HIV has reached epidemic proportions among African Americans in the USA but certain urban contexts appear to experience a disproportionate disease burden. Geographic information systems mapping in Philadelphia indicates increased HIV incidence and prevalence in predominantly Black census tracts, with major differences across adjacent communities. What factors shape these geographic HIV disparities among Black Philadelphians? This descriptive study was designed to refine and validate a conceptual model developed to better understand multi-level determinants of HIV-related risk among Black Philadelphians. We used an expanded ecological approach to elicit reflective perceptions from administrators, direct service providers and community members about individual, social and structural factors that interact to protect against or increase the risk for acquiring HIV within their community. Gender equity, social capital and positive cultural mores (e.g., monogamy, abstinence) were seen as the main protective factors. Historical negative contributory influences of racial residential segregation, poverty and incarceration were among the most salient risk factors. This study was a critical next step toward initiating theory-based, multi-level community-based HIV prevention initiatives.

  14. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but many challenges remain in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Numerical anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI is highly resistant to Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.
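
    The multi-step idea (recover long wavelengths first, then progressively relax toward shorter ones) can be illustrated independently of the wave equation. The sketch below is only a structural analogy, not the authors' WMDFWI code: a toy linear forward operator G stands in for seismic modeling, and Gaussian smoothing of the gradient stands in for the low-frequency emphasis that WMD provides.

      # Structural sketch of a multi-step inversion loop (toy problem).
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(0)
      n = 200
      m_true = gaussian_filter1d(rng.standard_normal(n), 5)   # "true" model
      G = rng.standard_normal((300, n)) / np.sqrt(n)          # toy forward operator
      d_obs = G @ m_true

      m = np.zeros(n)                                         # poor initial model
      for sigma in [20.0, 10.0, 5.0, 1.0]:                    # multi-step schedule
          for _ in range(200):
              r = G @ m - d_obs                               # data residual
              g = G.T @ r                                     # adjoint-like gradient
              g = gaussian_filter1d(g, sigma)                 # long wavelengths first
              m -= 0.1 * g
          print(f"sigma={sigma:5.1f}  misfit={np.linalg.norm(G @ m - d_obs):.3e}")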

  15. Individualized Inservice Teacher Education (Project In-Step). Evaluation Report. Phase III.

    ERIC Educational Resources Information Center

    Thurber, John C.

    This is a report on the third phase of Project IN-STEP, which was intended to develop a viable model for individualized, multi-media in-service teacher education programs. (Phase I and II are reported in ED 033 905, and ED 042 709). The rationale for Phase III was to see if the model could be successfully transferred to an area other than teaching…

  16. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

Bionic technology offers new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in searching for a target. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of a microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
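
    The heading-direction navigation rule lends itself to a compact illustration. The following sketch is our reading of the general idea rather than the authors' algorithm; the gains, the speed scaling, and the robot interface are hypothetical.

      # Sketch of a heading-direction controller: steer and scale speed
      # from the deviation between measured and expected headings.
      import math

      def wrap(angle):
          """Wrap an angle to (-pi, pi]."""
          return math.atan2(math.sin(angle), math.cos(angle))

      def heading_step(current, expected, v_max=0.3, k_turn=1.5):
          """Return (linear velocity, angular velocity) for one control step."""
          dev = wrap(expected - current)        # heading deviation
          omega = k_turn * dev                  # turn toward the target heading
          v = v_max * max(0.0, math.cos(dev))   # slow down for large deviations
          return v, omega

      # Example: robot heading 10 deg, odor/sound localization says 60 deg.
      print(heading_step(math.radians(10), math.radians(60)))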

  17. Parametric Modeling Investigation of a Radially-Staged Low-Emission Aviation Combustor

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.

    2016-01-01

    Aviation gas-turbine combustion demands high efficiency, wide operability and minimal trace gas emissions. Performance critical design parameters include injector geometry, combustor layout, fuel-air mixing and engine cycle conditions. The present investigation explores these factors and their impact on a radially staged low-emission aviation combustor sized for a next-generation 24,000-lbf-thrust engine. By coupling multi-fidelity computational tools, a design exploration was performed using a parameterized annular combustor sector at projected 100% takeoff power conditions. Design objectives included nitrogen oxide emission indices and overall combustor pressure loss. From the design space, an optimal configuration was selected and simulated at 7.1, 30 and 85% part-power operation, corresponding to landing-takeoff cycle idle, approach and climb segments. All results were obtained by solution of the steady-state Reynolds-averaged Navier-Stokes equations. Species concentrations were solved directly using a reduced 19-step reaction mechanism for Jet-A. Turbulence closure was obtained using a nonlinear K-epsilon model. This research demonstrates revolutionary combustor design exploration enabled by multi-fidelity physics-based simulation.

  18. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    NASA Astrophysics Data System (ADS)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth of 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering an area of nearly 70 km × 100 km. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. The stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D locations of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
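
    As a rough illustration of how such a model maps a charge distribution to a field change at a sensor, the sketch below discretizes a vertical leader channel into point charges above a perfectly conducting ground and sums each charge's contribution together with that of its image. The charge per segment, the geometry, and the sensor range are hypothetical; the actual model distributes time-dependent dipoles along the reconstructed 3D leader path.

      # Electrostatic sketch: vertical E at a ground sensor from point
      # charges over a conducting ground (image charges included).
      import numpy as np

      EPS0 = 8.854e-12

      def e_vertical(charges, heights, d):
          """Vertical E (V/m) at horizontal distance d (m) from charges
          q_i (C) at heights h_i (m), including their images."""
          q = np.asarray(charges)
          h = np.asarray(heights)
          return np.sum(2.0 * q * h / (4.0 * np.pi * EPS0 * (d**2 + h**2) ** 1.5))

      # Leader channel from 5 km down to 2 km carrying negative charge,
      # discretized into 30 segments; sensor 8 km away (all hypothetical).
      h = np.linspace(5000.0, 2000.0, 30)
      q = np.full(30, -20e-3)                 # -20 mC per segment
      print(f"E_z at 8 km: {e_vertical(q, h, 8000.0):.3f} V/m")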

  19. An integrated approach to patient-specific predictive modeling for single ventricle heart palliation.

    PubMed

    Corsini, Chiara; Baker, Catriona; Kung, Ethan; Schievano, Silvia; Arbia, Gregory; Baretta, Alessia; Biglino, Giovanni; Migliavacca, Francesco; Dubini, Gabriele; Pennati, Giancarlo; Marsden, Alison; Vignon-Clementel, Irene; Taylor, Andrew; Hsia, Tain-Yen; Dorfman, Adam

    2014-01-01

    In patients with congenital heart disease and a single ventricle (SV), ventricular support of the circulation is inadequate, and staged palliative surgery (usually 3 stages) is needed for treatment. In the various palliative surgical stages individual differences in the circulation are important and patient-specific surgical planning is ideal. In this study, an integrated approach between clinicians and engineers has been developed, based on patient-specific multi-scale models, and is here applied to predict stage 2 surgical outcomes. This approach involves four distinct steps: (1) collection of pre-operative clinical data from a patient presenting for SV palliation, (2) construction of the pre-operative model, (3) creation of feasible virtual surgical options which couple a three-dimensional model of the surgical anatomy with a lumped parameter model (LPM) of the remainder of the circulation and (4) performance of post-operative simulations to aid clinical decision making. The pre-operative model is described, agreeing well with clinical flow tracings and mean pressures. Two surgical options (bi-directional Glenn and hemi-Fontan operations) are virtually performed and coupled to the pre-operative LPM, with the hemodynamics of both options reported. Results are validated against postoperative clinical data. Ultimately, this work represents the first patient-specific predictive modeling of stage 2 palliation using virtual surgery and closed-loop multi-scale modeling.

  20. Governance for public health and health equity: The Tröndelag model for public health work.

    PubMed

    Lillefjell, Monica; Magnus, Eva; Knudtsen, Margunn SkJei; Wist, Guri; Horghagen, Sissel; Espnes, Geir Arild; Maass, Ruca; Anthun, Kirsti Sarheim

    2018-06-01

Multi-sectoral governance of population health is linked to the realization that health is the property of many societal systems. This study aims to contribute knowledge and methods that can strengthen the capacities of municipalities to work more systematically, knowledge-based and multi-sectorally in promoting health and health equity in the population. A process evaluation was conducted, applying a mixed-methods research design combining qualitative and quantitative data collection methods. Processes strengthening the systematic and multi-sectoral development, implementation and evaluation of research-based measures to promote health, quality of life, and health equity in, for and with municipalities were revealed. A step-by-step model that emphasizes the promotion of knowledge-based, systematic, multi-sectoral public health work, as well as joint ownership of local resources, initiatives and policies, has been developed. Implementation of systematic, knowledge-based and multi-sectoral governance of public health measures in municipalities demands a shared understanding of the challenges, an updated overview of population health and impact factors, anchoring in plans, new skills and methods for the selection and implementation of measures, as well as the development of trust, ownership, and shared ethics and goals among those involved.

  1. Scenario driven data modelling: a method for integrating diverse sources of data and data streams

    DOEpatents

    Brettin, Thomas S.; Cottingham, Robert W.; Griffith, Shelton D.; Quest, Daniel J.

    2015-09-08

    A system and method of integrating diverse sources of data and data streams is presented. The method can include selecting a scenario based on a topic, creating a multi-relational directed graph based on the scenario, identifying and converting resources in accordance with the scenario and updating the multi-directed graph based on the resources, identifying data feeds in accordance with the scenario and updating the multi-directed graph based on the data feeds, identifying analytical routines in accordance with the scenario and updating the multi-directed graph using the analytical routines and identifying data outputs in accordance with the scenario and defining queries to produce the data outputs from the multi-directed graph.
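
    A minimal sketch of the data structure this describes is given below, assuming the networkx library; the scenario, node names, and relations are invented for illustration only.

      # Sketch: a multi-relational directed graph built for a scenario,
      # then updated with resources, data feeds, and analytic outputs.
      import networkx as nx

      g = nx.MultiDiGraph(scenario="disease outbreak")   # scenario-driven graph

      # Resource converted in accordance with the scenario.
      g.add_edge("genome_db", "pathogen_X", relation="contains_sequence_of")

      # Data feed identified for the scenario.
      g.add_edge("clinic_feed", "pathogen_X", relation="reports_cases_of")

      # Analytical routine output folded back into the graph.
      g.add_edge("pathogen_X", "region_A", relation="predicted_spread_to", score=0.82)

      # A query defined to produce a data output from the multi-directed graph.
      for u, v, data in g.edges(data=True):
          if data.get("relation") == "predicted_spread_to":
              print(u, "->", v, data["score"])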

  2. Look and Feel: Haptic Interaction for Biomedicine

    DTIC Science & Technology

    1995-10-01

    algorithm that is evaluated within the topology of the model. During each time step, forces are summed for each mobile atom based on external forces...volumetric properties; (b) conserving computation power by rendering media local to the interaction point; and (c) evaluating the simulation within...alteration of the model topology. Simulation of the DSM state is accomplished by a multi-step algorithm that is evaluated within the topology of the

  3. Deciphering the Possible Role of Strain Path on the Evolution of Microstructure, Texture, and Magnetic Properties in a Fe-Cr-Ni Alloy

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Khatirkar, Rajesh Kisni; Gupta, Aman; Shekhawat, Satish K.; Suwas, Satyam

    2018-06-01

In the present work, the influence of strain path on the evolution of microstructure, crystallographic texture, and magnetic properties of a two-phase Fe-Cr-Ni alloy was investigated. The Fe-Cr-Ni alloy had nearly equal proportions of austenite and ferrite and was cold rolled up to a true strain of 1.6 (thickness reduction) using two different strain paths: unidirectional rolling and multi-step cross rolling. The microstructures were characterized by scanning electron microscopy (SEM) and electron backscattered diffraction (EBSD), while crystallographic textures were determined using X-ray diffraction. For magnetic characterization, B-H loops and M-H curves were measured and magnetic force microscopy was performed. After unidirectional rolling, ferrite showed the presence of a strong α-fiber (rolling direction, RD//<110>) and austenite showed a strong brass-type texture (consisting of Brass (Bs) ({110}<112>), Goss ({110}<001>), and S ({123}<634>)). After multi-step cross rolling, a strong rotated cube ({100}<110>) texture developed in ferrite, while austenite showed an ND (normal direction) rotated brass (10 deg) texture. The strain-induced martensite (SIM) content was found to be higher in unidirectionally rolled samples than in multi-step cross-rolled samples. The coherently diffracting domain size, micro-strain, coercivity, and core loss also showed a strong correlation with strain and strain path. More strain was partitioned into austenite than ferrite during deformation (unidirectional as well as cross rolling). Further, the strain partitioning (in both austenite and ferrite) was found to be higher in unidirectionally rolled samples.

  4. Ambivalence, communication and past use: understanding what influences women's intentions to use contraceptives.

    PubMed

    Campo, Shelly; Askelson, Natoshia M; Spies, Erica L; Losch, Mary

    2012-01-01

Unintended pregnancy among women in the 18-30 age group is a public health concern. The Extended Parallel Process Model (EPPM) provides a framework for exploring how women's perceptions of threat, efficacy, and fear influence intentions to use contraceptives. Past use and communication with best friends and partners were also considered. A telephone survey of 18-30-year-old women (N = 599) was completed. After univariate and bivariate analyses were conducted, the variables were entered into a hierarchical, multivariate linear regression with three steps consistent with the EPPM to predict behavioral intention. The first step included the demographic variables of relationship status and income. The constructs for the EPPM were entered into step 2. Step 3 contained the fear measure. The model for the third step was significant, F(10,471) = 36.40, p < 0.001, and the variance explained by this complete model was 0.42. Results suggest that perceived severity of the consequences of an unintended pregnancy (p < 0.01), communication with friends (p < 0.01) and last sexual partner (p < 0.05), relationship status (p < 0.01), and past use (p < 0.001) were associated with women's intentions to use contraceptives. A woman's perception of the severity was related to her intention to use contraceptives. Half of the women (50.3%) reported ambivalence about the severity of an unintended pregnancy. In our study, talking with their last sexual partner had a positive effect on intentions to use contraceptives, while talking with friends influenced intentions in a negative direction. These results reconfirm the need for public health practitioners and health care providers to consider level of ambivalence toward unintended pregnancy, communication with partner, and relationship status when trying to improve women's contraceptive behaviors. Implications for effective communication interventions are discussed.
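
    The three-step hierarchy can be reproduced structurally with standard tools. The sketch below uses statsmodels on simulated data; the variable names mirror the EPPM constructs, but all values and coefficients are hypothetical, so the printed statistics will not match the paper's.

      # Sketch of a three-step hierarchical regression (simulated data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 599
      df = pd.DataFrame({
          "relationship": rng.integers(0, 2, n), "income": rng.normal(size=n),
          "severity": rng.normal(size=n), "susceptibility": rng.normal(size=n),
          "efficacy": rng.normal(size=n), "fear": rng.normal(size=n),
      })
      df["intention"] = (0.3 * df.severity + 0.2 * df.relationship
                         + rng.normal(scale=1.0, size=n))

      steps = [["relationship", "income"],                 # step 1: demographics
               ["severity", "susceptibility", "efficacy"], # step 2: EPPM constructs
               ["fear"]]                                   # step 3: fear measure
      cols = []
      for i, block in enumerate(steps, 1):
          cols += block
          fit = sm.OLS(df["intention"], sm.add_constant(df[cols])).fit()
          print(f"step {i}: R^2 = {fit.rsquared:.3f}, F = {fit.fvalue:.1f}")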

  5. Foundations of modeling in cryobiology-I: concentration, Gibbs energy, and chemical potential relationships.

    PubMed

    Anderson, Daniel M; Benson, James D; Kearsley, Anthony J

    2014-12-01

Mathematical modeling plays an enormously important role in understanding the behavior of cells, tissues, and organs undergoing cryopreservation. Uses of these models include explanation of phenomena, exploration of potential theories of damage or success, development of equipment, and refinement of optimal cryopreservation/cryoablation strategies. Over the last half century there has been a considerable amount of work in bio-heat and mass-transport, and these models and theories have been readily and repeatedly applied to cryobiology with much success. However, there are significant gaps between experimental and theoretical results that suggest missing links in models. One source for these potential gaps is that cryobiology is at the intersection of several very challenging aspects of transport theory: it couples multi-component, moving boundary, multiphase solutions that interact through a semipermeable elastic membrane with multicomponent solutions in a second time-varying domain, during a two-hundred Kelvin temperature change with multi-molar concentration gradients and multi-atmosphere pressure changes. In order to better identify potential sources of error, and to point to future directions in modeling and experimental research, we present a three-part series to build from first principles a theory of coupled heat and mass transport in cryobiological systems accounting for all of these effects. The hope of this series is that by presenting and justifying all steps, conclusions may be made about the importance of key assumptions, perhaps pointing to areas of future research or model development, but importantly, lending weight to standard simplification arguments that are often made in heat and mass transport. In this first part, we review concentration variable relationships, their impact on choices for Gibbs energy models, and their impact on chemical potentials. Copyright © 2014 Elsevier Inc. All rights reserved.
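
    For concreteness, the baseline relationships this first part reviews can be stated in standard textbook form (general solution thermodynamics, not a result specific to the paper):

      \mu_i(T, p, \{x\}) = \mu_i^{0}(T, p) + RT \ln a_i, \qquad a_i = \gamma_i x_i,
      \qquad \mu_i = \left( \frac{\partial G}{\partial n_i} \right)_{T,\, p,\, n_{j \neq i}},

    where a_i is the activity and \gamma_i the activity coefficient of species i; osmotic equilibrium of water across the semipermeable membrane then requires \mu_w^{\mathrm{in}} = \mu_w^{\mathrm{out}}.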

  6. Application of kinetic flux vector splitting scheme for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion-type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time-stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
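
    The structure of such a splitting scheme (a flux step for the hyperbolic part, then an implicit relaxation step) can be shown on a scalar toy problem. The sketch below is not the authors' scheme: first-order upwinding stands in for the KFVS fluxes, backward Euler for the semi-implicit Runge-Kutta stage, and the equilibrium state is invented.

      # Structural sketch of hyperbolic/relaxation operator splitting.
      import numpy as np

      nx, dx, dt, tau = 200, 1.0 / 200, 2e-3, 1e-2
      x = np.linspace(0.0, 1.0, nx)
      u = np.where(x < 0.5, 1.0, 0.1)          # initial step profile
      u_eq = 0.5                               # relaxation target (hypothetical)

      for _ in range(100):
          # Hyperbolic step: u_t + a u_x = 0 with a > 0, first-order upwind.
          a = 1.0
          u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
          # Relaxation step solved implicitly: u_t = -(u - u_eq)/tau.
          u = (u + dt / tau * u_eq) / (1.0 + dt / tau)

      print("final range:", u.min(), u.max())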

  7. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

The workload of relay protection setting calculations in multi-loop networks can be reduced effectively by optimizing the setting calculation sequence. A new method for sequencing the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), is proposed to overcome the shortcomings of current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships among relays, which can lead to larger iterative calculation workloads in setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples are calculated, with results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.

  8. Assessment of suturing in the vertical plane shows the efficacy of the multi-degree-of-freedom needle driver for neonatal laparoscopy.

    PubMed

    Takazawa, Shinya; Ishimaru, Tetsuya; Fujii, Masahiro; Harada, Kanako; Sugita, Naohiko; Mitsuishi, Mamoru; Iwanaka, Tadashi

    2013-11-01

We have developed a thin needle driver with multiple degrees of freedom (DOFs) for neonatal laparoscopic surgery. The tip of this needle driver has three DOFs for grasp, deflection and rotation. Our aim was to evaluate the performance of the multi-DOF needle driver in vertical-plane suturing. Six pediatric surgeons performed suturing tasks in four directions in the vertical plane using the multi-DOF needle driver and a conventional one. The assessed parameters were the accuracy of insertion and exit, the depth of the suture, the inclination angle of the needle and the force applied to the model. In left- and right-direction sutures, the inclination angle of the needle with the multi-DOF needle driver was significantly smaller than that with the conventional one (p = 0.014, 0.042, respectively). In left- and right-direction sutures, the force for pulling the model with the multi-DOF needle driver was smaller than that with the conventional one (p = 0.036, 0.010, respectively). This study showed that multi-directional suturing on a vertical plane using the multi-DOF needle driver had better needle trajectories and was less invasive than a conventional needle driver.

  9. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
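
    In the SISO case the matrix-fraction description reduces to numerator and denominator polynomials, and the weighted least-squares step can be sketched in a few lines. The data, model orders, and exponential weight below are illustrative, not the paper's experimental setup.

      # Frequency-weighted LS fit of a transfer function from frequency
      # response data (scalar analogue of the matrix-fraction fit).
      import numpy as np

      # Synthetic frequency response of H(s) = 1 / (s^2 + 0.4 s + 1).
      w = np.linspace(0.1, 5.0, 200)
      s = 1j * w
      H = 1.0 / (s**2 + 0.4 * s + 1.0)

      # Model H(s) = (b0 + b1 s) / (1 + a1 s + a2 s^2) is linear in the
      # unknowns after rearranging: b0 + b1 s - a1 s H - a2 s^2 H = H.
      A = np.column_stack([np.ones_like(s), s, -s * H, -s**2 * H])
      weight = np.exp(-0.2 * w)               # exponential frequency weighting
      Aw = A * weight[:, None]
      yw = H * weight

      # Solve the weighted LS problem over real and imaginary parts jointly.
      theta, *_ = np.linalg.lstsq(np.vstack([Aw.real, Aw.imag]),
                                  np.concatenate([yw.real, yw.imag]), rcond=None)
      b0, b1, a1, a2 = theta
      print(f"b0={b0:.3f} b1={b1:.3f} a1={a1:.3f} a2={a2:.3f}")  # ~1, 0, 0.4, 1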

  10. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

Multi-frequency measurements of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, are commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper and are independent of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can easily be used to process various types of EM measurements without major changes to the inversion scheme.
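
    A structural sketch of such a bounded Levenberg iteration with a jump-out restart is given below, fitted to a Cole-Cole complex resistivity model. The damping schedule, bounds, restart trigger, and synthetic data are our illustrative choices, not the paper's tuned values.

      # Bounded Levenberg iteration with a "jump-out" restart (sketch).
      import numpy as np

      def cole_cole(p, w):
          rho0, m, tau, c = p
          return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c)))

      def resid(p, w, d):
          r = cole_cole(p, w) - d
          return np.concatenate([r.real, r.imag])

      def jac(p, w, d, h=1e-6):
          """Forward-difference Jacobian of the stacked residual."""
          r0 = resid(p, w, d)
          J = np.empty((r0.size, p.size))
          for k in range(p.size):
              dp = p.copy()
              dp[k] += h * max(1.0, abs(p[k]))
              J[:, k] = (resid(dp, w, d) - r0) / (dp[k] - p[k])
          return J

      rng = np.random.default_rng(2)
      w = np.logspace(-2, 4, 40)
      p_true = np.array([100.0, 0.5, 0.01, 0.6])
      data = cole_cole(p_true, w)

      lo = np.array([1.0, 0.0, 1e-6, 0.1]); hi = np.array([1e4, 1.0, 1e2, 1.0])
      p = np.array([10.0, 0.2, 1.0, 0.3])        # poor starting guess
      lam = 1e-2
      for it in range(200):
          r = resid(p, w, data); J = jac(p, w, data)
          step = np.linalg.solve(J.T @ J + lam * np.eye(4), -J.T @ r)
          p_new = np.clip(p + step, lo, hi)      # honor physical bounds
          if np.linalg.norm(resid(p_new, w, data)) < np.linalg.norm(r):
              p, lam = p_new, lam / 3            # accept, relax damping
          else:
              lam *= 3                           # reject, increase damping
          if lam > 1e8:                          # stagnation: jump-out restart
              p = lo + rng.random(4) * (hi - lo); lam = 1e-2
      print("estimated:", p)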

  11. Comparing observations and morphodynamic numerical modeling of upper-flow-regime bedforms in fjords and outcrop

    NASA Astrophysics Data System (ADS)

    Hubbard, Stephen; Kostic, Svetlana; Englert, Rebecca; Coutts, Daniel; Covault, Jacob

    2017-04-01

Recent bathymetric observations of fjord prodeltas in British Columbia, Canada, reveal evidence for multi-phase channel erosion and deposition. These processes are interpreted to be related to the upstream migration of upper-flow-regime bedforms, namely cyclic steps. We integrate data from high-resolution bathymetric surveys and monitoring to inform morphodynamic numerical models of turbidity currents and associated bedforms in the Squamish prodelta. These models are applied to the interpretation of upper-flow-regime bedforms, including cyclic steps, antidunes, and/or transitional bedforms, in Late Cretaceous submarine conduit strata of the Nanaimo Group at Gabriola Island, British Columbia. In the Squamish prodelta, as bedforms migrate, >90% of the deposits are reworked, making morphology- and facies-based recognition challenging. Sedimentary bodies are 5-30 m long, 0.5-2 m thick and <30 m wide. The Nanaimo Group comprises scour fills of similar scale composed of structureless sandstone, with laminated siltstone locally overlying basal erosion surfaces. Backset stratification is locally observed; packages of 2-4 backset beds, each of which is up to 60 cm thick and up to 15 m long (along dip), commonly share composite basal erosion surfaces. Numerous scour fills are recognized over thin sections (<4 m), indicating limited aggradation and preservation of the bedforms. Preliminary morphodynamic numerical modeling indicates that Squamish and Nanaimo bedforms could be transitional upper-flow-regime bedforms between cyclic steps and antidunes. It is likely that cyclic steps and related upper-flow-regime bedforms are common in strata deposited on high-gradient submarine slopes. Evidence for updip-migrating cyclic-step and related deposits informs a revised interpretation of a high-gradient setting dominated by supercritical flow, or alternating supercritical and subcritical flow, in the Nanaimo Group. Integrating direct observations, morphodynamic numerical modeling, and outcrop characterization better constrains fundamental processes that operate in deep-water depositional systems; our analysis aims to further constrain the stratigraphy and preservation potential of upper-flow-regime bedforms.

  12. Immobilised enzyme microreactor for screening of multi-step bioconversions: characterisation of a de novo transketolase-ω-transaminase pathway to synthesise chiral amino alcohols.

    PubMed

    Matosevic, S; Lye, G J; Baganz, F

    2011-09-20

Complex molecules are synthesised via a number of multi-step reactions in living cells. In this work, we describe the development of a continuous flow immobilized enzyme microreactor platform for use in evaluation of multi-step bioconversion pathways, demonstrating a de novo transketolase/ω-transaminase-linked asymmetric amino alcohol synthesis. The prototype dual microreactor is based on the reversible attachment of His₆-tagged enzymes via Ni-NTA linkage to two surface-derivatised capillaries connected in series. Kinetic parameters established for the model transketolase (TK)-catalysed conversion of lithium-hydroxypyruvate (Li-HPA) and glycolaldehyde (GA) to L-erythrulose using a continuous flow system with online monitoring of reaction output were in good agreement with kinetic parameters determined for TK in stop-flow mode. By coupling the transketolase-catalysed chiral ketone-forming reaction with the biocatalytic addition of an amine to the TK product using a transaminase (ω-TAm), it is possible to generate chiral amino alcohols from achiral starting compounds. We demonstrated this in a two-step configuration, where the TK reaction was followed by the ω-TAm-catalysed amination of L-erythrulose to synthesise 2-amino-1,3,4-butanetriol (ABT). Synthesis of the ABT product via the dual reaction and the on-line monitoring of each component provided a full profile of the de novo two-step bioconversion and demonstrated the utility of this microreactor system to provide in vitro multi-step pathway evaluation. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Using the Binary Phase-Field Crystal Model to Describe Non-Classical Nucleation Pathways in Gold Nanoparticles

    NASA Astrophysics Data System (ADS)

    Smith, Nathan; Provatas, Nikolas

Recent experimental work has shown that gold nanoparticles can precipitate from an aqueous solution through a non-classical, multi-step nucleation process. This multi-step process begins with spinodal decomposition into solute-rich and solute-poor liquid domains, followed by nucleation from within the solute-rich domains. We present a binary phase-field crystal theory that shows the same phenomenology and examine various cross-over regimes in the growth and coarsening of liquid and solid domains. We would like to thank the Canada Research Chairs (CRC) program for funding this work.

  14. Direct electrochemistry of cytochrome c immobilized on titanium nitride/multi-walled carbon nanotube composite for amperometric nitrite biosensor.

    PubMed

    Haldorai, Yuvaraj; Hwang, Seung-Kyu; Gopalan, Anantha-Iyengar; Huh, Yun Suk; Han, Young-Kyu; Voit, Walter; Sai-Anand, Gopalan; Lee, Kwang-Pill

    2016-05-15

In this report, a titanium nitride (TiN) nanoparticle-decorated multi-walled carbon nanotube (MWCNT) nanocomposite is fabricated via a two-step process. The two steps involve the decoration of titanium dioxide nanoparticles onto the MWCNT surface and a subsequent thermal nitridation. Transmission electron microscopy shows that TiN nanoparticles with a mean diameter of ≤ 20 nm are homogeneously dispersed on the MWCNT surface. The direct electrochemistry and electrocatalysis of cytochrome c immobilized on the MWCNT-TiN composite modified on a glassy carbon electrode for nitrite sensing are investigated. Under optimum conditions, the current response is linear with concentration from 1 µM to 2000 µM, with a sensitivity of 121.5 µA µM(-1)cm(-2) and a low detection limit of 0.0014 µM. The proposed electrode shows good reproducibility and long-term stability. The applicability of the as-prepared biosensor is validated by the successful detection of nitrite in tap and sea water samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Random regression models on Legendre polynomials to estimate genetic parameters for weights from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Albuquerque, L G; Alencar, M M

    2010-08-01

The objective of this work was to estimate covariance functions for direct and maternal genetic effects, animal and maternal permanent environmental effects, and subsequently, to derive relevant genetic parameters for growth traits in Canchim cattle. Data comprised 49,011 weight records on 2435 females from birth to adult age. The model of analysis included fixed effects of contemporary groups (year and month of birth and at weighing) and age of dam as a quadratic covariable. Mean trends were taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were allowed to vary and were modelled by a step function with 1, 4 or 11 classes based on the animal's age. The model fitting four classes of residual variances was the best. A total of 12 random regression models from second to seventh order were used to model direct and maternal genetic effects, animal and maternal permanent environmental effects. The model with direct and maternal genetic effects, animal and maternal permanent environmental effects fitted by quadratic, cubic, quintic and linear Legendre polynomials, respectively, was the most adequate to describe the covariance structure of the data. Estimates of direct and maternal heritability obtained by multi-trait (seven traits) and random regression models were very similar. Selection for higher weight at any age, especially after weaning, will produce an increase in mature cow weight. The possibility to modify the growth curve in Canchim cattle to obtain animals with rapid growth at early ages and moderate to low mature cow weight is limited.
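
    The covariable construction behind such models is compact enough to sketch: ages are standardized to [-1, 1] and evaluated with normalized Legendre polynomials. The orders and ages below are illustrative.

      # Legendre covariate matrix for a random regression model (sketch).
      import numpy as np
      from numpy.polynomial import legendre

      def legendre_basis(age, age_min, age_max, order):
          """Rows = records, columns = Legendre polynomials 0..order."""
          t = -1.0 + 2.0 * (np.asarray(age, float) - age_min) / (age_max - age_min)
          # Normalized Legendre polynomial j evaluated at t.
          cols = [np.sqrt((2 * j + 1) / 2.0) * legendre.legval(t, [0] * j + [1])
                  for j in range(order + 1)]
          return np.column_stack(cols)

      ages = np.array([1, 240, 450, 750, 1800])     # days, birth to adult age
      Phi = legendre_basis(ages, 1, 1800, order=2)  # quadratic, as for direct genetic
      print(Phi.round(3))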

  16. Nanogenerators consisting of direct-grown piezoelectrics on multi-walled carbon nanotubes using flexoelectric effects

    PubMed Central

    Han, Jin Kyu; Jeon, Do Hyun; Cho, Sam Yeon; Kang, Sin Wook; Yang, Sun A.; Bu, Sang Don; Myung, Sung; Lim, Jongsun; Choi, Moonkang; Lee, Minbaek; Lee, Min Ku

    2016-01-01

    We report the first attempt to prepare a flexoelectric nanogenerator consisting of direct-grown piezoelectrics on multi-walled carbon nanotubes (mwCNT). Direct-grown piezoelectrics on mwCNTs are formed by a stirring and heating method using a Pb(Zr0.52Ti0.48)O3 (PZT)-mwCNT precursor solution. We studied the unit cell mismatch and strain distribution of epitaxial PZT nanoparticles, and found that lattice strain is relaxed along the growth direction. A PZT-mwCNT nanogenerator was found to produce a peak output voltage of 8.6 V and an output current of 47 nA when a force of 20 N is applied. Direct-grown piezoelectric nanogenerators generate a higher voltage and current than simple mixtures of PZT and CNTs resulting from the stronger connection between PZT crystals and mwCNTs and an enhanced flexoelectric effect caused by the strain gradient. These experiments represent a significant step toward the application of nanogenerators using piezoelectric nanocomposite materials. PMID:27406631

  17. Are Physics-Based Simulators Ready for Prime Time? Comparisons of RSQSim with UCERF3 and Observations.

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.

    2017-12-01

    Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and doesn't incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.

  18. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (≈keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.

  19. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

The motion of a particle subject to thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes around 1 μm, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step becomes very small when the motion of a large particle is included. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. The collision weight factor allows a large time step interval to be adopted: for a particle size of 1 μm, the time step interval is about a million times longer than the conventional DSMC time step, so the computation time is reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of 1 μm in size. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
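
    A back-of-the-envelope sketch of the weight-factor argument is given below; the gas conditions and particle properties are rough illustrative values, not taken from the paper.

      # Collision-weight-factor estimate: each event transfers the
      # momentum of W molecules, so the usable time step grows by ~W.
      import numpy as np

      kB = 1.380649e-23
      T, m_gas = 300.0, 6.63e-26         # Ar temperature and molecular mass
      n_gas = 3.2e18                     # number density at ~0.1 mTorr (approx.)
      r_p, rho_p = 0.5e-6, 2200.0        # 1 um graphite sphere
      m_p = rho_p * 4.0 / 3.0 * np.pi * r_p**3

      v_mean = np.sqrt(8 * kB * T / (np.pi * m_gas))   # mean molecular speed
      nu = n_gas * np.pi * r_p**2 * v_mean             # collision rate on particle
      W = 1e6                                          # collision weight factor
      print(f"collisions/s on particle: {nu:.3e}")
      print(f"per-event momentum kick with W: {W * m_gas * v_mean:.3e} kg m/s")
      print(f"time step: {1/nu:.3e} s unweighted vs {W/nu:.3e} s weighted")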

  20. A Multi-Stage Model for Fundamental Functional Properties in Primary Visual Cortex

    PubMed Central

    Hesam Shariati, Nastaran; Freeman, Alan W.

    2012-01-01

    Many neurons in mammalian primary visual cortex have properties such as sharp tuning for contour orientation, strong selectivity for motion direction, and insensitivity to stimulus polarity, that are not shared with their sub-cortical counterparts. Successful models have been developed for a number of these properties but in one case, direction selectivity, there is no consensus about underlying mechanisms. We here define a model that accounts for many of the empirical observations concerning direction selectivity. The model describes a single column of cat primary visual cortex and comprises a series of processing stages. Each neuron in the first cortical stage receives input from a small number of on-centre and off-centre relay cells in the lateral geniculate nucleus. Consistent with recent physiological evidence, the off-centre inputs to cortex precede the on-centre inputs by a small (∼4 ms) interval, and it is this difference that confers direction selectivity on model neurons. We show that the resulting model successfully matches the following empirical data: the proportion of cells that are direction selective; tilted spatiotemporal receptive fields; phase advance in the response to a stationary contrast-reversing grating stepped across the receptive field. The model also accounts for several other fundamental properties. Receptive fields have elongated subregions, orientation selectivity is strong, and the distribution of orientation tuning bandwidth across neurons is similar to that seen in the laboratory. Finally, neurons in the first stage have properties corresponding to simple cells, and more complex-like cells emerge in later stages. The results therefore show that a simple feed-forward model can account for a number of the fundamental properties of primary visual cortex. PMID:22496811
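
    The core mechanism (a small spatial offset between on and off inputs combined with a ~4 ms latency difference) can be demonstrated with a toy linear-rectified unit. The sketch below is only an illustration of the principle, not the paper's multi-stage model; the spatial offset is deliberately matched to the latency difference so that one drift direction cancels in the linear sum.

      # Toy demonstration: off-before-on timing yields direction selectivity.
      import numpy as np

      dt = 1e-4
      t = np.arange(0.0, 1.0, dt)
      f_t = 4.0                          # grating temporal frequency (Hz)
      k = 2 * np.pi                      # spatial frequency (1 cycle/unit)
      lag_on, lag_off = 0.008, 0.004     # off input leads by 4 ms
      delta = lag_on - lag_off
      dx = 2 * np.pi * f_t * delta / k   # spatial offset matched to the lag

      def rate(d):
          """Mean rectified response to a grating drifting in direction d."""
          s = lambda x, tt: np.sin(k * x - d * 2 * np.pi * f_t * tt)
          lin = s(0.0, t - lag_on) + (-s(dx, t - lag_off))  # on + off inputs
          return np.maximum(lin, 0.0).mean()

      r_minus, r_plus = rate(-1), rate(+1)
      di = abs(r_minus - r_plus) / (r_minus + r_plus + 1e-12)
      print(f"r(-1)={r_minus:.4f} r(+1)={r_plus:.4f} direction index={di:.2f}")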

  1. Robust model predictive control for multi-step short range spacecraft rendezvous

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei

    2018-07-01

This work presents a robust model predictive control (MPC) approach for the multi-step short-range spacecraft rendezvous problem. During the short-range phase concerned, the chaser is assumed to be initially outside the line-of-sight (LOS) cone. The rendezvous process therefore naturally includes two steps: the first step transfers the chaser into the LOS cone, and the second step transfers the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework named Mixed MPC (M-MPC) is proposed, which combines the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than separated artificially, and its computation workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints are satisfied with a prescribed probability. It improves on the robust technique proposed by Gavilan et al. by eliminating unnecessary conservativeness through explicit incorporation of known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.

  2. 3-D Wave-Structure Interaction with Coastal Sediments - A Multi-Physics/Multi-Solution Techniques Approach

    DTIC Science & Technology

    2007-01-01

Stokes (RANS) and the particle finite element method (PFEM) will be used in the water/mine/sand domain. Sand and the geomaterials around the sand will...wave propagation over a bottom mine at various time steps (Soil and Foam model). [Figure labels: SOLID/FEM, SAND/SPH, GEOMATERIALS, FNPF/BEM, RANS/PFEM]

  3. Effect of Turbulent Fluctuations on Infrared Radiation from a Tactical Missile Plume

    DTIC Science & Technology

    1982-02-01

Turbulence-Chemistry Interaction...a two-equation, turbulence kinetic energy model. The code is capable of handling multi-species, multi-step chemistry. However, it does not calculate...that are expected to be important in turbulence-chemistry and turbulence-radiation interactions. The program calculates only two turbulence quantities

  4. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
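
    The whiteness check at the heart of the procedure can be sketched with standard tools: fit a model, take the one-step-ahead prediction errors, and test them for residual autocorrelation. The example below deliberately underfits an AR(2) series with an AR(1) model, assuming statsmodels for the Ljung-Box test; the Lagrange Multiplier revision step is omitted.

      # White noise test of one-step-ahead prediction errors (sketch).
      import numpy as np
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(3)
      n = 500
      x = np.zeros(n)
      for i in range(2, n):                    # true process is AR(2)
          x[i] = 0.5 * x[i - 1] + 0.3 * x[i - 2] + rng.standard_normal()

      # Deliberately underfit with AR(1): residuals should not be white.
      phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
      resid = x[1:] - phi * x[:-1]

      lb = acorr_ljungbox(resid, lags=[10])
      print(lb)   # small p-value -> unmodeled dependence -> revise the map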

  5. Reynolds-averaged Navier-Stokes based ice accretion for aircraft wings

    NASA Astrophysics Data System (ADS)

    Lashkajani, Kazem Hasanzadeh

    This thesis addresses one of the current issues in flight safety towards increasing icing simulation capabilities for prediction of complex 2D and 3D glaze ice shapes over aircraft surfaces. During the 1980's and 1990's, the field of aero-icing was established to support design and certification of aircraft flying in icing conditions. The multidisciplinary technologies used in such codes were: aerodynamics (panel method), droplet trajectory calculations (Lagrangian framework), thermodynamic module (Messinger model) and geometry module (ice accretion). These are embedded in a quasi-steady module to simulate the time-dependent ice accretion process (multi-step procedure). The objectives of the present research are to upgrade the aerodynamic module from Laplace to Reynolds-Average Navier-Stokes equations solver. The advantages are many. First, the physical model allows accounting for viscous effects in the aerodynamic module. Second, the solution of the aero-icing module directly provides the means for characterizing the aerodynamic effects of icing, such as loss of lift and increased drag. Third, the use of a finite volume approach to solving the Partial Differential Equations allows rigorous mesh and time convergence analysis. Finally, the approaches developed in 2D can be easily transposed to 3D problems. The research was performed in three major steps, each providing insights into the overall numerical approaches. The most important realization comes from the need to develop specific mesh generation algorithms to ensure feasible solutions in very complex multi-step aero-icing calculations. The contributions are presented in chronological order of their realization. First, a new framework for RANS based two-dimensional ice accretion code, CANICE2D-NS, is developed. A multi-block RANS code from U. of Liverpool (named PMB) is providing the aerodynamic field using the Spalart-Allmaras turbulence model. The ICEM-CFD commercial tool is used for the iced airfoil remeshing and field smoothing. The new coupling is fully automated and capable of multi-step ice accretion simulations via a quasi-steady approach. In addition, the framework allows for flow analysis and aerodynamic performance prediction of the iced airfoils. The convergence of the quasi-steady algorithm is verified and identifies the need for an order of magnitude increase in the number of multi-time steps in icing simulations to achieve solver independent solutions. Second, a Multi-Block Navier-Stokes code, NSMB, is coupled with the CANICE2D icing framework. Attention is paid to the roughness implementation of the ONERA roughness model within the Spalart-Allmaras turbulence model, and to the convergence of the steady and quasi-steady iterative procedure. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases. The results of CANICE2D-NS show good agreement with experimental data both in terms of predicted ice shapes as well as aerodynamic analysis of predicted and experimental ice shapes. Third, an efficient single-block structured Navier-Stokes CFD code, NSCODE, is coupled with the CANICE2D-NS icing framework. Attention is paid to the roughness implementation of the Boeing model within the Spalart-Allmaras turbulence model, and to acceleration of the convergence of the steady and quasi-steady iterative procedures. 
Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases, including code to code comparisons with the same framework coupled with the NSMB Navier-Stokes solver. The efficiency of the J-multigrid approach to solve the flow equations on complex iced geometries is demonstrated. Since it was noted in all these calculations that the ICEM-CFD grid generation package produced a number of issues such as inefficient mesh quality and smoothing deficiencies (notably grid shocks), a fourth study proposes a new mesh generation algorithm. A PDE based multi-block structured grid generation code, NSGRID, is developed for this purpose. The study includes the developments of novel mesh generation algorithms over complex glaze ice shapes containing multi-curvature ice accretion geometries, such as single/double ice horns. The twofold approaches tackle surface geometry discretization as well as field mesh generation. An adaptive curvilinear curvature control algorithm is constructed solving a 1D elliptic PDE equation with periodic source terms. This method controls the arclength grid spacing so that high convex and concave curvature regions around ice horns are appropriately captured and is shown to effectively treat the grid shock problem. Then, a novel blended method is developed by defining combinations of source terms with 2D elliptic equations. The source terms include two common control functions, Sorenson and Spekreijse, and an additional third source term to improve orthogonality. This blended method is shown to be very effective for improving grid quality metrics for complex glaze ice meshes with RANS resolution. The performance in terms of residual reduction per non-linear iteration of several solution algorithms (Point-Jacobi, Gauss-Seidel, ADI, Point and Line SOR) are discussed within the context of a full Multi-grid operator. Details are given on the various formulations used in the linearization process. It is shown that the performance of the solution algorithm depends on the type of control function used. Finally, the algorithms are validated on standard complex experimental ice shapes, demonstrating the applicability of the methods. Finally, the automated framework of RANS based two-dimensional multi-step ice accretion, CANICE2D-NS is developed, coupled with a Multi-Block Navier-Stokes CFD code, NSCODE2D, a Multi-Block elliptic grid generation code, NSGRID2D, and a Multi-Block Eulerian droplet solver, NSDROP2D (developed at Polytechnique Montreal). The framework allows Lagrangian and Eulerian droplet computations within a chimera approach treating multi-elements geometries. The code was tested on public and confidential validation test cases including standard NATO cases. In addition, up to 10 times speedup is observed in the mesh generation procedure by using the implicit line SOR and ADI smoothers within a multigrid procedure. The results demonstrate the benefits and robustness of the new framework in predicting ice shapes and aerodynamic performance parameters.

  6. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE PAGES

    Chen, Bo; Chen, Chen; Wang, Jianhui; ...

    2017-07-07

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.
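
    The structure of such a restoration MILP can be illustrated with a deliberately tiny example: binary variables decide which loads are energized at each time step, a capacity limit stands in for the DG/ESS operational constraints, and an inter-temporal constraint forbids shedding a load once it is picked up. All names and numbers are invented for illustration and do not reproduce the paper's formulation.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

loads = {"L1": 40, "L2": 25, "L3": 60, "L4": 15}   # load demands (kW), illustrative
steps = range(3)
cap = [60, 90, 120]                                # available generation per step (kW)

prob = LpProblem("service_restoration", LpMaximize)
e = {(l, t): LpVariable(f"e_{l}_{t}", cat=LpBinary) for l in loads for t in steps}

# Objective: maximize served demand over the whole horizon.
prob += lpSum(loads[l] * e[l, t] for l in loads for t in steps)
for t in steps:
    # Operational constraint: energized demand must fit the step's capacity.
    prob += lpSum(loads[l] * e[l, t] for l in loads) <= cap[t]
    if t > 0:
        for l in loads:
            # Inter-temporal constraint: once restored, a load stays energized.
            prob += e[l, t] >= e[l, t - 1]

prob.solve(PULP_CBC_CMD(msg=0))
print({t: [l for l in loads if e[l, t].value() == 1] for t in steps})
```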

  7. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Bo; Chen, Chen; Wang, Jianhui

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  8. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    The utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of the incompressible Navier-Stokes equations, which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of the Compute Unified Device Architecture (CUDA) which are critical to the bandwidth-bound nature of the present method are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and the fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grid points. An enhanced speedup of 48 times is reached for the same problem using a Tesla P100 GPU.
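
    The bottleneck referred to above is the large batch of independent tridiagonal solves generated by each ADI sweep. A serial reference version of that kernel, the Thomas algorithm, is sketched below; the GPU implementation batches thousands of such systems through CUDA libraries rather than looping like this.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve one tridiagonal system. a: sub-diagonal (n-1), b: diagonal (n),
    c: super-diagonal (n-1), d: right-hand side (n)."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```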

  9. Simulating an underwater vehicle self-correcting guidance system with Simulink

    NASA Astrophysics Data System (ADS)

    Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe

    2008-09-01

    Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, without waiting for research to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means of analyzing the simulation results. A simulation structure based on Simulink that deals with both issues was proposed. Initially, a mathematical model of relative motion between the vehicle and the target was developed, which was then encapsulated as a subsystem. Next, steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment. This process gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.

  10. CFD analysis of a solid oxide fuel cell with internal reforming: Coupled interactions of transport, heterogeneous catalysis and electrochemical processes

    NASA Astrophysics Data System (ADS)

    Janardhanan, Vinod M.; Deutschmann, Olaf

    Direct internal reforming in a solid oxide fuel cell (SOFC) results in increased overall efficiency of the system. The present study focuses on the chemical and electrochemical processes in an internally reforming anode-supported SOFC button cell running on humidified CH4 (3% H2O). The computational approach employs a detailed multi-step model for heterogeneous chemistry in the anode, a modified Butler-Volmer formalism for the electrochemistry and the Dusty Gas Model (DGM) for the porous media transport. Two-dimensional elliptic model equations are solved for a button cell configuration. The electrochemical model assumes hydrogen as the only electrochemically active species. The predicted cell performances are compared with experimental reports. The results show that the model predictions are in good agreement with experimental observations except for the open circuit potentials. Furthermore, the steam content in the anode feed stream is found to have a remarkable effect on the resulting overpotential losses and surface coverages of various species at the three-phase boundary.
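
    For reference, the unmodified Butler-Volmer relation underlying the electrochemistry module is shown below in standard notation (the paper uses a modified form, so these are not the authors' exact symbols):

```latex
i = i_0\left[\exp\!\left(\frac{\alpha_a F \eta_{\mathrm{act}}}{RT}\right)
  - \exp\!\left(-\frac{\alpha_c F \eta_{\mathrm{act}}}{RT}\right)\right]
```

    where i_0 is the exchange current density, eta_act the activation overpotential, alpha_a and alpha_c the anodic and cathodic transfer coefficients, F the Faraday constant, R the gas constant and T the temperature.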

  11. Fast Shear Compounding Using Robust Two-dimensional Shear Wave Speed Calculation and Multi-directional Filtering

    PubMed Central

    Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W.; Greenleaf, James F.; Chen, Shigao

    2014-01-01

    A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. using a robust two-dimensional (2D) shear wave speed calculation to reconstruct 2D shear elasticity maps from each filter direction; 4. compounding these 2D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view (FOV), 2D, and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. PMID:24613636
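
    Step 2, the multi-directional filter, can be caricatured as a mask in the spatial-frequency plane: keep only the wave energy whose wave vector points near one chosen direction, invert the transform, and repeat per direction. The published filter also exploits the temporal dimension of the wave data, so the snippet below is an illustration of the idea rather than the actual filter.

```python
import numpy as np

def directional_filter(field, center_deg, width_deg=30.0):
    """Keep spatial-frequency content whose wave vector lies within
    width_deg of center_deg; field is a 2D snapshot of the wave field."""
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    angle = np.degrees(np.arctan2(ky, kx))            # direction of each k-vector
    diff = (angle - center_deg + 180.0) % 360.0 - 180.0
    mask = np.abs(diff) <= width_deg / 2.0
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))
```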

  12. Motivation and value influences in the relative balance of goal-directed and habitual behaviours in obsessive-compulsive disorder.

    PubMed

    Voon, V; Baek, K; Enander, J; Worbe, Y; Morris, L S; Harrison, N A; Robbins, T W; Rück, C; Daw, N

    2015-11-03

    Our decisions are based on parallel and competing systems of goal-directed and habitual learning, systems which can be impaired in pathological behaviours. Here we focus on the influence of motivation and compare reward and loss outcomes in subjects with obsessive-compulsive disorder (OCD) on model-based goal-directed and model-free habitual behaviours using the two-step task. We further investigate the relationship with acquisition learning using a one-step probabilistic learning task. Forty-eight OCD subjects and 96 healthy volunteers were tested on the reward version, and 30 OCD subjects and 53 healthy volunteers on the loss version, of the two-step task. Thirty-six OCD subjects and 72 healthy volunteers were also tested on a one-step reversal task. OCD subjects compared with healthy volunteers were less goal-oriented (model-based) and more habitual (model-free) to reward outcomes, with a shift towards greater model-based and lower habitual choices to loss outcomes. OCD subjects also had enhanced acquisition learning to loss outcomes on the one-step task, which correlated with goal-directed learning in the two-step task. OCD subjects had greater stay behaviours or perseveration in the one-step task irrespective of outcome. Compulsion severity was correlated with habitual learning in the reward condition. Obsession severity was correlated with greater switching after loss outcomes. In healthy volunteers, we further show that greater reward magnitudes are associated with a shift towards greater goal-directed learning, further emphasizing the role of outcome salience. Our results highlight an important influence of motivation on learning processes in OCD and suggest that distinct clinical strategies based on valence may be warranted.
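
    The model-based/model-free balance in the two-step task is typically quantified by fitting a hybrid reinforcement-learning model in which a weight w mixes the two valuation systems. A stripped-down sketch of that hybrid is given below (after the standard Daw two-step framework); the parameter values, and the omission of eligibility traces and perseveration terms, are simplifications.

```python
import numpy as np

alpha, beta, w = 0.3, 4.0, 0.5     # learning rate, softmax inverse temperature, MB weight
q_mf = np.zeros(2)                 # model-free values of the two first-stage actions
q_s2 = np.zeros(2)                 # learned values of the two second-stage states
T = np.array([[0.7, 0.3],          # P(second-stage state | first-stage action)
              [0.3, 0.7]])
rng = np.random.default_rng(0)

def choose_first_stage():
    q_mb = T @ q_s2                # model-based values via the transition model
    q = w * q_mb + (1 - w) * q_mf  # the weighted mixture drives the softmax choice
    p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
    return int(rng.random() < p1)

def update(action, state2, reward):
    q_s2[state2] += alpha * (reward - q_s2[state2])  # second-stage TD update
    q_mf[action] += alpha * (reward - q_mf[action])  # model-free credit to stage one
```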

  13. Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics.

    PubMed

    Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone

    2016-10-05

    Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics.
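
    The physics behind the incremental-step idea is the Arrhenius temperature dependence of the diffusion coefficient: deconstructing a core-rim profile into isothermal steps partitions the total diffusive budget among the steps. In standard notation (not necessarily the authors' symbols):

```latex
D(T) = D_0 \exp\!\left(-\frac{E_a}{RT}\right), \qquad
\sum_{i=1}^{N} D(T_i)\,\Delta t_i \;=\; \int_0^{t_{\mathrm{tot}}} D\big(T(t)\big)\,dt
```

    Because D varies exponentially with T, assuming a single temperature for a multi-stage zoned crystal systematically biases the recovered timescales, which is the bias the model corrects.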

  14. Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics

    PubMed Central

    Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone

    2016-01-01

    Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics. PMID:27703141

  15. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
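
    The flavour of such a priori metrics can be conveyed with a generic area-weighted information-loss measure (illustrative only; it is not the paper's exact subbasin or HRU definition):

```python
import numpy as np

def aggregation_error(areas, attribute):
    """Area-weighted mean absolute deviation of a cell attribute (e.g. a
    land-cover or soil property) from the value the aggregated unit would
    carry, i.e. the information lost by lumping the cells together."""
    areas = np.asarray(areas, dtype=float)
    attribute = np.asarray(attribute, dtype=float)
    w = areas / areas.sum()
    lumped = np.sum(w * attribute)              # value used after aggregation
    return np.sum(w * np.abs(attribute - lumped))

print(aggregation_error([1, 1, 2], [0.2, 0.4, 0.9]))  # coarser lumping -> larger loss
```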

  16. Heat Fluxes and Evaporation Measurements by Multi-Function Heat Pulse Probe: a Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Sharma, V.; Ciocca, F.; Hopmans, J. W.; Kamai, T.; Lunati, I.; Parlange, M. B.

    2012-04-01

    Multi-Functional Heat Pulse Probes (MFHPP) are multi-needle probes developed in recent years that measure temperature, thermal properties such as thermal diffusivity and volumetric heat capacity (from which soil moisture is directly retrieved), and electrical conductivity (through a Wenner array). They thus allow the simultaneous measurement of coupled heat, water and solute transport in porous media. The use of a single instrument to estimate different quantities in the same volume and almost at the same time significantly reduces the need to interpolate different measurement types in space and time, increasing the ability to study the interdependencies characterizing the coupled transport processes, especially of water and heat, and water and solutes. A three-step laboratory experiment was carried out at EPFL to investigate the effectiveness and reliability of the MFHPP responses in a loamy soil from Conthey, Switzerland. In the first step, specific calibration curves of volumetric heat capacity and thermal conductivity as functions of known volumetric water content are obtained by placing the MFHPP in small samplers filled with the soil homogeneously packed at different saturation degrees. The results are compared with literature values. In the second step, the ability of the MFHPP to measure heat fluxes is tested within a homemade thermally insulated calibration box, and the results are matched with those from two self-calibrating heat flux plates (from Hukseflux) placed in the same box. In the last step, the MFHPP are used to estimate the cumulative subsurface evaporation inside a small column (30 cm height by 8 cm inner diameter), placed on a scale, filled with the same loamy soil (homogeneously packed and then saturated) and equipped with a vertical array of four MFHPP inserted close to the surface. The subsurface evaporation is calculated from the difference between the net sensible heat and the net heat storage in the volume scanned by the probes, and the values obtained are matched with the overall evaporation estimated through the scale in terms of weight loss. A numerical model able to solve the coupled heat-moisture diffusive equations is used to interpolate the measurements obtained in the second and third steps.

  17. Fast quantification of bovine milk proteins employing external cavity-quantum cascade laser spectroscopy.

    PubMed

    Schwaighofer, Andreas; Kuligowski, Julia; Quintás, Guillermo; Mayer, Helmut K; Lendl, Bernhard

    2018-06-30

    Analysis of proteins in bovine milk is usually tackled by time-consuming analytical approaches involving wet-chemical, multi-step sample clean-up procedures. The use of external cavity-quantum cascade laser (EC-QCL) based IR spectroscopy was evaluated as an alternative screening tool for direct and simultaneous quantification of individual proteins (i.e. casein and β-lactoglobulin) and total protein content in commercial bovine milk samples. Mid-IR spectra of protein standard mixtures were used for building partial least squares (PLS) regression models. A sample set comprising different milk types (pasteurized; differently processed extended shelf life, ESL; ultra-high temperature, UHT) was analysed and results were compared to reference methods. Concentration values of the QCL-IR spectroscopy approach obtained within several minutes are in good agreement with reference methods involving multiple sample preparation steps. The potential application as a fast screening method for estimating the heat load applied to liquid milk is demonstrated. Copyright © 2018 Elsevier Ltd. All rights reserved.
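
    The calibration step can be sketched with a generic PLS fit. The data below are synthetic stand-ins for measured mid-IR absorbance spectra and reference concentrations; only the modelling pattern mirrors the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 40, 300
conc = rng.uniform(0.0, 5.0, size=(n_samples, 2))    # casein, beta-lactoglobulin (g/L)
bands = rng.normal(size=(2, n_wavenumbers))          # surrogate pure-component spectra
spectra = conc @ bands + 0.01 * rng.normal(size=(n_samples, n_wavenumbers))

pls = PLSRegression(n_components=4).fit(spectra, conc)
print(pls.predict(spectra[:3]))                      # predicted protein concentrations
```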

  18. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and using digital methods to repair damaged film is now a mainstream trend. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our approach combines multiple frames to establish a simple correction model and includes three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that the method can remove fading flicker efficiently.
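
    The second step, factorizing a matrix with many missing entries, can be illustrated with generic alternating least squares; this is a stand-in for the paper's estimator, with invented names and regularization.

```python
import numpy as np

def als_lowrank(M, mask, rank=2, iters=50, reg=1e-2):
    """Fit M ~ U @ V.T using only entries where mask is True, by
    alternating ridge-regularized least-squares updates of U and V."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U, V = rng.normal(size=(m, rank)), rng.normal(size=(n, rank))
    R = reg * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                    # update row factors
            obs = mask[i]
            U[i] = np.linalg.solve(V[obs].T @ V[obs] + R, V[obs].T @ M[i, obs])
        for j in range(n):                    # update column factors
            obs = mask[:, j]
            V[j] = np.linalg.solve(U[obs].T @ U[obs] + R, U[obs].T @ M[obs, j])
    return U, V
```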

  19. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085
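
    The statistical argument can be reproduced with a quick Monte Carlo: if the active lifetime of the pigment is the sum of k exponential shutoff steps (i.e. gamma-distributed with a fixed mean), the coefficient of variation of the number of G proteins activated shrinks as k grows, in contrast to the single-step (k = 1, exponential) case. All rates below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def response_sizes(k):
    # Active lifetime = sum of k exponential steps with the total mean fixed at 1;
    # response size ~ Poisson count of G proteins activated at a constant rate.
    lifetimes = rng.gamma(shape=k, scale=1.0 / k, size=n)
    return rng.poisson(50.0 * lifetimes)

for k in (1, 2, 4, 8):
    r = response_sizes(k)
    print(k, r.mean(), r.std() / r.mean())   # CV falls as shutoff steps are added
```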

  20. Evaluating fuzzy operators of an object-based image analysis for detecting landslides and their changes

    NASA Astrophysics Data System (ADS)

    Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei

    2017-09-01

    This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from a landslide training dataset. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best, with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
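
    The fuzzy operators compared in the study act on per-feature membership values of each image object; their behaviour is easy to see on a single object (the membership values below are invented):

```python
import numpy as np

mu = np.array([0.9, 0.75, 0.6, 0.85])  # memberships from e.g. slope, texture, shape

fuzzy_and = mu.min()      # AND: most conservative; best performer in the study
fuzzy_or = mu.max()       # OR: most permissive; low accuracy in the study
fuzzy_mean = mu.mean()    # MEAN arithmetic: close runner-up to AND
fuzzy_prod = mu.prod()    # AND(*): algebraic product
print(fuzzy_and, fuzzy_or, fuzzy_mean, fuzzy_prod)
```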

  1. Validation of a multi-criteria evaluation model for animal welfare.

    PubMed

    Martín, P; Czycholl, I; Buxadé, C; Krieter, J

    2017-04-01

    The aim of this paper was to validate an alternative multi-criteria evaluation system to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. This alternative methodology aimed to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data on different farms in Schleswig-Holstein, Germany. In total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step aggregation process. Measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps. The MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used for utility function determination and the Choquet integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model was divided into two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level, and second, a sensitivity analysis of our model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using the MAUT, similar results were obtained to those obtained when applying the WQ protocol aggregation methods, both at criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs. Furthermore, this methodology could also be used as a framework in order to produce an overall assessment of welfare for other livestock species. Two main findings are obtained from the sensitivity analysis: first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level, and second, the MAUT model was not very sensitive to an improvement in or a worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
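
    The aggregation operator at the heart of the model, the discrete Choquet integral, fits in a few lines. The implementation below is generic; the capacities themselves are the quantities fitted to the WQ decision-makers' preferences, and the toy values are invented.

```python
def choquet(values, capacity):
    """Discrete Choquet integral of criterion scores with respect to a
    capacity: a monotone set function on frozensets of criteria with
    capacity[frozenset()] = 0 and capacity[all criteria] = 1."""
    order = sorted(values, key=values.get)   # criteria by increasing score
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])     # criteria scoring at least values[c]
        total += (values[c] - prev) * capacity[coalition]
        prev = values[c]
    return total

# capacity({'a','b'}) != capacity({'a'}) + capacity({'b'}) encodes interaction
# between criteria, which a plain weighted sum cannot express.
cap = {frozenset(): 0.0, frozenset({"a"}): 0.4,
       frozenset({"b"}): 0.5, frozenset({"a", "b"}): 1.0}
print(choquet({"a": 0.6, "b": 0.9}, cap))    # 0.75
```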

  2. Examining Food Risk in the Large using a Complex, Networked System-of-systems Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrosiano, John; Newkirk, Ryan; Mc Donald, Mark P

    2010-12-03

    The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and commodity flow models are instantiated, a risk consequence analysis can be performed by injecting contaminant at chosen points in the system and propagating the event through the overarching system to arrive at morbidity and mortality figures. A generic chocolate snack cake model, consisting of fluid milk, liquid eggs, and cocoa, is described as an intended proof of concept for multi-ingredient food systems. We aim for an eventual tool that can be used directly by policy makers and planners.

  3. The effects of multi-disciplinary psycho-social care on socio-economic problems in cancer patients: a cluster-randomized trial.

    PubMed

    Singer, Susanne; Roick, Julia; Meixensberger, Jürgen; Schiefke, Franziska; Briest, Susanne; Dietz, Andreas; Papsdorf, Kirsten; Mössner, Joachim; Berg, Thomas; Stolzenburg, Jens-Uwe; Niederwieser, Dietger; Keller, Annette; Kersting, Anette; Danker, Helge

    2018-06-01

    We examined whether multi-disciplinary stepped psycho-social care decreases financial problems and improves return-to-work in cancer patients. In a university hospital, wards were randomly allocated to either stepped or standard care. Stepped care comprised screening for financial problems, consultation between doctor and patient, and the provision of social service. Outcomes were financial problems at the time of discharge and return-to-work in patients < 65 years old half a year after baseline. The analysis employed mixed-effect multivariate regression modeling. Thirteen wards were randomized and 1012 patients participated (n = 570 in stepped care and n = 442 in standard care). Those who reported financial problems at baseline were less likely to have financial problems at discharge when they had received stepped care (odds ratio (OR) 0.2, 95% confidence interval (CI) 0.1, 0.7; p = 0.01). There was no evidence for an effect of stepped care on financial problems in patients without such problems at baseline (OR 1.1, CI 0.5, 2.6; p = 0.82). There were 399 patients < 65 years old who were not retired at baseline. In this group, there was no evidence for an effect of stepped care on being employed half a year after baseline (OR 0.7, CI 0.3, 2.0; p = 0.52). Trial registration: NCT01859429. Conclusions: Financial problems can be avoided more effectively with multi-disciplinary stepped psycho-social care than with standard care in patients who have such problems.

  4. A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags

    NASA Astrophysics Data System (ADS)

    Meng, S.; Xie, X.

    2015-12-01

    In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties in input data, model parameters, model structures and output observations. Data assimilation is a useful methodology to reduce uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimate of the initial soil moisture condition improves forecasting performance, and the time delay of runoff routing is another important effect on it. Moreover, observation data of hydrological variables (including ground observations and satellite observations) are becoming easily available, so the reliability of short-term flood forecasting could be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates upper-layer soil moisture observations to update the model state and generated runoff based on the ensemble Kalman filter (EnKF) method, and the second step assimilates discharge observations to update the model state and runoff within a fixed time window based on the ensemble Kalman smoother (EnKS) method. The smoothing technique is adopted to account for the runoff routing lag. Assimilating soil moisture and discharge observations in this way is expected to improve flood forecasting. To distinguish the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new data assimilation framework holds great potential in operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
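
    The first assimilation step can be sketched with a textbook stochastic EnKF analysis (generic form with uncorrelated observation errors; the paper's EnKS extension additionally smooths states backwards over the routing-lag window instead of updating only the current time):

```python
import numpy as np

def enkf_update(X, y, H, r_var, rng):
    """X: forecast ensemble (n_state x n_ens); y: observations; H:
    observation operator; r_var: observation-error variance."""
    n_obs, n_ens = len(y), X.shape[1]
    Y = H @ X                                    # ensemble in observation space
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (n_ens - 1)
    Pyy = Yp @ Yp.T / (n_ens - 1) + r_var * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    y_pert = y[:, None] + rng.normal(0.0, np.sqrt(r_var), size=(n_obs, n_ens))
    return X + K @ (y_pert - Y)                  # analysis ensemble
```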

  5. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel gaseous mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant detonation cell size determined in the calculations is in good agreement with all known experimental data.

  6. Capacity planning for electronic waste management facilities under uncertainty: multi-objective multi-time-step model development.

    PubMed

    Poonam Khanijo Ahluwalia; Nema, Arvind K

    2011-07-01

    Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with varying priorities given to cost and associated risks, such as environmental risk, health risk or the risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the trade-offs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address the multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).

  7. Application of retention modelling to the simulation of separation of organic anions in suppressed ion chromatography.

    PubMed

    Zakaria, Philip; Dicinoski, Greg W; Ng, Boon Khing; Shellie, Robert A; Hanna-Brown, Melissa; Haddad, Paul R

    2009-09-18

    The ion-exchange separation of organic anions of varying molecular mass has been demonstrated using ion chromatography with isocratic, gradient and multi-step eluent profiles on commercially available columns with UV detection. A retention model derived previously for inorganic ions and based solely on electrostatic interactions between the analytes and the stationary phase was applied. This model was found to accurately describe the observed elution of all the anions under isocratic, gradient and multi-step eluent conditions. Hydrophobic interactions, although likely to be present to varying degrees, did not limit the applicability of the ion-exchange retention model. Various instrumental configurations were investigated to overcome problems associated with the use of organic modifiers in the eluent which caused compatibility issues with the electrolytically derived, and subsequently suppressed, eluent. The preferred configuration allowed the organic modifier stream to bypass the eluent generator, followed by subsequent mixing before entering the injection valve and column. Accurate elution prediction was achieved even when using 5-step eluent profiles with errors in retention time generally being less than 1% relative standard deviation (RSD) and all being less than 5% RSD. Peak widths for linear gradient separations were also modelled and showed good agreement with experimentally determined values.
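
    The electrostatic retention model referred to is commonly written in the linear log-log form of ion-exchange theory (standard notation; the paper's exact parameterization may differ):

```latex
\log k \;=\; \log k_0 \;-\; \frac{x}{y}\,\log\,[E^{y-}]
```

    where k is the retention factor of an analyte of charge x, [E^{y-}] the concentration of the competing eluent ion of charge y, and k_0 lumps the ion-exchange capacity and phase ratio of the column. Gradient and multi-step profiles are then predicted by integrating analyte migration over the programmed eluent concentration [E^{y-}](t).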

  8. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of the AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, the model performance was evaluated using various performance criteria. From the results, it is found that the performance of the LGP models is superior to the ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better because, apart from reducing the noise in the data, they benefited from the chosen techniques and their training approach, appropriate selection of network architecture, required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to large variations and a smaller number of observed values.
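
    With any of the fitted one-step models (ANN, ANFIS or LGP), multi-time-step-ahead prediction is commonly obtained recursively by feeding each prediction back in as a lagged input. A model-agnostic sketch (the exact input structures used in the study differ):

```python
import numpy as np

def recursive_forecast(model, history, horizon, n_lags):
    """Predict `horizon` steps ahead with a fitted one-step regressor by
    recycling each prediction as the newest lagged input."""
    window = list(history[-n_lags:])
    preds = []
    for _ in range(horizon):
        x = np.array(window[-n_lags:]).reshape(1, -1)
        yhat = float(model.predict(x)[0])
        preds.append(yhat)
        window.append(yhat)        # predicted inflow becomes a future lag
    return preds
```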

  9. Individualized In-Service Teacher Education. (Project IN-STEP). Evaluation Report, Phase II.

    ERIC Educational Resources Information Center

    Thurber, John C.

    Phase 2 of Project IN-STEP was conducted to revise, refine, and conduct further field testing of a new inservice teacher education model. The method developed (in Phase 1--see ED 003 905 for report) is an individualized, multi-media approach. Revision activities, based on feedback provided for Phase 1, include the remaking of six videotape…

  10. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane, etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, when the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used for calculating the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  11. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-07

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  12. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Mehranian, A.; Zaidi, H.

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  13. A novel framework for change detection in bi-temporal polarimetric SAR images

    NASA Astrophysics Data System (ADS)

    Pirrone, Davide; Bovolo, Francesca; Bruzzone, Lorenzo

    2016-10-01

    Recent years have seen a relevant increase in the availability of polarimetric Synthetic Aperture Radar (SAR) data, thanks to satellite sensors like Sentinel-1 or ALOS-2 PALSAR-2. The augmented information lying in the additional polarimetric channels represents an opportunity to better discriminate different classes of changes in change detection (CD) applications. This work proposes a framework for CD in multi-temporal multi-polarization SAR data. The framework includes both a tool for effective visual representation of the change information and a method for extracting the multiple-change information. Both components are designed to effectively handle the multi-dimensionality of polarimetric data. In the novel representation, multi-temporal intensity SAR data are employed to compute a polarimetric log-ratio. The multi-temporal information of the polarimetric log-ratio image is represented in a multi-dimensional feature space, where changes are highlighted in terms of magnitude and direction. This representation is employed to design a novel unsupervised multi-class CD approach, which considers a sequential two-step analysis of the magnitude and the direction information for separating non-changed and changed samples. The proposed approach has been validated on a pair of Sentinel-1 images acquired before and after the flood in Tamil Nadu in 2015. Preliminary results demonstrate that the representation tool is effective and that the use of polarimetric SAR data is promising in multi-class change detection applications.
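
    The representation can be sketched directly: per-channel log-ratios form a change vector at every pixel, and the magnitude and direction of that vector are the two quantities the sequential two-step analysis operates on. A minimal two-channel version:

```python
import numpy as np

def polarimetric_change_features(i1, i2, eps=1e-10):
    """i1, i2: co-registered intensity images (channels x rows x cols)
    from the two dates; returns change magnitude and direction."""
    lr = np.log((i2 + eps) / (i1 + eps))         # one log-ratio per polarization
    magnitude = np.sqrt(np.sum(lr**2, axis=0))   # how much change
    direction = np.arctan2(lr[1], lr[0])         # what kind of change (2 channels)
    return magnitude, direction
```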

  14. Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett. A,; Martin, Bryan J.

    2004-01-01

    Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous-state systems is presented which applies to these systems, and is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
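
    The serial multi-step/multi-rate idea reduces to sub-cycling the fast partition inside each slow macro-step while the slow states are held (zero-order hold). The sketch below uses explicit Euler for clarity; it is not the integrator of the paper, whose parallel variant advances both partitions concurrently using extrapolated coupling values.

```python
def multi_rate_step(x_slow, x_fast, f_slow, f_fast, H, m):
    """Advance a two-time-scale system by one macro-step H, sub-cycling
    the fast states with m micro-steps h = H / m."""
    h = H / m
    for _ in range(m):
        x_fast = x_fast + h * f_fast(x_fast, x_slow)  # fast dynamics, small steps
    x_slow = x_slow + H * f_slow(x_fast, x_slow)      # slow dynamics, one big step
    return x_slow, x_fast
```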

  15. The Wageningen Lowland Runoff Simulator (WALRUS): a Novel Open Source Rainfall-Runoff Model for Areas with Shallow Groundwater

    NASA Astrophysics Data System (ADS)

    Brauer, C.; Teuling, R.; Torfs, P.; Uijlenhoet, R.

    2014-12-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between complex, spatially distributed models which are often used in lowland regions and simple, parametric models which have mostly been developed for mountainous catchments. This parametric rainfall-runoff model can be used all over the world, both in freely draining lowland catchments and polders with controlled water levels. Here, we present the model implementation and our recent experience in training students and practitioners to use the model. WALRUS has several advantages that facilitate practical application. Firstly, WALRUS is computationally efficient, which allows for operational forecasting and uncertainty estimation by running ensembles. Secondly, the code is set up such that it can be used by both practitioners and researchers. For direct use by practitioners, defaults are implemented for relations between model variables and for the computation of initial conditions based on discharge only, leaving only four parameters which require calibration. For research purposes, the defaults can easily be changed. Finally, an approach for flexible time steps increases numerical stability and makes model parameter values independent of time step size, which facilitates use of the model with the same parameter set for multi-year water balance studies as well as detailed analyses of individual flood peaks. The open source model code is currently implemented in R and compiled into a package. This package will be made available through the R CRAN server. A small massive open online course (MOOC) is being developed to give students, researchers and practitioners a step-by-step WALRUS training. This course contains explanations about model elements and its advantages and limitations, as well as hands-on exercises to learn how to use WALRUS. All code, course, literature and examples will be collected on a dedicated website, which can be found via www.wageningenur.nl/hwm. References: C.C. Brauer et al. (2014a), Geosci. Model Dev. Discuss., 7, 1357-1411; C.C. Brauer et al. (2014b), Hydrol. Earth Syst. Sci. Discuss., 11, 2091-2148.

  16. Multi-Decadal Variability in the Bering Sea: A Synthesis of Model Results and Observations from 1948 to the Present

    DTIC Science & Technology

    2013-12-01

    stated that the development and use of high-resolution Arctic climate and systems models are important stepping stones for dedicated studies of... W., J. L. Clement Kinney, D. C. Marble, and J. Jakacki, 2008: Towards eddy resolving models of the Arctic Ocean: Ocean Modeling in an Eddying

  17. Long range personalized cancer treatment strategies incorporating evolutionary dynamics.

    PubMed

    Yeang, Chen-Hsiang; Beckman, Robert A

    2016-10-22

    Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
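
    The contrast between single-step and multi-step optimization can be illustrated on a toy two-subclone growth model with exhaustive lookahead. All growth and kill numbers below are invented, and ALTO's 40-step horizon clearly requires smarter search than the enumeration used here.

```python
import itertools

GROWTH = 1.3
KILL = {("S", "drug1"): 0.1, ("S", "drug2"): 0.5,   # surviving fraction per step
        ("R", "drug1"): 0.9, ("R", "drug2"): 0.2}   # R is resistant to drug1

def step(state, drug):
    return {c: n * GROWTH * KILL[(c, drug)] for c, n in state.items()}

def best_plan(state, horizon):
    """Exhaustive lookahead `horizon` moves ahead; horizon=1 is the greedy
    single-step strategy, larger horizons 'think ahead'."""
    best = None
    for plan in itertools.product(["drug1", "drug2"], repeat=horizon):
        s = dict(state)
        for d in plan:
            s = step(s, d)
        burden = sum(s.values())
        if best is None or burden < best[1]:
            best = (plan, burden)
    return best

state = {"S": 1e9, "R": 1e3}
print(best_plan(state, 1))   # greedy recommendation for the next block
print(best_plan(state, 5))   # multi-step plan may sequence the drugs differently
```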

  18. Synergistically combining Optical and Thermal radiative transfer models within the EO-LDAS data assimilation framework to estimate land surface and component temperatures from MODIS and Sentinel-3

    NASA Astrophysics Data System (ADS)

    Timmermans, J.; Gomez-Dans, J. L.; Verhoef, W.; Tol, C. V. D.; Lewis, P.

    2017-12-01

    Evapotranspiration (ET) cannot be directly measured from space. Instead, its estimation relies on modelling approaches that use several land surface parameters (LSPs), such as LAI and LST, in conjunction with meteorological parameters. Such a modelling approach presents two caveats: the validity of the model, and the consistency between the different input parameters. Often this second point is not considered, ignoring that without good inputs no decent output can be provided. When LSP dynamics contradict each other, the output of the model cannot be representative of reality. At present, however, the LSPs used in large-scale ET estimations originate from different single-sensor retrieval approaches and even from different satellite sensors. In response, the Earth Observation Land Data Assimilation System (EO-LDAS) was developed. EO-LDAS uses a multi-sensor approach to consistently couple different satellite observations/types to radiative transfer models (RTMs). It is therefore capable of synergistically estimating a variety of LSPs. Considering that ET is most sensitive to the temperatures of the land surface (components), the goal of this research is to expand EO-LDAS to the thermal domain. This research focuses not only on estimating LST, but also on retrieving (soil/vegetation, sunlit/shaded) component temperatures, to facilitate dual/quad-source ET models. To achieve this, the Soil Canopy Observations of Photosynthesis and Energy (SCOPE) model was integrated into EO-LDAS. SCOPE couples key parameters to key processes, such as photosynthesis, ET and optical/thermal RT. In this research SCOPE was also coupled to the MODTRAN RTM, in order to estimate BOA component temperatures directly from TOA observations. This paper presents the main modelling steps of integrating these complex models into an operational platform. In addition, it highlights the actual retrieval using different satellite observations, such as MODIS and Sentinel-3, and meteorological variables from ERA-Interim.

  19. A novel design solution to the fraenal notch of maxillary dentures.

    PubMed

    White, J A P; Bond, I P; Jagger, D C

    2013-09-01

    This study investigates a novel design feature for the fraenal notch of maxillary dentures, using computational and experimental methods, and shows that its use could significantly increase the longevity of the prosthesis. A two-step process can be used to create the design feature with current denture base materials, but would be highly dependent on the individual skill of the dental technician. Therefore, an alternative form of manufacture, multi-material additive layer manufacture (or '3D printing'), has been proposed as a future method for the direct production of complete dentures with multi-material design features.

  20. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
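
    As a rough sketch of the trade-off problem described above, the snippet below identifies Pareto-optimal (non-dominated) models under two conflicting error metrics and assigns them inverse-error weights. The error values and the final weighting rule are purely illustrative, not the study's actual multi-objective optimization.

    ```python
    import numpy as np

    # Hypothetical per-model errors on two conflicting metrics (rows = models).
    errors = np.array([
        [0.8, 2.1],   # model A: low bias error, high pattern error
        [1.5, 0.9],   # model B: the opposite trade-off
        [1.0, 1.0],   # model C: balanced
        [2.0, 2.2],   # model D: dominated by every other model
    ])

    def pareto_front(errs):
        """Indices of non-dominated models (lower error is better)."""
        keep = []
        for i in range(len(errs)):
            dominated = any(
                np.all(errs[j] <= errs[i]) and np.any(errs[j] < errs[i])
                for j in range(len(errs)) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_front(errors)
    weights = np.zeros(len(errors))
    weights[front] = 1.0 / errors[front].sum(axis=1)  # one simple compromise
    weights /= weights.sum()                          # dominated models get 0
    print("Pareto-optimal models:", front, "weights:", np.round(weights, 3))
    ```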

  1. Multi-electrolyte-step anodic aluminum oxide method for the fabrication of self-organized nanochannel arrays

    PubMed Central

    2012-01-01

    Nanochannel arrays were fabricated by the self-organized multi-electrolyte-step anodic aluminum oxide [AAO] method in this study. In the multi-electrolyte-step AAO method, anodization was first performed in a phosphoric acid electrolyte at a high applied voltage. The phosphoric acid was then replaced by an oxalic acid electrolyte, and a low voltage was applied. This method produced self-organized nanochannel arrays with good regularity and circularity, with less power loss and processing time than the multi-step AAO method. PMID:22333268

  2. Continuous-Time Random Walk with multi-step memory: an application to market dynamics

    NASA Astrophysics Data System (ADS)

    Gubiec, Tomasz; Kutner, Ryszard

    2017-11-01

    An extended version of the Continuous-Time Random Walk (CTRW) model with memory is herein developed. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, containing some delay, characteristic of any double-auction market. Our model proved to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
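
    A minimal simulation sketch of a CTRW whose successive jumps are anticorrelated, as a toy stand-in for the bid-ask bounce: the memory here spans only one jump and all parameter values are illustrative, whereas the paper treats dependence across an arbitrary number of jumps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ctrw_with_memory(n_jumps, eps=0.7, rate=1.0):
        """CTRW price path: i.i.d. exponential waiting times, but each jump
        reverses the previous one with probability (1 + eps)/2."""
        t, x = [0.0], [0.0]
        jump = rng.choice([-1.0, 1.0])
        for _ in range(n_jumps):
            t.append(t[-1] + rng.exponential(1.0 / rate))
            if rng.random() < (1.0 + eps) / 2.0:
                jump = -jump               # anticorrelated successive jumps
            x.append(x[-1] + jump)
        return np.array(t), np.array(x)

    t, x = ctrw_with_memory(10_000)
    dx = np.diff(x)
    # Lag-1 jump autocorrelation should come out close to -eps (~ -0.7 here).
    print("jump autocorrelation:", np.corrcoef(dx[:-1], dx[1:])[0, 1])
    ```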

  3. The galactic contribution to IceCube's astrophysical neutrino flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denton, Peter B.; Marfatia, Danny; Weiler, Thomas J., E-mail: peterbd1@gmail.com, E-mail: dmarf8@hawaii.edu, E-mail: tom.weiler@vanderbilt.edu

    2017-08-01

    High energy neutrinos have been detected by IceCube, but their origin remains a mystery. Determining the sources of this flux is a crucial first step towards multi-messenger studies. In this work we systematically compare two classes of sources with the data: galactic and extragalactic. We assume that the neutrino sources are distributed according to a class of Galactic models. We build a likelihood function on an event-by-event basis including energy, event topology, absorption, and direction information. We present the probability that each high energy event with deposited energy E_dep > 60 TeV in the HESE sample is Galactic, extragalactic, or background. For the Galactic models considered, the Galactic fraction of the astrophysical flux has a best-fit value of 1.3% and is <9.5% at 90% CL. A zero Galactic flux is allowed at <1σ.

  4. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
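
    A white noise test of one-step-ahead prediction errors, of the general kind described above, can be sketched with a portmanteau statistic. The Ljung-Box implementation and the toy lag-2 process below are illustrative, not the study's exact validation procedure.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def ljung_box(resid, max_lag=10):
        """Ljung-Box portmanteau test: are the residuals white noise?"""
        n = len(resid)
        r = resid - resid.mean()
        denom = np.sum(r ** 2)
        acf = np.array([np.sum(r[k:] * r[:-k]) / denom
                        for k in range(1, max_lag + 1)])
        q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, max_lag + 1)))
        return q, chi2.sf(q, df=max_lag)

    rng = np.random.default_rng(1)
    white = rng.standard_normal(500)
    lag2 = np.zeros(500)                 # residuals hiding a lag-2 dependency
    for t in range(2, 500):
        lag2[t] = 0.5 * lag2[t - 2] + rng.standard_normal()

    for name, res in [("white", white), ("lag-2 structure", lag2)]:
        q, p = ljung_box(res)
        print(f"{name}: Q={q:.1f}, p={p:.3f}")   # small p => revise the map
    ```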

  5. Feasibility study of multi-pixel retrieval of optical thickness and droplet effective radius of inhomogeneous clouds using deep learning

    NASA Astrophysics Data System (ADS)

    Okamura, Rintaro; Iwabuchi, Hironobu; Schmidt, K. Sebastian

    2017-12-01

    Three-dimensional (3-D) radiative-transfer effects are a major source of retrieval errors in satellite-based optical remote sensing of clouds. The challenge is that 3-D effects manifest themselves across multiple satellite pixels, which traditional single-pixel approaches cannot capture. In this study, we present two multi-pixel retrieval approaches based on deep learning, a technique that is becoming increasingly successful for complex problems in engineering and other areas. Specifically, we use deep neural networks (DNNs) to obtain multi-pixel estimates of cloud optical thickness and column-mean cloud droplet effective radius from multispectral, multi-pixel radiances. The first DNN method corrects traditional bispectral retrievals based on the plane-parallel homogeneous cloud assumption using the reflectances at the same two wavelengths. The other DNN method uses so-called convolutional layers and retrieves cloud properties directly from the reflectances at four wavelengths. The DNN methods are trained and tested on cloud fields from large-eddy simulations used as input to a 3-D radiative-transfer model to simulate upward radiances. The second DNN-based retrieval, sidestepping the bispectral retrieval step through convolutional layers, is shown to be more accurate. It reduces 3-D radiative-transfer effects that would otherwise affect the radiance values and estimates cloud properties robustly even for optically thick clouds.
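
    A minimal sketch of the second, convolutional approach: a small network mapping a four-channel, multi-pixel radiance patch to per-pixel estimates of two cloud properties, so each output pixel sees its neighbours. The architecture, layer sizes, and use of PyTorch are assumptions for illustration, not the authors' trained DNN.

    ```python
    import torch
    import torch.nn as nn

    class MultiPixelRetrievalNet(nn.Module):
        """Toy convolutional retrieval: 4 spectral channels in, 2 cloud
        properties (optical thickness, effective radius) out, per pixel."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=3, padding=1),  # neighbourhood
                nn.ReLU(),                                   # context for
                nn.Conv2d(32, 32, kernel_size=3, padding=1), # 3-D effects
                nn.ReLU(),
                nn.Conv2d(32, 2, kernel_size=1),             # per-pixel output
            )

        def forward(self, radiances):        # (batch, 4, H, W)
            return self.net(radiances)       # (batch, 2, H, W)

    model = MultiPixelRetrievalNet()
    fake_radiances = torch.rand(1, 4, 64, 64)
    print(model(fake_radiances).shape)       # torch.Size([1, 2, 64, 64])
    ```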

  6. A piezoelectric six-DOF vibration energy harvester based on parallel mechanism: dynamic modeling, simulation, and experiment

    NASA Astrophysics Data System (ADS)

    Yuan, G.; Wang, D. H.

    2017-03-01

    Multi-directional and multi-degree-of-freedom (multi-DOF) vibration energy harvesting are attracting more and more research interest in recent years. In this paper, the principle of a piezoelectric six-DOF vibration energy harvester based on parallel mechanism is proposed to convert the energy of the six-DOF vibration to single-DOF vibrations of the limbs on the energy harvester and output voltages. The dynamic model of the piezoelectric six-DOF vibration energy harvester is established to estimate the vibrations of the limbs. On this basis, a Stewart-type piezoelectric six-DOF vibration energy harvester is developed and explored. In order to validate the established dynamic model and the analysis results, the simulation model of the Stewart-type piezoelectric six-DOF vibration energy harvester is built and tested with different vibration excitations by SimMechanics, and some preliminary experiments are carried out. The results show that the vibration of the limbs on the piezoelectric six-DOF vibration energy harvester can be estimated by the established dynamic model. The developed Stewart-type piezoelectric six-DOF vibration energy harvester can harvest the energy of multi-directional linear vibration and multi-axis rotating vibration with resonance frequencies of 17 Hz, 25 Hz, and 47 Hz. Moreover, the resonance frequencies of the developed piezoelectric six-DOF vibration energy harvester are not affected by the direction changing of the vibration excitation.

  7. Technical note: 3-hourly temporal downscaling of monthly global terrestrial biosphere model net ecosystem exchange

    DOE PAGES

    Fisher, Joshua B.; Sikka, Munish; Huntzinger, Deborah N.; ...

    2016-07-29

    Here, the land surface provides a boundary condition to atmospheric forward and flux inversion models. These models require prior estimates of CO2 fluxes at relatively high temporal resolutions (e.g., 3-hourly) because of the high frequency of atmospheric mixing and wind heterogeneity. However, land surface model CO2 fluxes are often provided at monthly time steps, typically because the land surface modeling community focuses more on time steps associated with plant phenology (e.g., seasonal) than on sub-daily phenomena. Here, we describe a new dataset created from 15 global land surface models and 4 ensemble products in the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), temporally downscaled from monthly to 3-hourly output. We provide 3-hourly output for each individual model over 7 years (2004–2010), as well as an ensemble mean, a weighted ensemble mean, and the multi-model standard deviation. Output is provided in three different spatial resolutions for user preferences: 0.5° × 0.5°, 2.0° × 2.5°, and 4.0° × 5.0° (latitude × longitude).
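
    A toy sketch of one way such temporal downscaling can work: a monthly flux is redistributed across 3-hourly steps using a sub-monthly proxy while conserving the monthly total. The radiation proxy, flat respiration profile, and all numbers below are invented for illustration and are not MsTMIP's actual downscaling algorithm.

    ```python
    import numpy as np

    hours = np.arange(0, 24 * 30, 3)   # 3-hourly steps over a 30-day month
    # Toy solar cycle: daylight between 06:00 and 18:00, zero at night.
    sw = np.maximum(0.0, np.sin((hours % 24 - 6) / 12 * np.pi))

    monthly_gpp = -150.0    # uptake, gC/m2/month (illustrative)
    monthly_resp = 100.0    # release, gC/m2/month (illustrative)

    gpp_3h = monthly_gpp * sw / sw.sum()          # radiation-weighted split
    resp_3h = np.full(hours.shape, monthly_resp / len(hours))  # flat split
    nee_3h = gpp_3h + resp_3h

    # The downscaling must preserve the monthly integral exactly.
    print("monthly NEE preserved:",
          np.isclose(nee_3h.sum(), monthly_gpp + monthly_resp))
    ```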

  8. Mechanical and Metallurgical Evolution of Stainless Steel 321 in a Multi-step Forming Process

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Bridier, F.; Gholipour, J.; Jahazi, M.; Wanjara, P.; Bocher, P.; Savoie, J.

    2016-04-01

    This paper examines the metallurgical evolution of AISI Stainless Steel 321 (SS 321) during multi-step forming, a process that involves cycles of deformation with intermediate heat treatment steps. The multi-step forming process was simulated by implementing interrupted uniaxial tensile testing experiments. Evolution of the mechanical properties as well as the microstructural features, such as twins and textures of the austenite and martensite phases, was studied as a function of the multi-step forming process. The characteristics of the Strain-Induced Martensite (SIM) were also documented for each deformation step and intermediate stress relief heat treatment. The results indicated that the intermediate heat treatments considerably increased the formability of SS 321. Texture analysis showed that the effect of the intermediate heat treatment on the austenite was minor and led to partial recrystallization, while deformation was observed to reinforce the crystallographic texture of austenite. For the SIM, an Olson-Cohen-type equation was identified to analytically predict its formation during the multi-step forming process. The generated SIM was textured and weakened with increasing deformation.
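
    The Olson-Cohen form referred to above, f = 1 − exp(−β[1 − exp(−α·ε)]^n), is straightforward to evaluate; the parameter values in this sketch are generic illustrative choices, not the ones fitted for SS 321 in the paper.

    ```python
    import numpy as np

    def olson_cohen(strain, alpha=4.0, beta=2.0, n=4.5):
        """Olson-Cohen strain-induced martensite volume fraction:
        f = 1 - exp(-beta * (1 - exp(-alpha * eps))**n)."""
        return 1.0 - np.exp(-beta * (1.0 - np.exp(-alpha * strain)) ** n)

    for eps in (0.1, 0.2, 0.3, 0.4):
        print(f"true strain = {eps:.1f} -> SIM fraction = {olson_cohen(eps):.3f}")
    ```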

  9. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  10. Research Advances on Radiation Transfer Modeling and Inversion for Multi-Scale Land Surface Remote Sensing

    NASA Astrophysics Data System (ADS)

    Liu, Q.

    2011-09-01

    At first, research advances on radiation transfer modeling of multi-scale remote sensing data are presented: after a general overview of remote sensing radiation transfer modeling, several recent research advances are presented, including the leaf spectrum model (dPROSPECT), vegetation canopy BRDF models, directional thermal infrared emission models (TRGM, SLEC), radiation models for rugged mountain areas, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking the land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is designed, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in the visible, near infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, ALBEDO, LAI, soil moisture and ET, etc., at 1 km2 for remote sensing product validation.

  11. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
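
    A schematic of the split-step idea: each time step alternates a deterministic rate-equation update with a stochastic Monte Carlo collision update. The toy level structure, pump/decay rates, and collision rule below are invented for illustration and do not reproduce the paper's extended multi-rate-equation model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def deterministic_step(n_k, dt, pump=5.0, decay=0.5):
        """Toy rate-equation half-step: photo-ionization feeds the lowest
        level; electron-phonon coupling drains every level."""
        n_k = n_k * np.exp(-decay * dt)
        n_k[0] += pump * dt
        return n_k

    def stochastic_step(n_k, dt, coll_rate=2.0):
        """Toy Monte Carlo half-step: sampled electron pairs exchange one
        energy level, relaxing the distribution toward equilibrium."""
        n_pairs = rng.poisson(coll_rate * n_k.sum() * dt)
        for _ in range(n_pairs):
            p = n_k / n_k.sum()
            i, j = rng.choice(len(n_k), size=2, p=p)
            if i > 0 and j < len(n_k) - 1:   # one electron down, one up
                n_k[i] -= 1e-3; n_k[i - 1] += 1e-3
                n_k[j] -= 1e-3; n_k[j + 1] += 1e-3
        return n_k

    n_k = np.ones(8)              # occupations of 8 kinetic-energy levels
    for _ in range(100):          # split-step loop: deterministic, then MC
        n_k = deterministic_step(n_k, dt=0.01)
        n_k = stochastic_step(n_k, dt=0.01)
    print(np.round(n_k, 3))
    ```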

  12. L1 Adaptive Control Augmentation System with Application to the X-29 Lateral/Directional Dynamics: A Multi-Input Multi-Output Approach

    NASA Technical Reports Server (NTRS)

    Griffin, Brian Joseph; Burken, John J.; Xargay, Enric

    2010-01-01

    This paper presents an L(sub 1) adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law is adopted and extended for applicability to multi-input multi-output systems, explicitly compensating for dynamic cross-coupling. In addition, explicit use of high-fidelity actuator models is added to the L(sub 1) architecture to reduce uncertainties in the system. The L(sub 1) multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and results are evaluated against a similar single-input single-output design approach.

  13. A 3D numerical study of LO2/GH2 supercritical combustion in the ONERA-Mascotte Test-rig configuration

    NASA Astrophysics Data System (ADS)

    Benmansour, Abdelkrim; Liazid, Abdelkrim; Logerais, Pierre-Olivier; Durastanti, Jean-Félix

    2016-02-01

    Cryogenic propellants LOx/H2 are used at very high pressure in rocket engine combustion. The description of the combustion process in such applications is very complex, due essentially to the supercritical regime: the ideal gas law becomes invalid. In order to capture the average characteristics of this combustion process, numerical computations are performed using a model based on a one-phase, multi-component approach. Such work requires fluid properties and a correct definition of the mixture behavior, generally described by cubic equations of state with appropriate thermodynamic relations validated against NIST data. In this study we consider an alternative way to capture real-gas effects by testing the volume-weighted mixing law together with component transport properties fitted directly from the NIST library data, including the supercritical regime range. The numerical simulations are carried out using a 3D RANS approach associated with two tested turbulence models, the standard k-Epsilon model and the realizable k-Epsilon model. The combustion model is also associated with two chemical reaction mechanisms: the first is a one-step generic chemical reaction and the second is a two-step chemical reaction. The obtained results, such as temperature profiles, recirculation zones, visible flame lengths and distributions of OH species, are discussed.

  14. Towards a first implementation of the WLIMES approach in living system studies advancing the diagnostics and therapy in augmented personalized medicine.

    PubMed

    Simeonov, Plamen L

    2017-12-01

    The goal of this paper is to advance an extensible theory of living systems using an approach to biomathematics and biocomputation that suitably addresses self-organized, self-referential and anticipatory systems with multi-temporal multi-agents. Our first step is to provide foundations for modelling of emergent and evolving dynamic multi-level organic complexes and their sustentative processes in artificial and natural life systems. Main applications are in life sciences, medicine, ecology and astrobiology, as well as robotics, industrial automation, man-machine interface and creative design. Since 2011 over 100 scientists from a number of disciplines have been exploring a substantial set of theoretical frameworks for a comprehensive theory of life known as Integral Biomathics. That effort identified the need for a robust core model of organisms as dynamic wholes, using advanced and adequately computable mathematics. The work described here for that core combines the advantages of a situation and context aware multivalent computational logic for active self-organizing networks, Wandering Logic Intelligence (WLI), and a multi-scale dynamic category theory, Memory Evolutive Systems (MES), hence WLIMES. This is presented to the modeller via a formal augmented reality language as a first step towards practical modelling and simulation of multi-level living systems. Initial work focuses on the design and implementation of this visual language and calculus (VLC) and its graphical user interface. The results will be integrated within the current methodology and practices of theoretical biology and (personalized) medicine to deepen and to enhance the holistic understanding of life. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Fast shear compounding using robust 2-D shear wave speed calculation and multi-directional filtering.

    PubMed

    Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W; Greenleaf, James F; Chen, Shigao

    2014-06-01

    A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. Applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. Decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. Using a robust 2-D shear wave speed calculation to reconstruct 2-D shear elasticity maps from each filter direction; and 4. Compounding these 2-D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view, 2-D and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
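
    As a rough illustration of step 4, the snippet below compounds several directional 2-D speed maps into a single map, weighting each pixel by the inverse of its local variance so that noisier directions contribute less. The phantom, noise levels, and weighting rule are illustrative, not the paper's exact reconstruction.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical per-direction shear-wave-speed maps (m/s) recovered after
    # multi-directional filtering; each carries its own noise and artifacts.
    true_map = np.full((64, 64), 2.0)
    true_map[20:40, 20:40] = 4.0        # stiff inclusion
    maps = [true_map + rng.normal(0, 0.3, true_map.shape) for _ in range(3)]

    stack = np.stack(maps)
    # Local variance in a 5x5 window, per direction.
    local_var = np.stack([
        np.var(np.lib.stride_tricks.sliding_window_view(
            np.pad(m, 2, mode="edge"), (5, 5)), axis=(-2, -1))
        for m in maps])
    weights = 1.0 / (local_var + 1e-6)
    compounded = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    print("inclusion speed:", compounded[25:35, 25:35].mean().round(2), "m/s")
    ```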

  16. A Quantitative Tunneling/Desorption Model for the Exchange Current at the Porous Electrode/Beta-Alumina/Alkali Metal Gas Three-Phase Zone at 700-1300 K

    NASA Technical Reports Server (NTRS)

    Williams, R. M.; Ryan, M. A.; Saipetch, C.; LeDuc, H. G.

    1996-01-01

    The exchange current observed at porous metal electrodes on sodium or potassium beta-alumina solid electrolytes in alkali metal vapor is quantitatively modeled as a multi-step process, with good agreement with experimental results.

  17. Quantification of soil water retention parameters using multi-section TDR-waveform analysis

    NASA Astrophysics Data System (ADS)

    Baviskar, S. M.; Heimovaara, T. J.

    2017-06-01

    Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining water content in soil samples. In this study, we present an approach to estimate the water retention parameters of a sample which is initially saturated and then subjected to an incremental decrease in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at daily intervals, at the different assumed hydrostatic conditions. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained using volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave-propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weight of water inside the sample estimated at the first and final boundary heads of the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for tall samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.

  18. Optimization of airport security lanes

    NASA Astrophysics Data System (ADS)

    Chen, Lin

    2018-05-01

    The current airport security management system is widely implemented around the world to ensure the safety of passengers, but it might not be an optimal one. This paper aims to seek a better security system, which can maximize security while minimizing inconvenience to passengers. Firstly, we apply a Petri net model to analyze the steps where the main bottlenecks lie. Based on average tokens and time transitions, the most time-consuming steps of the security process can be found, including inspection of passengers' identification and documents, preparation of belongings to be scanned, and retrieval of belongings afterwards. Then, we develop a queuing model to identify the factors affecting those time-consuming steps. As for future improvement, effective measures include converting the current system to a single-queue, multi-server arrangement, intelligently predicting the number of security checkpoints that should be opened, and building up green biological convenience lanes. Furthermore, to test the theoretical results, we use data to simulate the model, and the simulation results are consistent with those obtained through modeling. Finally, we apply our queuing model in a multi-cultural setting. The result suggests that, by quantifying and modifying the variance in wait time, the model can be applied to individuals with various customs and habits. Generally speaking, our paper considers multiple affecting factors, employs several models and performs extensive calculations, which makes it practical and reliable for real-world use.
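
    A single-queue, multi-server lane of the kind proposed above can be sized with the standard M/M/c (Erlang C) waiting-time formula, sketched below. The arrival and service rates are invented for illustration and are not data from the paper.

    ```python
    import math

    def mmc_wait(lam, mu, c):
        """Mean wait in queue for an M/M/c system (Erlang C formula):
        lam = arrival rate, mu = service rate per lane, c = open lanes."""
        rho = lam / (c * mu)
        assert rho < 1, "queue is unstable: open more lanes"
        a = lam / mu
        p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                    + a**c / (math.factorial(c) * (1 - rho)))
        erlang_c = a**c / (math.factorial(c) * (1 - rho)) * p0
        return erlang_c / (c * mu - lam)     # expected minutes in queue

    # Illustrative numbers: 5 passengers/min arriving, 1.2/min per lane.
    for lanes in (5, 6, 7):
        print(lanes, "lanes ->", round(mmc_wait(5.0, 1.2, lanes), 2),
              "min average wait")
    ```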

  19. Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Reid, Terry V.

    2016-01-01

    One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of interest was to determine if simple geometric modifications could be made to the ALIP components with the goal of increasing the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source functions in the momentum equations within the Navier-Stokes equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.

  20. Abundance and composition of indigenous bacterial communities in a multi-step biofiltration-based drinking water treatment plant.

    PubMed

    Lautenschlager, Karin; Hwang, Chiachi; Ling, Fangqiong; Liu, Wen-Tso; Boon, Nico; Köster, Oliver; Egli, Thomas; Hammes, Frederik

    2014-10-01

    Indigenous bacterial communities are essential for biofiltration processes in drinking water treatment systems. In this study, we examined the microbial community composition and abundance of three different biofilter types (rapid sand, granular activated carbon, and slow sand filters) and their respective effluents in a full-scale, multi-step treatment plant (Zürich, CH). Detailed analysis of organic carbon degradation underpinned biodegradation as the primary function of the biofilter biomass. The biomass was present in concentrations ranging between 2-5 × 10(15) cells/m(3) in all filters but was phylogenetically, enzymatically and metabolically diverse. Based on 16S rRNA gene-based 454 pyrosequencing analysis for microbial community composition, similar microbial taxa (predominantly Proteobacteria, Planctomycetes, Acidobacteria, Bacteriodetes, Nitrospira and Chloroflexi) were present in all biofilters and in their respective effluents, but the ratio of microbial taxa was different in each filter type. This change was also reflected in the cluster analysis, which revealed a change of 50-60% in microbial community composition between the different filter types. This study documents the direct influence of the filter biomass on the microbial community composition of the final drinking water, particularly when the water is distributed without post-disinfection. The results provide new insights on the complexity of indigenous bacteria colonizing drinking water systems, especially in different biofilters of a multi-step treatment plant. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Models of Alcohol and Other Drug Treatment for Consideration When Working with Deaf and Hard of Hearing Individuals.

    ERIC Educational Resources Information Center

    Guthmann, Debra

    This paper discusses several models for treating chemical dependency in individuals who are deaf or hard of hearing. It begins by describing the 12-step model, a comprehensive, multi-disciplinary approach to the treatment of addiction which is abstinence oriented and based on the principles of Alcoholics Anonymous. This model includes group…

  2. Direct construction of predictive models for describing growth Salmonella enteritidis in liquid eggs – a one-step approach

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to develop a new approach using a one-step approach to directly construct predictive models for describing the growth of Salmonella Enteritidis (SE) in liquid egg white (LEW) and egg yolk (LEY). A five-strain cocktail of SE, induced to resist rifampicin at 100 mg/L, ...

  3. A Parallel, Multi-Scale Watershed-Hydrologic-Inundation Model with Adaptively Switching Mesh for Capturing Flooding and Lake Dynamics

    NASA Astrophysics Data System (ADS)

    Ji, X.; Shen, C.

    2017-12-01

    Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocity both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme in solving the dynamic wave equations, and with the assistance of the multi-mesh method, it adaptively uses the dynamic wave equation only in areas of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.

  4. Optimization of a Radiative Transfer Forward Operator for Simulating SMOS Brightness Temperatures over the Upper Mississippi Basin, USA

    NASA Technical Reports Server (NTRS)

    Lievens, H.; Verhoest, N. E. C.; Martens, B.; VanDenBerg, M. J.; Bitar, A. Al; Tomer, S. Kumar; Merlin, O.; Cabot, F.; Kerr, Y.; DeLannoy, G. J. M.; hide

    2014-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite mission is routinely providing global multi-angular observations of brightness temperature (TB) at both horizontal and vertical polarization with a 3-day repeat period. The assimilation of such data into a land surface model (LSM) may improve the skill of operational flood forecasts through an improved estimation of soil moisture (SM). To accommodate the direct assimilation of the SMOS TB data, the LSM needs to be coupled with a radiative transfer model (RTM), serving as a forward operator for the simulation of multi-angular and multi-polarization top-of-atmosphere TBs. This study investigates the use of the Variable Infiltration Capacity (VIC) LSM coupled with the Community Microwave Emission Modelling platform (CMEM) for simulating SMOS TB observations over the Upper Mississippi basin, USA. For a period of 2 years (2010-2011), a comparison between SMOS TBs and simulations with literature-based RTM parameters reveals a basin-averaged bias of 30 K. Therefore, time series of SMOS TB observations are used to investigate ways for mitigating these large biases. Specifically, the study demonstrates the impact of the LSM soil moisture climatology on the magnitude of TB biases. After CDF matching the SM climatology of the LSM to SMOS retrievals, the average bias decreases from 30 K to less than 5 K. Further improvements can be made through calibration of RTM parameters related to the modeling of surface roughness and vegetation. Consequently, it can be concluded that SM rescaling and RTM optimization are efficient means for mitigating biases and form a necessary preparatory step for data assimilation.
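
    The CDF matching step used above can be sketched as a quantile-mapping operation that rescales the model's soil moisture climatology onto the retrieval climatology. The synthetic series and distribution parameters below are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical climatologies: the LSM is wetter and more variable than
    # the SMOS retrievals, which inflates the simulated TB bias.
    model_sm = np.clip(rng.normal(0.30, 0.08, 2000), 0.02, 0.50)
    smos_sm = np.clip(rng.normal(0.22, 0.05, 2000), 0.02, 0.50)

    def cdf_match(x, reference):
        """Map each value of x onto the reference climatology by matching
        empirical quantiles (rank in x -> same quantile of reference)."""
        ranks = np.searchsorted(np.sort(x), x, side="right") / len(x)
        return np.quantile(reference, ranks)

    matched = cdf_match(model_sm, smos_sm)
    print("mean before/after:", model_sm.mean().round(3), matched.mean().round(3))
    print("std  before/after:", model_sm.std().round(3), matched.std().round(3))
    ```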

  5. Development of a multi-criteria evaluation system to assess growing pig welfare.

    PubMed

    Martín, P; Traulsen, I; Buxadé, C; Krieter, J

    2017-03-01

    The aim of this paper was to present an alternative multi-criteria evaluation model to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. The WQ assessment protocol follows a three-step aggregation process. Measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first step of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion. The utility functions and the aggregation function were constructed in two separated steps. The Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) method was used for utility function determination and the Choquet Integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The methods were tested with generated data sets for farms of growing pigs. Using the MAUT, similar results were obtained to the ones obtained applying the WQ protocol aggregation methods. It can be concluded that due to the use of an interactive approach such as MACBETH, this alternative methodology is more transparent and more flexible than the methodology proposed by WQ, which allows the possibility to modify the model according, for instance, to new scientific knowledge.
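
    A minimal sketch of the Choquet integral aggregation step: per-criterion welfare scores are sorted and weighted by a capacity (fuzzy measure) defined on subsets of criteria, which lets partially redundant or conflicting criteria interact non-additively. The criteria names and capacity values here are invented, not those of the WQ-based model.

    ```python
    def choquet(scores, capacity):
        """Discrete Choquet integral of per-criterion scores (0-100) with a
        capacity defined on every subset of criteria."""
        order = sorted(scores, key=scores.get)   # criteria by ascending score
        total, prev = 0.0, 0.0
        remaining = set(scores)
        for crit in order:
            total += (scores[crit] - prev) * capacity[frozenset(remaining)]
            prev = scores[crit]
            remaining.remove(crit)
        return total

    # Hypothetical capacity for two criteria with negative interaction
    # (partially redundant), so their joint weight is sub-additive.
    capacity = {
        frozenset(): 0.0,
        frozenset({"absence of lesions"}): 0.7,
        frozenset({"absence of lameness"}): 0.6,
        frozenset({"absence of lesions", "absence of lameness"}): 1.0,
    }
    scores = {"absence of lesions": 80.0, "absence of lameness": 40.0}
    print(choquet(scores, capacity))    # 40*1.0 + 40*0.7 = 68.0
    ```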

  6. Design of multi-phase dynamic chemical networks

    NASA Astrophysics Data System (ADS)

    Chen, Chenrui; Tan, Junjun; Hsieh, Ming-Chien; Pan, Ting; Goodwin, Jay T.; Mehta, Anil K.; Grover, Martha A.; Lynn, David G.

    2017-08-01

    Template-directed polymerization reactions enable the accurate storage and processing of nature's biopolymer information. This mutualistic relationship of nucleic acids and proteins, a network known as life's central dogma, is now marvellously complex, and the progressive steps necessary for creating the initial sequence and chain-length-specific polymer templates are lost to time. Here we design and construct dynamic polymerization networks that exploit metastable prion cross-β phases. Mixed-phase environments have been used for constructing synthetic polymers, but these dynamic phases emerge naturally from the growing peptide oligomers and create environments suitable both to nucleate assembly and select for ordered templates. The resulting templates direct the amplification of a phase containing only chain-length-specific peptide-like oligomers. Such multi-phase biopolymer dynamics reveal pathways for the emergence, self-selection and amplification of chain-length- and possibly sequence-specific biopolymers.

  7. A novel integrated approach for path following and directional stability control of road vehicles after a tire blow-out

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Chen, Hong; Guo, Konghui; Cao, Dongpu

    2017-09-01

    The path following and directional stability are two crucial problems when a road vehicle experiences a tire blow-out or sudden tire failure. Considering the requirement of rapid road vehicle motion control during a tire blow-out, this article proposes a novel linearized decoupling control procedure with three design steps for a class of second-order multi-input multi-output non-affine systems. Evaluating indicators for controller performance are presented, and a performance-related control parameter distribution map is obtained based on a stochastic algorithm, which enables non-blind parameter adjustment in engineering implementations. An analysis of the robustness of the proposed integrated controller is also performed. Simulation studies for a range of driving conditions are conducted to demonstrate the effectiveness of the proposed controller.

  8. A road map for multi-way calibration models.

    PubMed

    Escandar, Graciela M; Olivieri, Alejandro C

    2017-08-07

    A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on signals of increasing order and complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.

  9. Multi-phase-field modeling of anisotropic crack propagation for polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Nguyen, Thanh-Tung; Réthoré, Julien; Yvonnet, Julien; Baietto, Marie-Christine

    2017-08-01

    A new multi-phase-field method is developed for modeling the fracture of polycrystals at the microstructural level. Inter and transgranular cracking, as well as anisotropic effects of both elasticity and preferential cleavage directions within each randomly oriented crystal are taken into account. For this purpose, the proposed phase field formulation includes: (a) a smeared description of grain boundaries as cohesive zones avoiding defining an additional phase for grains; (b) an anisotropic phase field model; (c) a multi-phase field formulation where each preferential cleavage direction is associated with a damage (phase field) variable. The obtained framework allows modeling interactions and competition between grains and grain boundary cracks, as well as their effects on the effective response of the material. The proposed model is illustrated through several numerical examples involving a full description of complex crack initiation and propagation within 2D and 3D models of polycrystals.

  10. Hypersonic Vehicle Propulsion System Simplified Model Development

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process and the task plan identified in this document addresses the first steps (short term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab,...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing simplified model are presented.

  11. Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof

    DOEpatents

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2006-01-17

    The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.

  12. Research Advances on Radiation Transfer Modeling and Inversion for Multi-scale Land Surface Remote Sensing

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.

    2011-12-01

    As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a challenging and significant issue, and radiation transfer modeling and inversion methodology form its theoretical basis. In this paper, recent research advances and unresolved issues are presented. At first, after a general overview, recent research advances on multi-scale remote sensing radiation transfer modeling are presented, including the leaf spectrum model, vegetation canopy BRDF models, directional thermal infrared emission models, radiation models for rugged mountain areas, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking the land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested, and the software system prototype will be demonstrated. At last, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in the visible, near infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, ALBEDO, LAI, soil moisture and ET, etc., at 1 km2 for remote sensing product validation.

  13. Single Vector Calibration System for Multi-Axis Load Cells and Method for Calibrating a Multi-Axis Load Cell

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor)

    2003-01-01

    A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.

  14. Incorporating evolution of transcription factor binding sites into annotated alignments.

    PubMed

    Bais, Abha S; Grossmann, Stefen; Vingron, Martin

    2007-08-01

    Identifying transcription factor binding sites (TFBSs) is essential to elucidate putative regulatory mechanisms. A common strategy is to combine cross-species conservation with single-sequence TFBS annotation to yield "conserved TFBSs". Most current methods in this field adopt a multi-step approach that segregates the two aspects. Moreover, it is widely accepted that the evolutionary dynamics of binding sites differ from those of the surrounding sequence. Hence, it is desirable to have an approach that explicitly takes this factor into account. Although a plethora of approaches have been proposed for the prediction of conserved TFBSs, very few explicitly model TFBS evolutionary properties, while additionally being multi-step. Recently, we introduced a novel approach to simultaneously align and annotate conserved TFBSs in a pair of sequences. Building upon the standard Smith-Waterman algorithm for local alignments, SimAnn introduces additional states for profiles to output extended alignments, or annotated alignments. That is, alignments with parts annotated as gaplessly aligned TFBSs (pair-profile hits) are generated. Moreover, the pair-profile-related parameters are derived in a sound statistical framework. In this article, we extend this approach to explicitly incorporate the evolution of binding sites in the SimAnn framework. We demonstrate the extension in the theoretical derivations through two position-specific evolutionary models, previously used for modelling TFBS evolution. In a simulated setting, we provide a proof of concept that the approach works given the underlying assumptions, as compared to the original work. Finally, using a real dataset of experimentally verified binding sites in human-mouse sequence pairs, we compare the new approach (eSimAnn) to an existing multi-step tool that also considers TFBS evolution. Although it is widely accepted that binding sites evolve differently from the surrounding sequences, most comparative TFBS identification methods do not explicitly consider this. Additionally, prediction of conserved binding sites is carried out in a multi-step approach that segregates alignment from TFBS annotation. In this paper, we demonstrate how the simultaneous alignment and annotation approach of SimAnn can be further extended to incorporate TFBS evolutionary relationships. We study how alignments and binding site predictions interplay at varying evolutionary distances and for various profile qualities.
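
    For orientation, the core Smith-Waterman recurrence that SimAnn builds upon looks as follows. This is the textbook local-alignment algorithm with illustrative scoring parameters; it does not include SimAnn's additional pair-profile states.

    ```python
    import numpy as np

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Plain Smith-Waterman local-alignment score matrix. SimAnn-style
        extensions would add extra states for gapless pair-profile hits."""
        H = np.zeros((len(a) + 1, len(b) + 1))
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
                H[i, j] = max(0.0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
        return H

    H = smith_waterman("ACGTTGCA", "ACTTGC")
    i, j = np.unravel_index(H.argmax(), H.shape)
    print("best local score:", H[i, j], "ending at", (i, j))
    ```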

  15. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by the difficulty of representing with sufficient accuracy the complexity of the underlying biochemical processes and the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative for modeling photosynthesis is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore of cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  16. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
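
    A minimal sketch of the Kalman filter component: a constant-velocity state model with one predict/update cycle per detection. The noise settings and synthetic detections are illustrative; the paper's full approach additionally performs multi-scale detection and two-step multi-frame association.

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],     # state: x, y, vx, vy
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)
    Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # observe position only
    Q = 0.01 * np.eye(4)      # process noise (illustrative)
    R = 0.5 * np.eye(2)       # detection noise (illustrative)

    def kalman_step(x, P, z):
        """One predict/update cycle given a detection z = (x, y)."""
        x, P = F @ x, F @ P @ F.T + Q               # predict
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (z - Hm @ x)                    # update with detection
        P = (np.eye(4) - K @ Hm) @ P
        return x, P

    x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
    rng = np.random.default_rng(5)
    for t in range(1, 6):                           # noisy synthetic track
        z = np.array([t * 1.0, t * 0.5]) + rng.normal(0, 0.5, 2)
        x, P = kalman_step(x, P, z)
    print("estimated state:", np.round(x, 2))
    ```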

  17. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It is comprised of several components including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.

  18. Genetic parameter estimation for pre- and post-weaning traits in Brahman cattle in Brazil.

    PubMed

    Vargas, Giovana; Buzanskas, Marcos Eli; Guidolin, Diego Gomes Freire; Grossi, Daniela do Amaral; Bonifácio, Alexandre da Silva; Lôbo, Raysildo Barbosa; da Fonseca, Ricardo; Oliveira, João Ademir de; Munari, Danísio Prado

    2014-10-01

    Beef cattle producers in Brazil use body weight traits as breeding program selection criteria due to their great economic importance. The objectives of this study were to evaluate different animal models, estimate genetic parameters, and define the most fitting model for Brahman cattle body weight standardized at 120 (BW120), 210 (BW210), 365 (BW365), 450 (BW450), and 550 (BW550) days of age. To estimate genetic parameters, single-, two-, and multi-trait analyses were performed using the animal model. Likelihood ratio tests were performed to compare all models. For BW120 and BW210, additive direct genetic, maternal genetic, maternal permanent environment, and residual effects were considered, while for BW365 and BW450, additive direct genetic, maternal genetic, and residual effects were considered. Finally, for BW550, additive direct genetic and residual effects were considered. Estimates of direct heritability for BW120 were similar in all analyses; however, for the other traits, multi-trait analysis resulted in higher estimates. The maternal heritability and the proportion of maternal permanent environmental variance to total variance were minimal in multi-trait analyses. Genetic, environmental, and phenotypic correlations were of high magnitude between all traits. Multi-trait analyses would aid parameter estimation for body weight at older ages, which is usually affected by the lower number of animals with phenotypic information due to culling and mortality.

  19. In situ UV curable 3D printing of multi-material tri-legged soft bot with spider mimicked multi-step forward dynamic gait

    NASA Astrophysics Data System (ADS)

    Zeb Gul, Jahan; Yang, Bong-Su; Yang, Young Jin; Chang, Dong Eui; Choi, Kyung Hyun

    2016-11-01

    Compared to rigid mechanical robots, soft bots have the advantageous ability to adopt intricate postures and fit into complex shapes. This paper presents a unique in situ UV-curing three-dimensional (3D) printed multi-material tri-legged soft bot with a spider-mimicked multi-step dynamic forward gait, using commercial bio metal filament (BMF) as an actuator. The printed soft bot can produce controllable forward motion in response to external signals. The fundamental properties of BMF, including output force, contraction at different frequencies, initial loading rate, and displacement rate, are verified. The CAD model of the tri-pedal soft bot is inspired by the spider's leg structure, and its locomotion is assessed by simulating strain and displacement using finite element analysis. A customized rotational multi-head 3D printing system, assisted by curing lasers of multiple wavelengths, is used for in situ fabrication of the tri-pedal soft bot from two flexible materials (epoxy and polyurethane) in three layered steps. The tri-pedal soft bot is 80 mm in diameter, and each leg is 5 mm wide and 5 mm deep. The maximum forward speed achieved is 2.7 mm s-1 at 5 Hz with an input of 3 V and 250 mA on a smooth surface. The fabricated tri-pedal soft bot demonstrated power-efficient and controllable locomotion at three input signal frequencies (1, 2, and 5 Hz).

  20. [Safety culture: definition, models and design].

    PubMed

    Pfaff, Holger; Hammer, Antje; Ernstmann, Nicole; Kowalski, Christoph; Ommen, Oliver

    2009-01-01

    Safety culture is a multi-dimensional phenomenon. The safety culture of a healthcare organization is high if it has a common stock of knowledge, values, and symbols with regard to patient safety. The article first defines safety culture and then, in a second step, demonstrates its effects. We present the model of safety behaviour and show how safety culture can affect behaviour and produce safe behaviour. In the third step we look at the causes of safety culture and present the safety-culture model. The main hypothesis of this model is that the safety culture of a healthcare organization strongly depends on its communication culture and its social capital. Finally, we investigate how the safety culture of a healthcare organization can be improved. Based on the safety culture model, six measures to improve safety culture are presented.

  1. Seeing the wood for the trees: a forest of methods for optimization and omic-network integration in metabolic modelling.

    PubMed

    Vijayakumar, Supreeta; Conway, Max; Lió, Pietro; Angione, Claudio

    2017-05-30

    Metabolic modelling has entered a mature phase with dozens of methods and software implementations available to the practitioner and the theoretician. It is not easy for a modeller to see the wood (or the forest) for the trees. Driven by this analogy, we here present a 'forest' of principal methods used for constraint-based modelling in systems biology. This provides a tree-based view of methods available to prospective modellers, also available in an interactive version at http://modellingmetabolism.net, where it will be kept updated with new methods after the publication of the present manuscript. Our updated classification of existing methods and tools highlights the most promising in the different branches, with the aim of developing a vision of how existing methods could hybridize and become more complex. We then provide the first hands-on tutorial for multi-objective optimization of metabolic models in R. We finally discuss the implementation of multi-view machine learning approaches in poly-omic integration. Throughout this work, we demonstrate the optimization of trade-offs between multiple metabolic objectives, with a focus on omic data integration through machine learning. We anticipate that the combination of a survey, a perspective on multi-view machine learning and a step-by-step R tutorial should be of interest to both the beginner and the advanced user. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
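
    The tutorial referenced here is written in R; as a language-neutral illustration of the constraint-based core, the sketch below solves a single-objective flux balance analysis on a hypothetical three-reaction network with SciPy. The network and bounds are invented for illustration; multi-objective variants trade off several such objectives, e.g. by sweeping a constraint on a second objective.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # toy network: uptake -> A, A -> B, B -> biomass; steady state requires S v = 0
    S = np.array([[ 1, -1,  0],   # metabolite A balance
                  [ 0,  1, -1]])  # metabolite B balance
    c = np.array([0.0, 0.0, -1.0])          # maximize v2 (linprog minimizes)
    bounds = [(0, 10), (0, 100), (0, 100)]  # uptake capped at 10
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)  # optimal flux distribution, here [10, 10, 10]
    ```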

  2. Multi-Cone Model for Estimating GPS Ionospheric Delays

    NASA Technical Reports Server (NTRS)

    Sparks, Lawrence; Komjathy, Attila; Mannucci, Anthony

    2009-01-01

    The multi-cone model is a computational model for estimating ionospheric delays of Global Positioning System (GPS) signals. It is a direct descendant of the conical-domain model. A primary motivation for the development of this model is the need to find alternatives for modeling slant delays at low latitudes, where ionospheric behavior poses an acute challenge for GPS signal-delay estimates based upon the thin-shell model of the ionosphere.
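
    For context, the thin-shell baseline that the multi-cone model seeks to improve upon maps the vertical delay to a slant delay through an obliquity factor. The sketch below shows that baseline only, with a typical (assumed) shell height; it is not the multi-cone algorithm itself.

    ```python
    import math

    RE, H = 6371.0, 350.0  # Earth radius and assumed shell height (km)

    def obliquity(elev_deg):
        """Thin-shell mapping factor from vertical to slant ionospheric delay."""
        s = RE * math.cos(math.radians(elev_deg)) / (RE + H)
        return 1.0 / math.sqrt(1.0 - s * s)

    print(5.0 * obliquity(15.0))  # slant delay for a hypothetical 5 m vertical delay
    ```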

  3. ALPHA: A Case Study in Upgrading.

    ERIC Educational Resources Information Center

    Granick, Leonard P. R.; And Others

    An industry-focused upgrading model, based upon job redesigns of entry-level and higher skill positions and a multi-step diagonal/vertical progression ladder was installed in a company having a 150-employee blue collar work force. The model provided for rapid promotion and wage increases of both present employees and new hires, supported by skills…

  4. Multi-Step Attack Detection via Bayesian Modeling under Model Parameter Uncertainty

    ERIC Educational Resources Information Center

    Cole, Robert

    2013-01-01

    Organizations in all sectors of business have become highly dependent upon information systems for the conduct of business operations. Of necessity, these information systems are designed with many points of ingress, points of exposure that can be leveraged by a motivated attacker seeking to compromise the confidentiality, integrity or…

  5. Multi-modal two-step floating catchment area analysis of primary health care accessibility.

    PubMed

    Langford, Mitchel; Higgs, Gary; Fry, Richard

    2016-03-01

    Two-step floating catchment area (2SFCA) techniques are popular for measuring potential geographical accessibility to health care services. This paper proposes methodological enhancements that increase the sophistication of the 2SFCA methodology by incorporating both public and private transport modes using dedicated network datasets. The proposed model yields separate accessibility scores for each modal group at each demand point to better reflect the differential accessibility levels experienced by each cohort. An empirical study of primary health care facilities in South Wales, UK, is used to illustrate the approach. Outcomes suggest the bus-riding cohort of each census tract experiences much lower accessibility levels than those estimated by an undifferentiated (car-only) model. Car drivers' accessibility may also be misrepresented in an undifferentiated model because they potentially profit from the lower demand placed upon service provision points by bus riders. The ability to specify independent catchment sizes for each cohort in the multi-modal model allows aspects of preparedness to travel to be investigated. Copyright © 2016. Published by Elsevier Ltd.
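
    A minimal sketch of the underlying 2SFCA computation, assuming a precomputed travel-cost matrix (all arrays hypothetical). The multi-modal extension of the paper amounts to running this once per cohort with mode-specific costs and catchment sizes.

    ```python
    import numpy as np

    def two_sfca(cost, supply, demand, d0):
        """cost[i, j]: travel cost from demand point i to facility j; d0: catchment size."""
        within = cost <= d0
        # step 1: supply-to-demand ratio R_j within each facility's catchment
        pop = (within * demand[:, None]).sum(axis=0)
        R = supply / np.where(pop > 0, pop, np.inf)
        # step 2: accessibility A_i = sum of R_j over facilities reachable from i
        return (within * R[None, :]).sum(axis=1)

    cost = np.array([[10.0, 30.0], [20.0, 15.0]])   # two tracts, two clinics
    print(two_sfca(cost, supply=np.array([5.0, 8.0]),
                   demand=np.array([100.0, 200.0]), d0=25.0))
    ```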

  6. Thermodynamic modeling of small scale biomass gasifiers: Development and assessment of the ''Multi-Box'' approach.

    PubMed

    Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco

    2016-04-01

    Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling applications for small-scale/fixed-bed biomass gasifiers are of particular interest due to their increasing commercial deployment. Fixed bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve conventional 'Black-Box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small scale gasifiers; the model is therefore named ''Multi-Box''. Experimental data from a small scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Physical activity promotion in the primary care setting in pre- and type 2 diabetes - the Sophia step study, an RCT.

    PubMed

    Rossen, Jenny; Yngve, Agneta; Hagströmer, Maria; Brismar, Kerstin; Ainsworth, Barbara E; Iskull, Christina; Möller, Peter; Johansson, Unn-Britt

    2015-07-12

    Physical activity prevents or delays progression of impaired glucose tolerance in high-risk individuals. Physical activity promotion should serve as a basis in diabetes care, and it is necessary to develop and evaluate health-promoting methods that are both feasible and cost-effective within diabetes care. The aim of the Sophia Step Study is to evaluate the impact of a multi-component and a single-component physical activity intervention aimed at improving HbA1c (primary outcome) and other metabolic and cardiovascular risk factors, physical activity levels and overall health in patients with pre- and type 2 diabetes. The Sophia Step Study is a randomized controlled trial in which participants are randomly assigned to either a multi-component intervention group (A), a pedometer group (B) or a control group (C). In total, 310 patients will be included and followed for 24 months. Group A participants are offered pedometers and a website to register steps, physical activity on prescription with yearly follow-ups, motivational interviewing (10 occasions) and group consultations (including walks, 12 occasions). Group B participants are offered pedometers and a website to register steps. Group C is offered usual care. The theoretical framework underpinning the interventions comprises the Health Belief Model, the Stages of Change Model, and Social Cognitive Theory. Both the multi-component intervention (group A) and the pedometer intervention (group B) use several techniques for behavior change, such as self-monitoring, goal setting, feedback and relapse prevention. Measurements are made at weeks 0, 8, 12 and 16 and at months 6, 9, 12, 18 and 24, including metabolic and cardiovascular biomarkers (HbA1c as the primary health outcome), accelerometry and daily steps. Furthermore, questionnaires are used to evaluate dietary intake, physical activity, perceived ability to perform physical activity, perceived support for being active, quality of life, anxiety, depression, well-being, perceived treatment, perceived stress and diabetes self-efficacy. This study will show whether a multi-component intervention using pedometers with group and individual consultations is more effective than a single-component intervention using pedometers alone in increasing physical activity and improving HbA1c, other metabolic and cardiovascular risk factors, physical activity levels and overall health in patients with pre- and type 2 diabetes. ClinicalTrials.gov Identifier: NCT02374788. Registered 28 January 2015.

  8. Monte Carlo Analysis of Reservoir Models Using Seismic Data and Geostatistical Models

    NASA Astrophysics Data System (ADS)

    Zunino, A.; Mosegaard, K.; Lange, K.; Melnikova, Y.; Hansen, T. M.

    2013-12-01

    We present a study on the analysis of petroleum reservoir models consistent with seismic data and geostatistical constraints, performed on a synthetic reservoir model. Our aim is to invert directly for the structure and rock bulk properties of the target reservoir zone. To infer rock facies, porosity and oil saturation, seismology alone is not sufficient; a rock physics model, which links the unknown properties to the elastic parameters, must be taken into account. We therefore combine a rock physics model with a simple convolutional approach for seismic waves to invert the "measured" seismograms. To solve this inverse problem, we employ a Markov chain Monte Carlo (MCMC) method, because it can handle non-linearity and complex, multi-step forward models, and it provides realistic estimates of uncertainties. However, for large data sets the MCMC method may be impractical because of its very high computational demand. One strategy to face this challenge is to feed the algorithm with realistic models, hence relying on proper prior information. To this end, we utilize an algorithm drawn from geostatistics to generate geologically plausible models which represent samples of the prior distribution. The geostatistical algorithm learns multiple-point statistics from prototype models (in the form of training images), then generates thousands of different models which are accepted or rejected by a Metropolis sampler. To further reduce the computation time we parallelize the software and run it on multi-core machines. The solution of the inverse problem is then represented by a collection of reservoir models in terms of facies, porosity and oil saturation, which constitute samples of the posterior distribution. We are finally able to produce probability maps of the properties of interest by performing statistical analysis on the collection of solutions.
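
    The sampling loop described here follows the extended Metropolis scheme: when proposals are drawn from a geostatistical prior sampler, the acceptance test reduces to a likelihood ratio. A compact sketch, where `propose` and `loglike` are placeholders for the geostatistical resimulation and the seismic misfit:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis(loglike, propose, m0, n_iter=1000):
        """Extended Metropolis: proposals from the prior, accept on likelihood ratio."""
        m, ll = m0, loglike(m0)
        samples = []
        for _ in range(n_iter):
            m_new = propose(m)           # e.g. partial geostatistical resimulation
            ll_new = loglike(m_new)
            if np.log(rng.random()) < ll_new - ll:
                m, ll = m_new, ll_new    # accept
            samples.append(m)
        return samples                   # samples of the posterior distribution
    ```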

  9. [A simulation study with finite element model on the unequal loss of peripheral vision caused by acceleration].

    PubMed

    Geng, Xiaoqi; Liu, Xiaoyu; Liu, Songyang; Xu, Yan; Zhao, Xianliang; Wang, Jie; Fan, Yubo

    2017-04-01

    An unequal loss of peripheral vision may occur under sustained high multi-axis acceleration, posing a serious potential flight safety hazard. In the present research, the finite element method was used to study the mechanism of this unequal loss of peripheral vision. Firstly, a 3D geometric model of the skull was developed based on adult computed tomography (CT) images. A model of both eyes was created by mirroring the previously built right-eye model. The double-eye model was then matched to the skull model, and fat was filled in between the eyeballs and the skull. Acceleration loads in the head-to-foot (Gz), right-to-left (Gy), chest-to-back (Gx) and multi-axis directions were applied to the model to simulate the dynamic response of the retina using an explicit dynamics solution. The results showed that the relative strain difference between the two eyes was 25.7% under the multi-axis acceleration load. Moreover, the strain distributions differed significantly among the acceleration loads applied in different directions. This indicates that a finite element model of both eyes is an effective means to study the mechanism of unequal loss of peripheral vision under sustained high multi-axis acceleration.

  10. A hybrid degradation tendency measurement method for mechanical equipment based on moving window and Grey-Markov model

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han

    2017-11-01

    Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as lack of appropriate degradation indicator, insufficient accuracy, and poor capability to track the data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for the dynamic update of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measuring accuracy and data fluctuation tracing, in comparison with other conventional methods.
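
    The GM(1,1) grey model at the core of the method can be sketched as follows; the moving window and the Markov correction of residuals described in the paper would wrap around this routine. The input series is hypothetical.

    ```python
    import numpy as np

    def gm11_forecast(x0, steps):
        """Fit a GM(1,1) grey model to series x0 and forecast `steps` values ahead."""
        n = len(x0)
        x1 = np.cumsum(x0)                            # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
        B = np.column_stack([-z1, np.ones(n - 1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(n + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=x1_hat[0])   # back to the original series
        return x0_hat[n:]

    print(gm11_forecast(np.array([1.00, 1.10, 1.25, 1.40, 1.60]), steps=3))
    ```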

  11. Evaluation of transtension and transpression within contractional fault steps: Comparing kinematic and mechanical models to field data

    NASA Astrophysics Data System (ADS)

    Nevitt, Johanna M.; Pollard, David D.; Warren, Jessica M.

    2014-03-01

    Rock deformation often is investigated using kinematic and/or mechanical models. Here we provide a direct comparison of these modeling techniques in the context of a deformed dike within a meter-scale contractional fault step. The kinematic models consider two possible shear plane orientations and various modes of deformation (simple shear, transtension, transpression), while the mechanical model uses the finite element method and assumes elastoplastic constitutive behavior. The results for the kinematic and mechanical models are directly compared using the modeled maximum and minimum principal stretches. The kinematic analysis indicates that the contractional step may be classified as either transtensional or transpressional depending on the modeled shear plane orientation, suggesting that these terms may be inappropriate descriptors of step-related deformation. While the kinematic models do an acceptable job of depicting the change in dike shape and orientation, they are restricted to a prescribed homogeneous deformation. In contrast, the mechanical model allows for heterogeneous deformation within the step to accurately represent the deformation. The ability to characterize heterogeneous deformation and include fault slip - not as a prescription, but as a solution to the governing equations of motion - represents a significant advantage of the mechanical model over the kinematic models.

  12. Multiscale Informatics for Low-Temperature Propane Oxidation: Further Complexities in Studies of Complex Reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Michael P.; Goldsmith, C. Franklin; Klippenstein, Stephen J.

    2015-07-16

    We have developed a multi-scale approach (Burke, M. P.; Klippenstein, S. J.; Harding, L. B. Proc. Combust. Inst. 2013, 34, 547–555.) to kinetic model formulation that directly incorporates elementary kinetic theories as a means to provide reliable, physics-based extrapolation to unexplored conditions. Here, we extend and generalize the multi-scale modeling strategy to treat systems of considerable complexity, involving multi-well reactions, potentially missing reactions, non-statistical product branching ratios, and non-Boltzmann (i.e. non-thermal) reactant distributions. The methodology is demonstrated here for a subsystem of low-temperature propane oxidation, as a representative system for low-temperature fuel oxidation. A multi-scale model is assembled and informed by a wide variety of targets that include ab initio calculations of molecular properties, rate constant measurements of isolated reactions, and complex systems measurements. Active model parameters are chosen to accommodate both “parametric” and “structural” uncertainties. Theoretical parameters (e.g. barrier heights) are included as active model parameters to account for parametric uncertainties in the theoretical treatment; experimental parameters (e.g. initial temperatures) are included to account for parametric uncertainties in the physical models of the experiments. RMG software is used to assess potential structural uncertainties due to missing reactions. Additionally, branching ratios among product channels are included as active model parameters to account for structural uncertainties related to difficulties in modeling sequences of multiple chemically activated steps. The approach is demonstrated here for interpreting time-resolved measurements of OH, HO2, n-propyl, i-propyl, propene, oxetane, and methyloxirane from photolysis-initiated low-temperature oxidation of propane at pressures from 4 to 60 Torr and temperatures from 300 to 700 K. In particular, the multi-scale informed model provides a consistent quantitative explanation of both ab initio calculations and time-resolved species measurements. The present results show that interpretations of OH measurements are significantly more complicated than previously thought: in addition to barrier heights for key transition states considered previously, OH profiles also depend on additional theoretical parameters for R + O2 reactions, secondary reactions, QOOH + O2 reactions, and the treatment of non-Boltzmann reaction sequences. Extraction of physically rigorous information from those measurements may require more sophisticated treatment of all of those model aspects, as well as additional experimental data under more conditions, to discriminate among possible interpretations and ensure model reliability. Keywords: Optimization, Uncertainty quantification, Chemical mechanism, Low-Temperature Oxidation, Non-Boltzmann

  13. Slip Continuity in Explicit Crystal Plasticity Simulations Using Nonlocal Continuum and Semi-discrete Approaches

    DTIC Science & Technology

    2013-01-01

    …Based Micropolar Single Crystal Plasticity: Comparison of Multi- and Single-Criterion Theories. J. Mech. Phys. Solids 2011, 59, 398–422. ALE3D … element boundaries in a multi-step constitutive evaluation (Becker, 2011). The results showed the desired effects of smoothing the deformation field. … The model was implemented in the large-scale parallel, explicit finite element code ALE3D (2012). The crystal plasticity…

  14. Nondestructive Intervention to Multi-Agent Systems through an Intelligent Agent

    PubMed Central

    Han, Jing; Wang, Lin

    2013-01-01

    For a given multi-agent system where the local interaction rule of the existing agents cannot be redesigned, one way to intervene in the collective behavior of the system is to add one or a few special agents to the group, which are still treated as normal agents by the existing ones. We study how to lead a Vicsek-like flocking model to synchronization by adding special agents. A popular method is to add some simple leaders (fixed-heading agents). Instead, we add one intelligent agent, called a 'shill', which uses online feedback information about the group to decide the shill's moving direction at each step. A novel strategy for the shill to coordinate the group is proposed. It is rigorously proved that a shill with this strategy and a limited speed can synchronize every agent in the group. Computer simulations show the effectiveness of this strategy in different scenarios, including different group sizes, shill speeds, and the presence or absence of noise. Compared to the method of adding fixed-heading leaders, our method can guarantee synchronization for any initial configuration in the deterministic scenario and significantly improves the synchronization level in low-density groups or in models with noise. This suggests the advantage and power of feedback information in the intervention of collective behavior. PMID:23658695
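
    For reference, one update of a basic Vicsek-type model is sketched below; in the paper's setting, the shill is one additional agent whose heading is chosen by a global feedback rule instead of this local alignment rule. All parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def vicsek_step(pos, theta, v=0.03, r=1.0, L=5.0, eta=0.1):
        """Each agent adopts the mean heading of neighbours within radius r, plus noise."""
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        nbr = d < r                                   # includes the agent itself
        mean_sin = (nbr * np.sin(theta)[None, :]).sum(axis=1)
        mean_cos = (nbr * np.cos(theta)[None, :]).sum(axis=1)
        theta = np.arctan2(mean_sin, mean_cos) + eta * (rng.random(len(theta)) - 0.5)
        pos = (pos + v * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
        return pos, theta
    ```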

  15. A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits

    NASA Astrophysics Data System (ADS)

    Moradi, Behzad; Mirzaei, Abdolreza

    2016-11-01

    A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that makes a distinction between high- and low-fitness areas in the design space. The learning process can detect promising directions of evolution and take large steps in the evolution of the individuals, shortening the evolution process and markedly reducing the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process in order to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. In order to improve the design accuracy, the BSIM3v3 CMOS transistor model is adopted in the proposed design method. The proposed method is tested on three different operational amplifier circuits, and its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.
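
    One LEM-style generation step, learning a decision tree that separates high- from low-fitness designs and then sampling only from the predicted high-fitness region, might be sketched as follows with scikit-learn. The quantile threshold and sampling scheme are assumptions; the actual method couples this with SPEA selection and HSPICE evaluation.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(2)

    def lem_generation(pop, fitness, n_cand=200, q=0.3):
        """Keep only candidates the learned tree predicts to be high fitness."""
        f = fitness(pop)
        lo, hi = np.quantile(f, [q, 1 - q])
        mask = (f <= lo) | (f >= hi)                  # extreme individuals only
        labels = (f[mask] >= hi).astype(int)          # 1 = high-fitness group
        tree = DecisionTreeClassifier(max_depth=4).fit(pop[mask], labels)
        cand = rng.uniform(pop.min(0), pop.max(0), (n_cand, pop.shape[1]))
        return cand[tree.predict(cand) == 1]

    pop = rng.uniform(-1, 1, (50, 3))                 # hypothetical design vectors
    new = lem_generation(pop, fitness=lambda p: -np.sum(p**2, axis=1))
    ```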

  16. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  17. Transcriptome Profiling of Khat (Catha edulis) and Ephedra sinica Reveals Gene Candidates Potentially Involved in Amphetamine-Type Alkaloid Biosynthesis

    PubMed Central

    Groves, Ryan A.; Hagel, Jillian M.; Zhang, Ye; Kilpatrick, Korey; Levy, Asaf; Marsolais, Frédéric; Lewinsohn, Efraim; Sensen, Christoph W.; Facchini, Peter J.

    2015-01-01

    Amphetamine analogues are produced by plants in the genus Ephedra and by khat (Catha edulis), and include the widely used decongestants and appetite suppressants (1S,2S)-pseudoephedrine and (1R,2S)-ephedrine. The production of these metabolites, which derive from L-phenylalanine, involves a multi-step pathway partially mapped out at the biochemical level using knowledge of benzoic acid metabolism established in other plants, and direct evidence using khat and Ephedra species as model systems. Despite the commercial importance of amphetamine-type alkaloids, only a single step in their biosynthesis has been elucidated at the molecular level. We have employed Illumina next-generation sequencing technology, paired with Trinity and Velvet-Oases assembly platforms, to establish data-mining frameworks for Ephedra sinica and khat plants. Sequence libraries representing a combined 200,000 unigenes were subjected to an annotation pipeline involving direct searches against public databases. Annotations included the assignment of Gene Ontology (GO) terms used to allocate unigenes to functional categories. As part of our functional genomics program aimed at novel gene discovery, the databases were mined for enzyme candidates putatively involved in alkaloid biosynthesis. Queries used for mining included enzymes with established roles in benzoic acid metabolism, as well as enzymes catalyzing reactions similar to those predicted for amphetamine alkaloid metabolism. Gene candidates were evaluated based on phylogenetic relationships, FPKM-based expression data, and mechanistic considerations. Establishment of expansive sequence resources is a critical step toward pathway characterization, a goal with both academic and industrial implications. PMID:25806807

  18. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  19. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. The proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed through error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate when compared to the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
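
    A minimal sketch of least-squares fitting of a linear autoregression followed by recursive multi-step prediction (MSP), in which each prediction is fed back as an input. The power-ratio series and the exact MLR&LS formulation of the paper are not reproduced; this is illustrative only.

    ```python
    import numpy as np

    def fit_ar_ls(x, p):
        """Least-squares fit of an order-p linear autoregression with intercept."""
        X = np.column_stack([x[i:len(x) - p + i] for i in range(p)] + [np.ones(len(x) - p)])
        coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return coef

    def predict_msp(x, coef, steps):
        """Recursive multi-step prediction: feed predictions back as inputs."""
        p = len(coef) - 1
        hist = list(x[-p:])
        for _ in range(steps):
            hist.append(np.dot(coef[:-1], hist[-p:]) + coef[-1])
        return np.array(hist[-steps:])

    ratio = np.array([0.31, 0.35, 0.40, 0.38, 0.42, 0.45, 0.47])  # hypothetical power ratios
    coef = fit_ar_ls(ratio, p=2)
    print(predict_msp(ratio, coef, steps=3))
    ```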

  20. Ultrasonic guided wave propagation across waveguide transitions: energy transfer and mode conversion.

    PubMed

    Puthillath, Padmakumar; Galan, Jose M; Ren, Baiyang; Lissenden, Cliff J; Rose, Joseph L

    2013-05-01

    Ultrasonic guided wave inspection of structures containing adhesively bonded joints requires an understanding of the interaction of guided waves with geometric and material discontinuities or transitions in the waveguide. Such interactions result in mode conversion with energy being partitioned among the reflected and transmitted modes. The step transition between an aluminum layer and an aluminum-adhesive-aluminum multi-layer waveguide is analyzed as a model structure. Dispersion analysis enables assessment of (i) synchronism through dispersion curve overlap and (ii) wavestructure correlation. Mode-pairs in the multi-layer waveguide are defined relative to a prescribed mode in a single layer as being synchronized and having nearly perfect wavestructure matching. Only a limited number of mode-pairs exist, and each has a unique frequency range. A hybrid model based on semi-analytical finite elements and the normal mode expansion is implemented to assess mode conversion at a step transition in a waveguide. The model results indicate that synchronism and wavestructure matching is associated with energy transfer through the step transition, and that the energy of an incident wave mode in a single layer is transmitted almost entirely to the associated mode-pair, where one exists. This analysis guides the selection of incident modes that convert into transmitted modes and improve adhesive joint inspection with ultrasonic guided waves.

  1. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
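
    With SymPy, the Gröbner basis elimination step can be illustrated on a toy system: under a lexicographic order that ranks the unobserved variable first, basis elements free of that variable give the input-output relation. The system below is hypothetical.

    ```python
    from sympy import groebner, symbols

    x, y, u = symbols('x y u')  # x observed, y unobserved state, u input
    polys = [x**2 + y**2 - u, x - y]
    G = groebner(polys, y, x, u, order='lex')  # eliminate y (listed first)
    print(list(G))  # elements without y form the input-output relation: 2*x**2 - u
    ```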

  2. Synthetic Biology for Cell-Free Biosynthesis: Fundamentals of Designing Novel In Vitro Multi-Enzyme Reaction Networks.

    PubMed

    Morgado, Gaspar; Gerngross, Daniel; Roberts, Tania M; Panke, Sven

    Cell-free biosynthesis in the form of in vitro multi-enzyme reaction networks or enzyme cascade reactions emerges as a promising tool to carry out complex catalysis in one-step, one-vessel settings. It combines the advantages of well-established in vitro biocatalysis with the power of multi-step in vivo pathways. Such cascades have been successfully applied to the synthesis of fine and bulk chemicals, monomers and complex polymers of chemical importance, and energy molecules from renewable resources as well as electricity. The scale of these initial attempts remains small, suggesting that more robust control of such systems and more efficient optimization are currently major bottlenecks. To this end, the very nature of enzyme cascade reactions as multi-membered systems requires novel approaches for implementation and optimization, some of which can be obtained from in vivo disciplines (such as pathway refactoring and DNA assembly), and some of which can be built on the unique, cell-free properties of cascade reactions (such as easy analytical access to all system intermediates to facilitate modeling).

  3. Multi-Item Direct Behavior Ratings: Dependability of Two Levels of Assessment Specificity

    ERIC Educational Resources Information Center

    Volpe, Robert J.; Briesch, Amy M.

    2015-01-01

    Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment…

  4. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    PubMed

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial, being based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
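
    The Pareto-dominance filter common to all three algorithms can be sketched in a few lines; here the two objectives are classification error and rule count, both minimized, with hypothetical values.

    ```python
    import numpy as np

    def pareto_front(objs):
        """Indices of non-dominated rows (all objectives minimized)."""
        keep = []
        for i in range(len(objs)):
            dominated = any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                            for j in range(len(objs)) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    # (error, number of rules) for five candidate fuzzy classifiers
    objs = np.array([[0.07, 14], [0.08, 9], [0.10, 20], [0.06, 25], [0.09, 9]])
    print(pareto_front(objs))  # -> [0, 1, 3]
    ```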

  5. NGA West 2 | Pacific Earthquake Engineering Research Center

    Science.gov Websites

    A multi-year research program to improve the Next Generation Attenuation (NGA) models for active tectonic regions. Topics include earthquake engineering applications such as modeling of directivity and directionality; verification of NGA-West models; epistemic uncertainty; and evaluation of soil amplification factors in NGA models versus NEHRP site factors.

  6. Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing

    PubMed Central

    Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun

    2016-01-01

    Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to the step formation in manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages in the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267

  7. Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun

    2016-07-01

    Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to the step formation in manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages in the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps).

  8. Data-based control of a multi-step forming process

    NASA Astrophysics Data System (ADS)

    Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.

    2017-09-01

    The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. In the field of forming technology, however, it has so far arrived only gradually. In order to make a valuable contribution to the digital factory, the control of a multi-stage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how a practical interface between the different forming machines can be designed, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes that are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for an improved process control of the subsequent process. On the basis of the gained scientific knowledge, it is possible to make forming operations more robust and at the same time more flexible, and thus to lay the foundation for linking various production processes in an efficient way.
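
    In its simplest form, the data-based coupling between steps is a feed-forward correction: a deviation measured after the first forming step is compensated in the second through a linear sensitivity model. The sketch below is purely illustrative; the gain and variable names are assumptions.

    ```python
    def adjust_step2(measured, nominal, setpoint2, gain=0.8):
        """Feed-forward inter-step control: compensate a step-1 deviation in step 2."""
        return setpoint2 - gain * (measured - nominal)

    # part came out of step 1 slightly too thick -> reduce the step-2 setpoint
    print(adjust_step2(measured=2.05, nominal=2.00, setpoint2=10.0))  # 9.96
    ```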

  9. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays, there is a growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. An increasing interest is in parallel devoted to the development of fully distributed and grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information needed for the implementation of such models. However, the choice of an appropriate implementation scale that integrates the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their impact on modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate the scale-effect challenges in urban hydrological modeling. Fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the effect of scale on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work, and the sensitivity of the model to small-scale rainfall variability is discussed as well.
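
    A box-counting estimate of fractal dimension, of the kind used here to probe the scale dependency of sewer networks and impervious-area maps, can be sketched as follows; the raster and box sizes are hypothetical.

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
        """Slope of log(count) vs log(size) for a binary raster, negated."""
        counts = []
        for s in sizes:
            n = mask.shape[0] // s
            blocks = mask[:n * s, :n * s].reshape(n, s, n, s)
            counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes of side s
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    mask = np.ones((64, 64), dtype=bool)       # a filled plane has dimension 2
    print(box_counting_dimension(mask))        # ~2.0
    ```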

  10. Computer Modeling of the Earliest Cellular Structures and Functions

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell membranes, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
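
    The integration loop referred to here is typically the velocity Verlet scheme; below is a generic sketch with a harmonic force standing in for a real molecular force field.

    ```python
    import numpy as np

    def velocity_verlet(pos, vel, force, mass, dt, n_steps):
        """Iteratively solve Newton's equations of motion (velocity Verlet)."""
        f = force(pos)
        for _ in range(n_steps):
            pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
            f_new = force(pos)
            vel = vel + 0.5 * (f + f_new) / mass * dt
            f = f_new
        return pos, vel

    # harmonic oscillator as a stand-in force field
    pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]),
                               force=lambda x: -x, mass=1.0, dt=0.01, n_steps=1000)
    ```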

  11. Multi-step routes of capuchin monkeys in a laser pointer traveling salesman task.

    PubMed

    Howard, Allison M; Fragaszy, Dorothy M

    2014-09-01

    Prior studies have claimed that nonhuman primates plan their routes multiple steps in advance. However, a recent reexamination of multi-step route planning in nonhuman primates indicated that there is no evidence for planning more than one step ahead. We tested multi-step route planning in capuchin monkeys using a pointing device to "travel" to distal targets while stationary. This device enabled us to determine whether capuchins distinguish the spatial relationship between goals and themselves and spatial relationships between goals and the laser dot, allocentrically. In Experiment 1, two subjects were presented with identical food items in Near-Far (one item nearer to subject) and Equidistant (both items equidistant from subject) conditions with a laser dot visible between the items. Subjects moved the laser dot to the items using a joystick. In the Near-Far condition, one subject demonstrated a bias for items closest to self but the other subject chose efficiently. In the second experiment, subjects retrieved three food items in similar Near-Far and Equidistant arrangements. Both subjects preferred food items nearest the laser dot and showed no evidence of multi-step route planning. We conclude that these capuchins do not make choices on the basis of multi-step look ahead strategies. © 2014 Wiley Periodicals, Inc.

  12. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
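
    Wavelet-based compression and multi-resolution analysis of the kind the framework exposes can be sketched with PyWavelets; the signal, wavelet, and threshold below are illustrative, not the framework's own code.

    ```python
    import numpy as np
    import pywt

    # 1D velocity signal as a stand-in for a DNS data chunk
    signal = np.random.default_rng(3).standard_normal(1024)
    coeffs = pywt.wavedec(signal, 'db4', level=4)   # approximation + 4 detail levels
    # compress by hard-thresholding the detail coefficients, then reconstruct
    kept = [coeffs[0]] + [pywt.threshold(c, 0.5, 'hard') for c in coeffs[1:]]
    recon = pywt.waverec(kept, 'db4')
    print(np.abs(signal - recon).max())             # reconstruction error after compression
    ```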

  13. Can a workbook work? Examining whether a practitioner evaluation toolkit can promote instrumental use.

    PubMed

    Campbell, Rebecca; Townsend, Stephanie M; Shaw, Jessica; Karim, Nidal; Markowitz, Jenifer

    2015-10-01

    In large-scale, multi-site contexts, developing and disseminating practitioner-oriented evaluation toolkits are an increasingly common strategy for building evaluation capacity. Toolkits explain the evaluation process, present evaluation design choices, and offer step-by-step guidance to practitioners. To date, there has been limited research on whether such resources truly foster the successful design, implementation, and use of evaluation findings. In this paper, we describe a multi-site project in which we developed a practitioner evaluation toolkit and then studied the extent to which the toolkit and accompanying technical assistance was effective in promoting successful completion of local-level evaluations and fostering instrumental use of the findings (i.e., whether programs directly used their findings to improve practice, see Patton, 2008). Forensic nurse practitioners from six geographically dispersed service programs completed methodologically rigorous evaluations; furthermore, all six programs used the findings to create programmatic and community-level changes to improve local practice. Implications for evaluation capacity building are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  15. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slopes, and large-scale eddies. Capturing the wide range of length and time scales involved in the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation over idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of these remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to coexist and exchange information during the time stepping process.

  16. Assessing Strain Mapping by Electron Backscatter Diffraction and Confocal Raman Microscopy Using Wedge-indented Si

    PubMed Central

    Friedman, Lawrence H.; Vaudin, Mark D.; Stranick, Stephan J.; Stan, Gheorghe; Gerbig, Yvonne B.; Osborn, William; Cook, Robert F.

    2016-01-01

    The accuracy of electron backscatter diffraction (EBSD) and confocal Raman microscopy (CRM) for small-scale strain mapping is assessed using the multi-axial strain field surrounding a wedge indentation in Si as a test vehicle. The strain field is modeled using finite element analysis (FEA) that is adapted to the near-indentation surface profile measured by atomic force microscopy (AFM). The assessment consists of (1) direct experimental comparisons of strain and deformation and (2) comparisons in which the modeled strain field is used as an intermediate step. Direct experimental methods (1) consist of comparisons of surface elevation and gradient measured by AFM and EBSD and of Raman shifts measured and predicted by CRM and EBSD, respectively. Comparisons that utilize the combined FEA-AFM model (2) consist of predictions of distortion, strain, and rotation for comparison with EBSD measurements and predictions of Raman shift for comparison with CRM measurements. For both EBSD and CRM, convolution of measurements in depth-varying strain fields is considered. The interconnected comparisons suggest that EBSD was able to provide an accurate assessment of the wedge indentation deformation field to within the precision of the measurements, approximately 2 × 10−4 in strain. CRM was similarly precise, but was limited in accuracy to several times this value. PMID:26939030

  17. Magnetic field extrapolation with MHD relaxation using AWSoM

    NASA Astrophysics Data System (ADS)

    Shi, T.; Manchester, W.; Landi, E.

    2017-12-01

    Coronal mass ejections are known to be the major source of disturbances in the solar wind capable of affecting geomagnetic environments. In order to make accurate predictions of such space weather events, a data-driven simulation is needed. The first step towards such a simulation is to extrapolate the magnetic field from the observed field, which is available only at the solar surface. Here we present results from a new magnetic field extrapolation code with direct magnetohydrodynamics (MHD) relaxation using the Alfvén Wave Solar Model (AWSoM) in the Space Weather Modeling Framework. The obtained field is self-consistent with our model and can be used later in time-dependent simulations without modifications of the equations. We use the Low and Lou analytical solution to test our results and find good agreement. We also extrapolate the magnetic field from the observed data. We then specify the active region corona field with this extrapolation result in the AWSoM model and self-consistently calculate the temperature of the active region loops with Alfvén wave dissipation. Multi-wavelength images are also synthesized.

  18. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multiple sources together with the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
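
    The core update of such an inversion can be sketched in a few lines: a regularized Gauss-Newton step whose normal equations are solved inexactly with conjugate gradients. The toy linear forward model and the damping weight below are assumptions for illustration, not the authors' CSEM code.

    ```python
    # Sketch of one regularized Gauss-Newton update with an inexact CG
    # solve of the normal equations. The toy forward model and the
    # damping weight lam are illustrative assumptions.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def gauss_newton_step(m, d_obs, forward, J, lam=1e-2):
        r = d_obs - forward(m)                         # data residual
        n = m.size
        hess_vec = lambda v: J.T @ (J @ v) + lam * v   # (J^T J + lam I) v, matrix-free
        A = LinearOperator((n, n), matvec=hess_vec)
        rhs = J.T @ r - lam * m                        # gradient of the damped misfit
        dm, _ = cg(A, rhs)                             # inexact conjugate-gradient solve
        return m + dm

    G = np.array([[1.0, 0.5], [0.2, 2.0], [0.3, 0.3]])
    m_true = np.array([1.0, -0.5])
    m = gauss_newton_step(np.zeros(2), G @ m_true, lambda m: G @ m, G)
    print(m)   # close to m_true after a single step for this linear problem
    ```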

  19. Implementation of structure-mapping inference by event-file binding and action planning: a model of tool-improvisation analogies.

    PubMed

    Fields, Chris

    2011-03-01

    Structure-mapping inferences are generally regarded as dependent upon relational concepts that are understood and expressible in language by subjects capable of analogical reasoning. However, tool-improvisation inferences are executed by members of a variety of non-human primate and other species. Tool improvisation requires correctly inferring the motion and force-transfer affordances of an object; hence tool improvisation requires structure mapping driven by relational properties. Observational and experimental evidence can be interpreted to indicate that structure-mapping analogies in tool improvisation are implemented by multi-step manipulation of event files by binding and action-planning mechanisms that act in a language-independent manner. A functional model of language-independent event-file manipulations that implement structure mapping in the tool-improvisation domain is developed. This model provides a mechanism by which motion and force representations commonly employed in tool-improvisation structure mappings may be sufficiently reinforced to be available to inwardly directed attention and hence conceptualization. Predictions and potential experimental tests of this model are outlined.

  20. Healthy and productive workers: using intervention mapping to design a workplace health promotion and wellness program to improve presenteeism.

    PubMed

    Ammendolia, Carlo; Côté, Pierre; Cancelliere, Carol; Cassidy, J David; Hartvigsen, Jan; Boyle, Eleanor; Soklaridis, Sophie; Stern, Paula; Amick, Benjamin

    2016-11-25

    Presenteeism is a growing problem in developed countries mostly due to an aging workforce. The economic costs related to presenteeism exceed those of absenteeism and employer health costs. Employers are implementing workplace health promotion and wellness programs to improve health among workers and reduce presenteeism. How best to design, integrate and deliver these programs are unknown. The main purpose of this study was to use an intervention mapping approach to develop a workplace health promotion and wellness program aimed at reducing presenteeism. We partnered with a large international financial services company and used a qualitative synthesis based on an intervention mapping methodology. Evidence from systematic reviews and key articles on reducing presenteeism and implementing health promotion programs was combined with theoretical models for changing behavior and stakeholder experience. This was then systematically operationalized into a program using discussion groups and consensus among experts and stakeholders. The top health problem impacting our workplace partner was mental health. Depression and stress were the first and second highest cause of productivity loss respectively. A multi-pronged program with detailed action steps was developed and directed at key stakeholders and health conditions. For mental health, regular sharing focus groups, social networking, monthly personal stories from leadership using webinars and multi-media communications, expert-led workshops, lunch and learn sessions and manager and employee training were part of a comprehensive program. Comprehensive, specific and multi-pronged strategies were developed and aimed at encouraging healthy behaviours that impact presenteeism such as regular exercise, proper nutrition, adequate sleep, smoking cessation, socialization and work-life balance. Limitations of the intervention mapping process included high resource and time requirements, the lack of external input and viewpoints skewed towards middle and upper management, and using secondary workplace data of unknown validity and reliability. In general, intervention mapping was a useful method to develop a workplace health promotion and wellness program aimed at reducing presenteeism. The methodology provided a step-by-step process to unravel a complex problem. The process compelled participants to think critically, collaboratively and in nontraditional ways.

  1. Multi-objective optimization of process parameters of multi-step shaft formed with cross wedge rolling based on orthogonal test

    NASA Astrophysics Data System (ADS)

    Han, S. T.; Shu, X. D.; Shchukin, V.; Kozhevnikova, G.

    2018-06-01

    In order to achieve reasonable process parameters for forming multi-step shafts by cross wedge rolling, the rolling-forming process of a multi-step shaft was studied with the DEFORM-3D finite element software. An interactive orthogonal experiment was used to study the effect of eight parameters, the first section shrinkage rate φ1, the first forming angle α1, the first spreading angle β1, the first spreading length L1, the second section shrinkage rate φ2, the second forming angle α2, the second spreading angle β2 and the second spreading length L2, on the quality of the shaft end and the microstructure uniformity. By using the fuzzy mathematics comprehensive evaluation method and extreme difference (range) analysis, the order of influence of the process parameters on the quality of the multi-step shaft is obtained: β2 > φ2 > L1 > α1 > β1 > φ1 > α2 > L2. The results of the study can provide guidance for obtaining multi-step shafts with high mechanical properties and achieving near net forming without a stub bar in cross wedge rolling.
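
    The extreme-difference (range) analysis used to rank the factors can be sketched as follows; the tiny two-level design and the response values are made-up placeholders rather than the paper's DEFORM-3D results.

    ```python
    # Sketch of extreme-difference (range) analysis for ranking factors in
    # an orthogonal experiment. The design matrix and responses are
    # made-up placeholders.
    import numpy as np

    design = np.array([[0, 0, 0],    # rows = trials, columns = factor levels
                       [0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])
    response = np.array([7.2, 6.1, 8.4, 8.9])   # e.g. a shaft-end quality score

    ranges = {}
    for j in range(design.shape[1]):
        means = [response[design[:, j] == lv].mean() for lv in np.unique(design[:, j])]
        ranges[f"factor_{j}"] = max(means) - min(means)

    # A larger range indicates a stronger influence on the response.
    for name, R in sorted(ranges.items(), key=lambda kv: -kv[1]):
        print(name, round(R, 3))
    ```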

  2. Computational Analysis of Multi-Rotor Flows

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Lee, Henry C.; Pulliam, Thomas H.

    2016-01-01

    Interactional aerodynamics of multi-rotor flows has been studied for a quadcopter representing a generic quad tilt-rotor aircraft in hover. The objective of the present study is to investigate the effects of the separation distances between rotors, as well as of the fuselage and wings, on the performance and efficiency of multi-rotor systems. Three-dimensional unsteady Navier-Stokes equations are solved using a spatially 5th order accurate scheme, dual-time stepping, and the Detached Eddy Simulation turbulence model. The results show that the separation distances as well as the wings have significant effects on the vertical forces of quadrotor systems in hover. Understanding interactions in multi-rotor flows would help improve the design of next generation multi-rotor drones.

  3. Discovery of multi-ring basins - Gestalt perception in planetary science

    NASA Technical Reports Server (NTRS)

    Hartmann, W. K.

    1981-01-01

    Early selenographers resolved individual structural components of multi-ring basin systems but missed the underlying large-scale multi-ring basin patterns. The recognition of multi-ring basins as a general class of planetary features can be divided into five steps. Gilbert (1893) took a first step in recognizing radial 'sculpture' around the Imbrium basin system. Several writers through the 1940's rediscovered the radial sculpture and extended this concept by describing concentric rings around several circular maria. Some reminiscences are given about the fourth step - discovery of the Orientale basin and other basin systems by rectified lunar photography at the University of Arizona in 1961-62. Multi-ring basins remained a lunar phenomenon until the fifth step - discovery of similar systems of features on other planets, such as Mars (1972), Mercury (1974), and possibly Callisto and Ganymede (1979). This sequence is an example of gestalt recognition whose implications for scientific research are discussed.

  4. Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model

    USGS Publications Warehouse

    Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David

    2012-01-01

    The Atlantic Horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can result in biased estimates that understate the survival rate. Given that uncertainty, as a first step towards an unbiased estimate of adult survival, we estimated the rate of tag loss. Using data from a double tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model to allow for the estimation of mortality of each tag separately and simultaneously.
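
    As a rough illustration of why double tagging helps, the sketch below gives the classical closed-form retention estimate under the assumption of independent, equal loss rates for the two tags; the multi-state Program MARK model in the study relaxes exactly these assumptions. The counts are hypothetical.

    ```python
    # Back-of-the-envelope double-tagging estimator. Assuming the two tags
    # are lost independently with the same retention probability p, the
    # resighting counts with both tags (n2) and one tag (n1) satisfy
    # E[n1] / E[n2] = 2(1 - p) / p, giving a closed-form estimate.
    def tag_retention(n2, n1):
        return 2 * n2 / (n1 + 2 * n2)

    print(tag_retention(n2=180, n1=40))   # -> 0.9 retention per tag
    ```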

  5. Additivity and maximum likelihood estimation of nonlinear component biomass models

    Treesearch

    David L.R. Affleck

    2015-01-01

    Since Parresol's (2001) seminal paper on the subject, it has become common practice to develop nonlinear tree biomass equations so as to ensure compatibility among total and component predictions and to fit equations jointly using multi-step least squares (MSLS) methods. In particular, many researchers have specified total tree biomass models by aggregating the...

  6. Video Modeling and Prompting in Practice: Teaching Cooking Skills

    ERIC Educational Resources Information Center

    Kellems, Ryan O.; Mourra, Kjerstin; Morgan, Robert L.; Riesen, Tim; Glasgow, Malinda; Huddleston, Robin

    2016-01-01

    This article discusses the creation of video modeling (VM) and video prompting (VP) interventions for teaching novel multi-step tasks to individuals with disabilities. This article reviews factors to consider when selecting skills to teach, and students for whom VM/VP may be successful, as well as the difference between VM and VP and circumstances…

  7. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is a promising way to handle multi-scale groundwater flow problems with complex stresses and heterogeneity.
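
    The coupling step itself reduces to interpolation; here is a minimal sketch with SciPy in which the parent heads, grids, and query time are placeholder values, not those of the study.

    ```python
    # Sketch of the one-way coupling step: parent-model heads stored at
    # coarse time levels are interpolated first in time, then in space,
    # onto the child-model boundary nodes. All grids and values here are
    # placeholders.
    import numpy as np
    from scipy.interpolate import interp1d

    t_parent = np.array([0.0, 1.0, 2.0])         # coarse time levels (days)
    x_parent = np.linspace(0.0, 100.0, 11)       # coarse boundary coordinates (m)
    h_parent = np.random.rand(3, 11)             # head at (time level, node)

    def child_boundary_head(t, x_child):
        h_t = interp1d(t_parent, h_parent, axis=0)(t)   # temporal interpolation
        return interp1d(x_parent, h_t)(x_child)         # spatial interpolation

    x_child = np.linspace(0.0, 100.0, 101)       # refined child boundary nodes
    print(child_boundary_head(0.35, x_child)[:5])
    ```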

  8. A Model for Direction Sensing in Dictyostelium discoideum: Ras Activity and Symmetry Breaking Driven by a Gβγ-Mediated, Gα2-Ric8 -- Dependent Signal Transduction Network

    PubMed Central

    Cheng, Yougan; Othmer, Hans

    2016-01-01

    Chemotaxis is a dynamic cellular process, comprising direction sensing, polarization and locomotion, that leads to the directed movement of eukaryotic cells along extracellular gradients. As a primary step in the response of an individual cell to a spatial stimulus, direction sensing has attracted numerous theoretical treatments aimed at explaining experimental observations in a variety of cell types. Here we propose a new model of direction sensing based on experiments using Dictyostelium discoideum (Dicty). The model is built around a reaction-diffusion-translocation system that involves three main component processes: a signal detection step based on G-protein-coupled receptors (GPCR) for cyclic AMP (cAMP), a transduction step based on a heterotrimeric G protein Gα2βγ, and an activation step of a monomeric G-protein Ras. The model can predict the experimentally-observed response of cells treated with latrunculin A, which removes feedback from downstream processes, under a variety of stimulus protocols. We show that Gα2βγ cycling modulated by Ric8, a nonreceptor guanine exchange factor for Gα2 in Dicty, drives multiple phases of Ras activation and leads to direction sensing and signal amplification in cAMP gradients. The model predicts that both Gα2 and Gβγ are essential for direction sensing, in that membrane-localized Gα2*, the activated GTP-bearing form of Gα2, leads to asymmetrical recruitment of RasGEF and Ric8, while globally-diffusing Gβγ mediates their activation. We show that the predicted response at the level of Ras activation encodes sufficient ‘memory’ to eliminate the ‘back-of-the-wave’ problem, and the effects of diffusion and cell shape on direction sensing are also investigated. In contrast with existing LEGI models of chemotaxis, the results do not require a disparity between the diffusion coefficients of the Ras activator GEF and the Ras inhibitor GAP. Since the signal pathways we study are highly conserved between Dicty and mammalian leukocytes, the model can serve as a generic one for direction sensing. PMID:27152956

  9. An Ecological Approach to Learning Dynamics

    ERIC Educational Resources Information Center

    Normak, Peeter; Pata, Kai; Kaipainen, Mauri

    2012-01-01

    New approaches to emergent learner-directed learning design can be strengthened with a theoretical framework that considers learning as a dynamic process. We propose an approach that models a learning process using a set of spatial concepts: learning space, position of a learner, niche, perspective, step, path, direction of a step and step…

  10. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; ...

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.

  11. Magnesite Step Growth Rates as a Function of the Aqueous Magnesium:Carbonate Ratio

    DOE PAGES

    Bracco, Jacquelyn N.; Stack, Andrew G.; Higgins, Steven R.

    2014-10-01

    Step velocities of monolayer-height steps on the (101̄4) magnesite surface have been measured as functions of the aqueous magnesium-to-carbonate ratio and saturation index (SI) using a hydrothermal atomic force microscope (HAFM). At SI ≤ 1.9 and 80-90 °C, step velocities were found to be invariant with changes in the magnesium-to-carbonate ratio, an observation in contrast with standard models for growth and dissolution of ionically-bonded, multi-component crystals. However, at high saturation indices (SI = 2.15), step velocities displayed a ratio dependence, maximized at magnesium-to-carbonate ratios slightly greater than 1:1. Traditional affinity-based models were unable to describe growth rates at the higher saturation index. Step velocities also could not be modeled solely through nucleation of kink sites, in contrast to other minerals whose bonding between constituent ions is also dominantly ionic in nature, such as calcite and barite. Instead, they could be described only by a model that incorporates both kink nucleation and propagation. Based on observed step morphological changes at these higher saturation indices, the step velocity maximum at SI = 2.15 is likely due to the rate of attachment to propagating kink sites overcoming the rate of detachment from kink sites as the latter becomes less significant under far from equilibrium conditions.

  12. An Integer Programming Model for Multi-Echelon Supply Chain Decision Problem Considering Inventories

    NASA Astrophysics Data System (ADS)

    Harahap, Amin; Mawengkang, Herman; Siswadi; Effendi, Syahril

    2018-01-01

    In this paper we address a problem that is of significance to the industry, namely the optimal decision of a multi-echelon supply chain and the associated inventory systems. By using the guaranteed service approach to model the multi-echelon inventory system, we develop a mixed integer programming model to simultaneously optimize the transportation, inventory and network structure of a multi-echelon supply chain. To solve the model we develop a direct search approach using a strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points.

  13. The role of Myc-induced protein synthesis in cancer

    PubMed Central

    Ruggero, Davide

    2009-01-01

    Deregulation in different steps of translational control is an emerging mechanism for cancer formation. One example of an oncogene with a direct role in control of translation is the Myc transcription factor. Myc directly increases protein synthesis rates by controlling the expression of multiple components of the protein synthetic machinery, including ribosomal proteins, initiation factors of translation, Pol III and rDNA. However, the contribution of Myc-dependent increases in protein synthesis towards the multi-step process leading to cancer has remained unknown. Recent evidence strongly suggests that Myc oncogenic signaling may monopolize the translational machinery to elicit cooperative effects on cell growth, cell cycle progression, and genome instability as a mechanism for cancer initiation. Moreover, new genetic tools to restore aberrant increases in protein synthesis control are now available, which should enable the dissection of important mechanisms in cancer that rely on the translational machinery. PMID:19934336

  14. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun

    2014-10-01

    Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot the changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at the building scale, owing to the increased spectral variability of the building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building object, terrain object, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with the Self-organizing Map (SOM), with "change", "non-change" and "uncertain change" status labeled through a voting strategy. The "uncertain changes" are then determined with a Markov Random Field (MRF) analysis considering the geometric relationship between faces. In the third step, buildings are extracted combining the multispectral images and the DSM by morphological operators, and the new buildings are determined by excluding the verified unchanged buildings from the second step. Both the synthetic experiment with Worldview-2 stereo imagery and the real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes, as well as to update 3D models from one epoch to the other.

  15. Laser 3D micro-manufacturing

    NASA Astrophysics Data System (ADS)

    Piqué, Alberto; Auyeung, Raymond C. Y.; Kim, Heungsoo; Charipar, Nicholas A.; Mathews, Scott A.

    2016-06-01

    Laser-based materials processing techniques are gaining widespread use in micro-manufacturing applications. The use of laser microfabrication techniques enables the processing of micro- and nanostructures from a wide range of materials and geometries without the need for masking and etching steps commonly associated with photolithography. This review aims to describe the broad applications space covered by laser-based micro- and nanoprocessing techniques and the benefits offered by the use of lasers in micro-manufacturing processes. Given their non-lithographic nature, these processes are also referred to as laser direct-write and constitute some of the earliest demonstrations of 3D printing or additive manufacturing at the microscale. As this review will show, the use of lasers enables precise control of the various types of processing steps—from subtractive to additive—over a wide range of scales with an extensive materials palette. Overall, laser-based direct-write techniques offer multiple modes of operation including the removal (via ablative processes) and addition (via photopolymerization or printing) of most classes of materials using the same equipment in many cases. The versatility provided by these multi-function, multi-material and multi-scale laser micro-manufacturing processes cannot be matched by photolithography nor with other direct-write microfabrication techniques and offer unique opportunities for current and future 3D micro-manufacturing applications.

  16. Thin-walled nanoscrolls by multi-step intercalation from tubular halloysite-10 Å and its rearrangement upon peroxide treatment

    NASA Astrophysics Data System (ADS)

    Zsirka, Balázs; Horváth, Erzsébet; Szabó, Péter; Juzsakova, Tatjána; Szilágyi, Róbert K.; Fertig, Dávid; Makó, Éva; Varga, Tamás; Kónya, Zoltán; Kukovecz, Ákos; Kristóf, János

    2017-03-01

    Surface modification of the halloysite-10 Å mineral with tubular morphology can be achieved by slightly modified procedures developed for the delamination of kaolinite minerals. The resulting delaminated halloysite nanoparticles have unexpected surface/morphological properties that display new potential in catalyst development. In this work, a four-step intercalation/delamination procedure is described for the preparation of thin-walled nanoscrolls from the multi-layered hydrated halloysite mineral that consists of (1) intercalation of halloysite with potassium acetate, (2) replacement intercalation with ethylene glycol, (3) replacement intercalation with hexylamine, and (4) delamination with toluene. The intercalation steps were followed by X-ray diffraction, transmission electron microscopy, N2 adsorption-desorption, thermogravimetry, and infrared spectroscopy. Delamination eliminated the crystalline order and the crystallite size along the 'c'-axis, increased the specific surface area, greatly decreased the thickness of the mineral tubes to a monolayer, and shifted the pore diameter toward the micropore region. Unexpectedly, the removal of residual organics from intercalation steps adsorbed at the nanoscroll surface with a peroxide treatment resulted in partial recovery of crystallinity and increase of crystallite size along the 'c'-crystal direction. The d(001) value showed a diffuse pattern at 7.4-7.7 Å due to the rearrangement of the thin-walled nanoscrolls toward the initial tubular morphology of the dehydrated halloysite-7 Å mineral.

  17. Segmenting the Femoral Head and Acetabulum in the Hip Joint Automatically Using a Multi-Step Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Cheng, Yuanzhi; Fu, Yili; Zhou, Shengjun; Tamura, Shinichi

    We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of the valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions by using conventional techniques including the initial threshold and binary morphological operations from the valley-emphasized image; 3) further segmentation of the bone regions by using the iterative adaptive classification with the initial segmentation result; 4) detection of the rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface using the rough bone boundaries obtained in step 4) by a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 CT patient data sets. Our experimental results show that our segmentation algorithm is more accurate and robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. Average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68 mm (in-plane resolution of the CT data).
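
    Step 1, the valley-emphasized image, can be sketched with scikit-image; grayscale closing is used here as one plausible valley detector, and the structuring-element radius is an assumption rather than the paper's setting.

    ```python
    # Sketch of step 1: a valley-emphasized image built by subtracting
    # detected valleys from the original slice. Closing minus the image
    # highlights narrow dark gaps (e.g. the joint space between bones).
    import numpy as np
    from skimage.morphology import closing, disk

    def valley_emphasize(slice_2d, radius=3):
        valleys = closing(slice_2d, disk(radius)) - slice_2d   # dark gaps
        return slice_2d - valleys                              # deepen those gaps

    ct = np.random.randint(0, 1000, (128, 128)).astype(np.int32)   # stand-in slice
    enhanced = valley_emphasize(ct)
    print(enhanced.min(), enhanced.max())
    ```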

  18. Rotationally invariant clustering of diffusion MRI data using spherical harmonics

    NASA Astrophysics Data System (ADS)

    Liptrot, Matthew; Lauze, François

    2016-03-01

    We present a simple approach to the voxelwise classification of brain tissue acquired with diffusion weighted MRI (DWI). The approach leverages the power of spherical harmonics to summarise the diffusion information, sampled at many points over a sphere, using only a handful of coefficients. We use simple features that are invariant to the rotation of the highly orientational diffusion data. This provides a way to directly classify voxels whose diffusion characteristics are similar yet whose primary diffusion orientations differ. Subsequent application of machine learning to the spherical harmonic coefficients therefore may permit classification of DWI voxels according to their inferred underlying fibre properties, whilst ignoring the specifics of orientation. After smoothing the apparent diffusion coefficient volumes, we apply a spherical harmonic transform, which models the multi-directional diffusion data as a collection of spherical basis functions. We use the derived coefficients as voxelwise feature vectors for classification. Using a simple Gaussian mixture model, we examined the classification performance for a range of sub-classes (3-20). The results were compared against existing alternatives for tissue classification, e.g. fractional anisotropy (FA) or the standard model used by Camino. The approach was implemented on two publicly available datasets: an ex-vivo pig brain and an in-vivo human brain from the Human Connectome Project (HCP). We have demonstrated how a robust classification of DWI data can be performed without the need for a model reconstruction step. This avoids the potential confounds and uncertainty that such models may impose, and has the benefit of being computable directly from the DWI volumes. As such, the method could prove useful in subsequent pre-processing stages, such as model fitting, where it could inform about individual voxel complexities and improve model parameter choice.
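
    The rotation-invariant feature itself is simple: for spherical-harmonic coefficients c_{l,m}, the per-degree energy sum_m |c_{l,m}|^2 does not change under rotation. A sketch follows, with a list-of-arrays coefficient layout that is an assumed convention rather than a specific toolbox format.

    ```python
    # Sketch of the rotation-invariant feature: the energy of the degree-l
    # spherical-harmonic coefficients, sum_m |c_{l,m}|^2, is unchanged by
    # rotation, so it ignores fibre orientation.
    import numpy as np

    def sh_energy_features(coeffs_by_degree):
        """coeffs_by_degree[i] holds the 2l+1 complex coefficients of degree l."""
        return np.array([np.sum(np.abs(c) ** 2) for c in coeffs_by_degree])

    rng = np.random.default_rng(0)
    coeffs = [rng.normal(size=2 * l + 1) + 1j * rng.normal(size=2 * l + 1)
              for l in range(0, 9, 2)]    # even degrees 0..8, as usual for DWI
    print(sh_energy_features(coeffs))
    ```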

  19. The drivers of wildfire enlargement do not exhibit scale thresholds in southeastern Australian forests.

    PubMed

    Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick

    2016-10-01

    Wildfires are complex adaptive systems, and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at the ignition difficult. We test this hypothesis by conducting a multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha, and the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions) and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had predictive accuracies of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales, respectively. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography, in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
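
    A minimal sketch of the multi-step idea: one logistic model per size threshold, each fitted only on fires that survived the previous stage, with the chained probabilities giving the forecast. The predictors and sizes below are synthetic placeholders, not the NSW fire records.

    ```python
    # Staged logistic models for fire-size thresholds; data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1116, 4))               # weather, fuel, topography, suppression
    size = np.exp(rng.normal(3.0, 2.0, 1116))    # synthetic final fire sizes (ha)

    stages, models = [10.0, 100.0, 1000.0], {}
    alive = np.ones(size.size, dtype=bool)       # fires still "in play"
    for thresh in stages:
        models[thresh] = LogisticRegression().fit(X[alive], size[alive] >= thresh)
        alive &= size >= thresh

    # Chained forecast for one ignition: P(>= 1000 ha) as a product of stages.
    p, x0 = 1.0, X[:1]
    for thresh in stages:
        p *= models[thresh].predict_proba(x0)[0, 1]
    print(p)
    ```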

  20. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, in order to win the initiative in the fierce competition of the market, a key step is to improve their R&D ability to meet the various demands of customers more timely and less costly. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a certain period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the improved algorithm to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
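
    For readers unfamiliar with the baseline algorithm, a textbook particle swarm optimizer is sketched below on a stand-in objective; the inertia and acceleration constants are conventional values, not the paper's improved variant or tuned settings.

    ```python
    # Textbook particle swarm optimizer on a stand-in objective (sphere
    # function); w, c1, c2 are conventional illustrative values.
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(42)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best, val = pso(lambda p: np.sum(p ** 2), dim=5)
    print(val)   # near zero: the swarm has found the minimum
    ```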

  1. Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.

    1999-11-01

    A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is being paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
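
    The workhorse of such a scheme is the Chebyshev differentiation matrix; the sketch below follows Trefethen's classic construction and checks its spectral accuracy on a smooth function. It illustrates the collocation idea only, not the multi-domain Maxwell solver itself.

    ```python
    # Chebyshev differentiation matrix D (Trefethen's construction).
    # Applying D to samples at the Chebyshev points differentiates with
    # spectral accuracy.
    import numpy as np

    def cheb(N):
        """Differentiation matrix D and Chebyshev points x on [-1, 1]."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))    # diagonal via negative row sums
        return D, x

    D, x = cheb(16)
    print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # ~1e-12: spectral accuracy
    ```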

  2. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.

  3. Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping

    NASA Astrophysics Data System (ADS)

    Kadlec, Jiri

    This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Using the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real time sensor observations to CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables the users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average required time for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third, final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing estimated snow cover to actual MODIS snow cover. The percent correctly classified indicator showed accuracy between 80 and 90% using this method. Using crowdsourcing data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7--1.2%. The output snow probability map data sets are published online using web applications and web services. Keywords: crowdsourcing, image analysis, interpolation, MODIS, R statistical software, snow cover, snowpack probability, Tethys platform, time series, WaterML, web services, winter sports.
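
    The cloud-gap-filling step is, at its core, inverse distance weighting; a plain sketch follows, with placeholder coordinates and snow values. The dissertation's custom variant combines stations, volunteer reports and ski tracks, which this generic form does not distinguish.

    ```python
    # Plain inverse-distance-weighting sketch of the cloud-gap-filling step.
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w * values).sum(axis=1) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # lon/lat-like
    snow = np.array([1.0, 0.0, 1.0])                             # snow present?
    cloudy_pixels = np.array([[4.0, 3.0], [9.0, 1.0]])
    print(idw(stations, snow, cloudy_pixels))   # snow probability under cloud
    ```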

  4. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.

  5. Digital Learning Material for Student-Directed Model Building in Molecular Biology

    ERIC Educational Resources Information Center

    Aegerter-Wilmsen, Tinri; Coppens, Marjolijn; Janssen, Fred; Hartog, Rob; Bisseling, Ton

    2005-01-01

    The building of models to explain data and make predictions constitutes an important goal in molecular biology research. To give students the opportunity to practice such model building, two digital cases had previously been developed in which students are guided to build a model step by step. In this article, the development and initial…

  6. Scenario driven data modelling: a method for integrating diverse sources of data and data streams

    PubMed Central

    2011-01-01

    Background: Biology is rapidly becoming a data intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allow both automated integration and data-driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web, make it possible to deal with complicated heterogeneous data in new and interesting ways. Results: This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created an RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions: We provide an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario-derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements. Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
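
    A minimal illustration of the multi-relational directed graph idea using rdflib; the namespace and predicates are invented for illustration and are not the paper's NDM-1 schema.

    ```python
    # Toy multi-relational directed graph with rdflib; all names invented.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/sddm/")
    g = Graph()
    gene = EX["NDM-1"]
    report = EX["media_report_42"]

    g.add((gene, EX.encodedIn, EX["Klebsiella_pneumoniae"]))
    g.add((report, EX.mentions, gene))
    g.add((report, EX.publishedOn, Literal("2010-08-11")))

    # Graph traversal: which reports mention the gene?
    for s, _, _ in g.triples((None, EX.mentions, gene)):
        print(s)
    ```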

  7. Tracking children's mental states while solving algebra equations.

    PubMed

    Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M

    2012-11-01

    Behavioral and functional magnetic resonance imaging (fMRI) data were combined to infer the mental states of students as they interacted with an intelligent tutoring system. Sixteen children interacted with a computer tutor for solving linear equations over a six-day period (days 0-5), with days 1 and 5 occurring in an fMRI scanner. Hidden Markov model algorithms combined a model of student behavior with multi-voxel imaging pattern data to predict the mental states of students. We separately assessed the algorithms' ability to predict which step in a problem-solving sequence was performed and whether the step was performed correctly. For day 1, the data patterns of other students were used to predict the mental states of a target student. These predictions were improved on day 5 by adding information about the target student's behavioral and imaging data from day 1. Successful tracking of mental states depended on using the combination of a behavioral model and multi-voxel pattern analysis, illustrating the effectiveness of an integrated approach to tracking the cognition of individuals in real time as they perform complex tasks. Copyright © 2011 Wiley Periodicals, Inc.
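
    Decoding the most likely sequence of hidden mental states from noisy observations is the job of the Viterbi algorithm; a compact sketch follows, with invented transition and emission probabilities that are not the paper's fitted model.

    ```python
    # Compact Viterbi decoder for a two-state hidden Markov model.
    import numpy as np

    def viterbi(obs, start, trans, emit):
        logd = np.log(start) + np.log(emit[:, obs[0]])
        back = []
        for o in obs[1:]:
            scores = logd[:, None] + np.log(trans)    # rows: previous state
            back.append(scores.argmax(axis=0))
            logd = scores.max(axis=0) + np.log(emit[:, o])
        path = [int(logd.argmax())]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        return path[::-1]

    start = np.array([0.9, 0.1])                 # two hidden states
    trans = np.array([[0.8, 0.2], [0.3, 0.7]])
    emit = np.array([[0.7, 0.3], [0.2, 0.8]])    # two observation symbols
    print(viterbi([0, 0, 1, 1, 0], start, trans, emit))
    ```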

  8. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  9. Rapid prototyping of compliant human aortic roots for assessment of valved stents.

    PubMed

    Kalejs, Martins; von Segesser, Ludwig Karl

    2009-02-01

    Adequate in-vitro training in valved stent deployment as well as testing of the latter devices requires compliant real-size models of the human aortic root. The casting methods utilized up to now are multi-step, time-consuming and complicated. We pursued a goal of building a flexible 3D model in a single-step procedure. We created a precise 3D CAD model of a human aortic root using previously published anatomical and geometrical data and printed it using a novel rapid prototyping system developed by the Fab@Home project. As a material for 3D fabrication we used common household silicone and afterwards dip-coated several models with dispersion silicone one or two times. To assess the production precision we compared the size of the final product with the CAD model. Compliance of the models was measured and compared with a native porcine aortic root. Total fabrication time was 3 h and 20 min. Dip-coating one or two times with dispersion silicone, if applied, took one or two extra days, respectively. The error in dimensions of the non-coated aortic root model compared to the CAD design was <3.0% along the X and Y axes and 4.1% along the Z-axis. Compliance of a non-coated model, as judged by the changes of radius values in the radial direction, of 16.39% is significantly different (P<0.001) from native aortic tissue, 23.54%, at a pressure of 80-100 mmHg. Rapid prototyping of compliant, life-size anatomical models with the Fab@Home 3D printer is feasible; it is very quick compared to previous casting methods.

  10. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
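
    Route identification on such a network reduces to path enumeration in a directed graph; here is a sketch with networkx, in which the intermediate compound names are placeholders for the Reaxys-mined network.

    ```python
    # Route enumeration on a toy reaction network: compounds are nodes,
    # literature reactions are directed edges. Intermediates are invented.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("limonene", "intermediate_A"),
        ("limonene", "intermediate_B"),
        ("intermediate_A", "paracetamol"),
        ("intermediate_B", "intermediate_C"),
        ("intermediate_C", "paracetamol"),
    ])

    for route in nx.all_simple_paths(g, "limonene", "paracetamol", cutoff=4):
        print(" -> ".join(route))
    ```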

  11. Development and validation of a numerical model for cross-section optimization of a multi-part probe for soft tissue intervention.

    PubMed

    Frasson, L; Neubert, J; Reina, S; Oldfield, M; Davies, B L; Rodriguez Y Baena, F

    2010-01-01

    The popularity of minimally invasive surgical procedures is driving the development of novel, safer and more accurate surgical tools. In this context a multi-part probe for soft tissue surgery is being developed in the Mechatronics in Medicine Laboratory at Imperial College, London. This study reports an optimization procedure using finite element methods, for the identification of an interlock geometry able to limit the separation of the segments composing the multi-part probe. An optimal geometry was obtained and the corresponding three-dimensional finite element model validated experimentally. Simulation results are shown to be consistent with the physical experiments. The outcome of this study is an important step in the provision of a novel miniature steerable probe for surgery.

  12. Turnover Time in the Hyporheic Zone as Assessed by 3D Geophysical Imaging

    NASA Astrophysics Data System (ADS)

    Kohler, B.; Hall, R. O., Jr.; Carr, B.

    2017-12-01

    The hyporheic zone (HZ) is a region of interest in stream hydrology and ecology; however, its heterogeneity across small spatial scales and the difficulty of measuring it directly have hampered researchers' efforts to understand its specific contribution to processes such as solute transport and nutrient retention and removal. In recent years researchers have combined geophysical imaging, such as electrical resistivity tomography (ERT), with tracer additions to directly measure exchange between surface waters and the HZ without physically disrupting natural subsurface flow paths. We conducted constant-rate tracer additions in two small headwater mountain streams while collecting 3D ERT images downstream before, during, and after each tracer addition to yield spatially comprehensive models of solute exchange with the HZ through time. From our 3D HZ models, we calculated the active volume of the HZ, normalized to the maximum measured size, for each time step, giving a breakthrough curve of tracer abundance in the HZ through time. We then described the tracer's turnover time in the HZ by applying exponential and power decay models to the breakthrough curve of HZ volume, in a similar manner to that used for a tracer breakthrough curve in surface waters. Our models suggest that the flushing of solutes from the HZ exhibits multi-domain behavior, where advective and diffusive exchange between the HZ and surface waters occur simultaneously and operate at distinctly different rates.
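
    A minimal sketch of the decay-model fitting described above, using synthetic data in place of the field breakthrough curve; the coefficients are placeholders, not the study's results.

        # Fitting exponential and power-law decay models to a hyporheic-zone
        # tracer "breakthrough" curve. Data here are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(1.0, 48.0, 40)                 # hours since end of addition
        v = 0.9 * np.exp(-0.15 * t) + 0.1 * t**-0.3    # synthetic normalized HZ volume

        def exp_decay(t, a, k):
            return a * np.exp(-k * t)

        def power_decay(t, a, b):
            return a * t**-b

        (p_exp, _), (p_pow, _) = curve_fit(exp_decay, t, v), curve_fit(power_decay, t, v)
        for name, f, p in [("exponential", exp_decay, p_exp), ("power", power_decay, p_pow)]:
            rmse = np.sqrt(np.mean((f(t, *p) - v) ** 2))
            print(name, p, f"RMSE={rmse:.4f}")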

  13. Estimating regional centile curves from mixed data sources and countries.

    PubMed

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in generalized additive model for location, scale and shape through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
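
    The second step (a population-weighted finite mixture of country distributions) can be sketched as follows under a strong simplification: each country's weight-for-age distribution at one age is approximated as normal rather than Box-Cox t. All numbers are invented for illustration.

        # Regional centile from a population-weighted mixture of country
        # distributions (simplified to normals; parameters are illustrative).
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        mu = np.array([9.2, 8.7, 9.8])        # country means (kg) at a given age
        sd = np.array([1.1, 1.0, 1.3])        # country SDs
        pop = np.array([90e6, 15e6, 60e6])    # population sizes
        w = pop / pop.sum()                   # mixture weights

        def regional_cdf(x):
            return np.sum(w * norm.cdf(x, loc=mu, scale=sd))

        # Invert the mixture CDF numerically to get a regional centile (e.g. P3).
        p3 = brentq(lambda x: regional_cdf(x) - 0.03, 2.0, 20.0)
        print(f"regional 3rd centile ~ {p3:.2f} kg")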

  14. [Research Progress of Multi-Model Medical Image Fusion at Feature Level].

    PubMed

    Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun

    2016-04-01

    Medical image fusion realizes advantage integration of functional images and anatomical images.This article discusses the research progress of multi-model medical image fusion at feature level.We firstly describe the principle of medical image fusion at feature level.Then we analyze and summarize fuzzy sets,rough sets,D-S evidence theory,artificial neural network,principal component analysis and other fusion methods’ applications in medical image fusion and get summery.Lastly,we in this article indicate present problems and the research direction of multi-model medical images in the future.

  15. Statistical post-processing of seasonal multi-model forecasts: Why is it so hard to beat the multi-model mean?

    NASA Astrophysics Data System (ADS)

    Siegert, Stefan

    2017-04-01

    Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to out-perform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
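
    The hard-to-beat baseline mentioned above is simple enough to sketch directly: recalibrate the multi-model ensemble mean against observations by linear regression. The forecasts and observations below are synthetic stand-ins.

        # Recalibrating the multi-model ensemble mean by linear regression.
        import numpy as np

        rng = np.random.default_rng(0)
        n_years, n_models = 30, 5
        truth = rng.normal(size=n_years)
        ensemble = truth + rng.normal(scale=0.8, size=(n_models, n_years))  # biased, noisy

        x = ensemble.mean(axis=0)                   # multi-model mean
        b, a = np.polyfit(x, truth, 1)              # slope, intercept on training data
        recalibrated = a + b * x
        print("raw RMSE:", np.sqrt(np.mean((x - truth) ** 2)))
        print("recalibrated RMSE:", np.sqrt(np.mean((recalibrated - truth) ** 2)))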

  16. Stability analysis of the phytoplankton effect model on changes in nitrogen concentration on integrated multi-trophic aquaculture systems

    NASA Astrophysics Data System (ADS)

    Widowati; Putro, S. P.; Silfiana

    2018-05-01

    Integrated Multi-Trophic Aquaculture (IMTA) is a polyculture in which several biota are maintained together to optimize the recycling of waste as a food source. The interaction between phytoplankton and nitrogen compounds produced as waste in fish cultivation, including ammonia, nitrite, and nitrate, is studied in the form of a mathematical model. The model is a non-linear system of differential equations in four variables. Analytical methods were used to study the dynamic behavior of this model. Local stability analysis is performed at the equilibrium point: the model is first linearized using a Taylor series, and the Jacobian matrix is then determined. If all eigenvalues have negative real parts, the equilibrium of the system is locally asymptotically stable. Some numerical simulations were also demonstrated to verify our analytical result.
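
    The stability test described above is generic enough to sketch: locate an equilibrium, form the Jacobian, and check the signs of the real parts of its eigenvalues. The four-variable right-hand side below is a placeholder, not the paper's IMTA model.

        # Local stability via Jacobian eigenvalues (placeholder dynamics).
        import numpy as np
        from scipy.optimize import fsolve

        def f(x):
            x1, x2, x3, x4 = x   # e.g. ammonia, nitrite, nitrate, phytoplankton
            return [1.0 - 0.5*x1 - 0.2*x1*x4,
                    0.5*x1 - 0.4*x2,
                    0.4*x2 - 0.3*x3 - 0.1*x3*x4,
                    0.05*x3*x4 + 0.02*x1*x4 - 0.1*x4]

        def jacobian(func, x, eps=1e-7):
            x = np.asarray(x, dtype=float)
            J = np.zeros((len(x), len(x)))
            f0 = np.asarray(func(x))
            for j in range(len(x)):
                xp = x.copy(); xp[j] += eps
                J[:, j] = (np.asarray(func(xp)) - f0) / eps   # finite differences
            return J

        eq = fsolve(f, [1.5, 1.8, 2.0, 0.5])      # numerically locate an equilibrium
        eigs = np.linalg.eigvals(jacobian(f, eq))
        print("equilibrium:", eq)
        print("eigenvalues:", eigs)
        print("locally asymptotically stable:", np.all(eigs.real < 0))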

  17. The ability of individuals to assess population density influences the evolution of emigration propensity and dispersal distance.

    PubMed

    Poethke, Hans Joachim; Gros, Andreas; Hovestadt, Thomas

    2011-08-07

    We analyze the simultaneous evolution of emigration and settlement decisions for actively dispersing species differing in their ability to assess population density. Using an individual-based model we simulate dispersal as a multi-step (patch to patch) movement in a world consisting of habitat patches surrounded by a hostile matrix. Each such step is associated with the same mortality risk. Our simulations show that individuals following an informed strategy, where emigration (and settlement) probability depends on local population density, evolve a lower (natal) emigration propensity but disperse over significantly larger distances - i.e. postpone settlement longer - than individuals performing density-independent emigration. This holds especially when variation in environmental conditions is spatially correlated. Both effects can be traced to the informed individuals' ability to better exploit existing heterogeneity in reproductive chances. Yet, already moderate distance-dependent dispersal costs prevent the evolution of multi-step (long-distance) dispersal, irrespective of the dispersal strategy. Copyright © 2011 Elsevier Ltd. All rights reserved.
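
    A toy version of the simulated dispersal rule makes the setup concrete: an individual moves patch to patch, pays the same mortality risk at every step, and (in the informed strategy) settles with a probability that falls with local density. All parameters are invented for illustration.

        # Toy multi-step (patch-to-patch) dispersal with per-step mortality
        # and density-informed settlement. Numbers are illustrative only.
        import numpy as np

        rng = np.random.default_rng(7)
        mortality_per_step = 0.1
        patch_density = rng.uniform(0.2, 1.8, size=50)   # relative local densities

        def disperse(start, max_steps=20):
            patch = start
            for step in range(1, max_steps + 1):
                if rng.random() < mortality_per_step:
                    return None                           # died in the matrix
                patch = rng.integers(len(patch_density))  # arrive at a random patch
                settle_prob = 1.0 / (1.0 + patch_density[patch])  # informed rule
                if rng.random() < settle_prob:
                    return patch, step                    # settled after `step` moves
            return patch, max_steps

        print(disperse(0))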

  18. Surface Modified Particles By Multi-Step Michael-Type Addition And Process For The Preparation Thereof

    DOEpatents

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2005-05-03

    A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.

  19. Design and Implementation of Multi-Input Adaptive Signal Extractions.

    DTIC Science & Technology

    1982-09-01

    deflected gradient) algorithm requiring only N+1 multiplications per adaptation step. Additional quantization is introduced to eliminate all multiplications... noise cancellation for intermittent-signal applications," IEEE Trans. Information Theory, Vol. IT-26, Nov. 1980, pp. 746-750. I-2 J. Kazakoff and W. A... cancellation," Proc. IEEE, July 1981, Vol. 69, pp. 846-847. I-10 P. L. Kelly and W. A. Gardner, "Pilot-Directed Adaptive Signal Extraction," Dept. of

  20. Evaluating uncertainties in multi-layer soil moisture estimation with support vector machines and ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Liu, Di; Mishra, Ashok K.; Yu, Zhongbo

    2016-07-01

    This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved to be efficient at improving the SVM with observed data either at each time step or at flexible time intervals. The EnKF technique reaches its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
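
    For orientation, a minimal stochastic EnKF analysis step of the kind used to update a model-predicted soil-moisture ensemble with an observation. Shapes and numbers are illustrative, not the paper's configuration.

        # One stochastic EnKF analysis step (illustrative shapes and values).
        import numpy as np

        rng = np.random.default_rng(1)
        n_state, n_ens = 4, 50                                 # 4 soil layers, 50 members
        X = 0.25 + 0.05 * rng.normal(size=(n_state, n_ens))    # forecast ensemble
        H = np.array([[1.0, 0.0, 0.0, 0.0]])                   # observe the top layer only
        R = np.array([[0.02**2]])                              # observation error variance
        y = np.array([0.30])                                   # observed soil moisture

        Xm = X.mean(axis=1, keepdims=True)
        A = X - Xm                                             # ensemble anomalies
        P_HT = A @ (H @ A).T / (n_ens - 1)                     # cross-covariance P H^T
        S = H @ P_HT + R                                       # innovation covariance
        K = P_HT @ np.linalg.inv(S)                            # Kalman gain

        # perturbed observations (stochastic EnKF)
        Y = y[:, None] + rng.normal(scale=np.sqrt(R[0, 0]), size=(1, n_ens))
        Xa = X + K @ (Y - H @ X)                               # analysis ensemble
        print("prior mean:", Xm.ravel(), "posterior mean:", Xa.mean(axis=1))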

  1. Theory of Thermal Relaxation of Electrons in Semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadasivam, Sridhar; Chan, Maria K. Y.; Darancet, Pierre

    2017-09-01

    We compute the transient dynamics of phonons in contact with high energy "hot" charge carriers in 12 polar and non-polar semiconductors, using a first-principles Boltzmann transport framework. For most materials, we find that the decay in electronic temperature departs significantly from a single-exponential model at times ranging from 1 ps to 15 ps after electronic excitation, a phenomenon concomitant with the appearance of non-thermal vibrational modes. We demonstrate that these effects result from the slow thermalization within the phonon subsystem, caused by the large heterogeneity in the timescales of electron-phonon and phonon-phonon interactions in these materials. We propose a generalized 2-temperature model accounting for the phonon thermalization as a limiting step of electron-phonon thermalization, which captures the full thermal relaxation of hot electrons and holes in semiconductors. A direct consequence of our findings is that, for semiconductors, information about the spectral distribution of electron-phonon and phonon-phonon coupling can be extracted from the multi-exponential behavior of the electronic temperature.
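
    The "generalized 2-temperature model" is easiest to read against the classic two-temperature model. One plausible three-equation reading (a hedged sketch; the paper's exact form is not reproduced here) splits the phonons into a strongly coupled subset at temperature T_p* and the remaining bath at T_p:

        \begin{aligned}
        C_e \frac{dT_e}{dt} &= -G_{ep}\,(T_e - T_p^{*}),\\
        C_p^{*} \frac{dT_p^{*}}{dt} &= G_{ep}\,(T_e - T_p^{*}) - G_{pp}\,(T_p^{*} - T_p),\\
        C_p \frac{dT_p}{dt} &= G_{pp}\,(T_p^{*} - T_p),
        \end{aligned}

    where phonon-phonon thermalization (rate G_pp) is the limiting step. When G_pp is much larger than G_ep this collapses to the classic two-temperature model; otherwise the electronic temperature decay becomes multi-exponential, as the abstract describes.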

  2. Predictability of extreme weather events for NE U.S.: improvement of the numerical prediction using a Bayesian regression approach

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.

    2015-12-01

    Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for the NE U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of the two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses, which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and on NCEP Stage IV radar data, mosaicked from the regional multi-sensor precipitation analyses. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of sixteen events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set out the research steps that will be followed in the future.
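
    A hedged sketch of the combination idea: weights for the two models are fitted by least squares (i.e. RMSE minimization) on retrospective "training" storms and then applied out of sample. The data below are synthetic stand-ins for WRF and RAMS/ICLAMS output, not the study's events.

        # Combining two model forecasts with RMSE-minimizing weights.
        import numpy as np

        rng = np.random.default_rng(2)
        obs = rng.normal(10.0, 3.0, size=200)          # e.g. wind speed (m/s)
        m1 = obs + rng.normal(0.5, 1.5, size=200)      # model 1: biased, noisy
        m2 = obs + rng.normal(-1.0, 2.5, size=200)     # model 2: different errors

        train, test = slice(0, 150), slice(150, 200)
        # Least-squares weights (with intercept) on the training storms.
        Xtr = np.column_stack([np.ones(150), m1[train], m2[train]])
        w, *_ = np.linalg.lstsq(Xtr, obs[train], rcond=None)

        Xte = np.column_stack([np.ones(50), m1[test], m2[test]])
        combined = Xte @ w
        for name, pred in [("model 1", m1[test]), ("model 2", m2[test]), ("combined", combined)]:
            print(name, "RMSE:", np.sqrt(np.mean((pred - obs[test]) ** 2)))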

  3. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    NASA Astrophysics Data System (ADS)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, multi-scale finite element modeling is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparison of the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structure parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.

  4. Multi-species Management Using Modeling and Decision Theory Applications to Integrated Natural Resources Management Planning

    DTIC Science & Technology

    2008-06-01

    or just habitat area. They used linear interpolation to derive maps for each time step in the population model and population dynamics were... Metapopulation Map... Figure 12. Habitat... Stephen's kangaroo rat (SKR). In some areas of coastal sage scrub habitat, short fire return intervals make the habitat suitable for the SKR while

  5. A splitting scheme based on the space-time CE/SE method for solving multi-dimensional hydrodynamical models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    2016-08-01

    Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes the charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltages on the one-space-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
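
    A toy illustration of the splitting idea, under stated assumptions: a scalar relaxation equation u_t + a u_x = (u_eq - u)/tau stands in for the hydrodynamical system, the convection step uses a simple Lax-Friedrichs central update (in the spirit of the NT comparison scheme, not CE/SE itself), and the stiff relaxation source is advanced semi-implicitly. None of the numbers come from the paper.

        # Operator splitting: explicit convection step + implicit relaxation step.
        import numpy as np

        nx, a, tau, u_eq = 200, 1.0, 1e-3, 0.5   # illustrative parameters
        dx = 1.0 / nx
        dt = 0.4 * dx / a                        # CFL-limited convection step
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)

        for _ in range(100):
            # convection step: Lax-Friedrichs on a periodic domain
            u = 0.5 * (np.roll(u, 1) + np.roll(u, -1)) \
                - a * dt / (2 * dx) * (np.roll(u, -1) - np.roll(u, 1))
            # relaxation step: backward Euler, stable even for tau << dt
            u = (u + dt / tau * u_eq) / (1.0 + dt / tau)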

  6. New methodology for mechanical characterization of human superficial facial tissue anisotropic behaviour in vivo.

    PubMed

    Then, C; Stassen, B; Depta, K; Silber, G

    2017-07-01

    Mechanical characterization of human superficial facial tissue has important applications in biomedical science, computer assisted forensics, graphics, and consumer goods development. Specifically, the latter may include facial hair removal devices. Predictive accuracy of numerical models and their ability to elucidate biomechanically relevant questions depends on the acquisition of experimental data and mechanical tissue behavior representation. Anisotropic viscoelastic behavioral characterization of human facial tissue, deformed in vivo with finite strain, however, is sparse. Employing an experimental-numerical approach, a procedure is presented to evaluate multidirectional tensile properties of superficial tissue layers of the face in vivo. Specifically, in addition to stress relaxation, displacement-controlled multi-step ramp-and-hold protocols were performed to separate elastic from inelastic properties. For numerical representation, an anisotropic hyperelastic material model in conjunction with a time domain linear viscoelasticity formulation with Prony series was employed. Model parameters were inversely derived, employing finite element models, using multi-criteria optimization. The methodology provides insight into mechanical superficial facial tissue properties. Experimental data shows pronounced anisotropy, especially with large strain. The stress relaxation rate does not depend on the loading direction, but is strain-dependent. Preconditioning eliminates equilibrium hysteresis effects and leads to stress-strain repeatability. In the preconditioned state tissue stiffness and hysteresis insensitivity to strain rate in the applied range is evident. The employed material model fits the nonlinear anisotropic elastic results and the viscoelasticity model reasonably reproduces time-dependent results. The inversely deduced maximum anisotropic long-term shear modulus of linear elasticity is G_{∞,max}^{aniso} = 2.43 kPa, and the instantaneous initial shear modulus at the applied rate of ramp loading is G_{0,max}^{aniso} = 15.38 kPa. Derived mechanical model parameters constitute a basis for complex skin interaction simulation. Copyright © 2017. Published by Elsevier Ltd.

  7. A novel method for a multi-level hierarchical composite with brick-and-mortar structure

    PubMed Central

    Brandt, Kristina; Wolff, Michael F. H.; Salikov, Vitalij; Heinrich, Stefan; Schneider, Gerold A.

    2013-01-01

    The fascination with hierarchically structured hard tissues such as enamel or nacre arises from their unique structure-property relationships. Over the last decades this has motivated numerous syntheses of composites mimicking the brick-and-mortar structure of nacre. However, synthetic engineering materials displaying a true hierarchical structure are still lacking. Here, we present a novel multi-step processing route for anisotropic 2-level hierarchical composites that combines different coating techniques on different length scales. It comprises polymer-encapsulated ceramic particles as building blocks for the first level, followed by spouted bed spray granulation for a second level, and finally directional hot pressing to anisotropically consolidate the composite. The microstructure achieved reveals a brick-and-mortar hierarchical structure with distinct, though not yet optimized, mechanical properties on each level. It opens up a completely new processing route for the synthesis of multi-level hierarchically structured composites, giving prospects for multi-functional structure-property relationships. PMID:23900554

  10. Multi-Fluid Moment Simulations of Ganymede using the Next-Generation OpenGGCM

    NASA Astrophysics Data System (ADS)

    Wang, L.; Germaschewski, K.; Hakim, A.; Bhattacharjee, A.; Raeder, J.

    2015-12-01

    We coupled the multi-fluid moment code Gkeyll [1,2] to the next-generation OpenGGCM [3], and studied the reconnection dynamics at Ganymede. This work is part of our effort to tackle the grand challenge of integrating kinetic effects into global fluid models. The multi-fluid moment model integrates kinetic effects in that it can capture crucial kinetic physics, like pressure tensor effects, by evolving moments of the Vlasov equations for each species. This approach has advantages over previous models: desired kinetic effects, together with other important effects like the Hall effect, are self-consistently embedded in the moment equations and can be efficiently implemented, while not suffering from severe time-step restrictions due to plasma oscillations nor from artificial whistler modes. This model also handles multiple ion species naturally, which opens up opportunities for investigating the role of oxygen in magnetospheric reconnection and for improved coupling to ionosphere models. In this work, the multi-fluid moment solver in Gkeyll was wrapped as a time-stepping module for the high-performance, highly flexible next-generation OpenGGCM. Gkeyll is only used to provide the local plasma solver, while computational aspects like parallelization and boundary conditions are handled entirely by OpenGGCM, including interfacing to other models, such as ionospheric boundary conditions provided by coupling with CTIM [3]. The coupled code is used to study the dynamics near Ganymede, and the results are compared with MHD and Hall MHD results by Dorelli et al. [4]. [1] Hakim, A. (2008). Journal of Fusion Energy, 27, 36-43. [2] Hakim, A., Loverich, J., & Shumlak, U. (2006). Journal of Computational Physics, 219, 418-442. [3] Raeder, J., Larson, D., Li, W., Kepko, E. L., & Fuller-Rowell, T. (2008). Space Science Reviews, 141(1-4), 535-555. [4] Dorelli, J. C., Glocer, A., Collinson, G., & Tóth, G. (2015). Journal of Geophysical Research: Space Physics, 120.

  11. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with the problem within a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using the multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by saturated pixels separately, by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer negative effects compared to state-of-the-art methods.
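
    One way to make the weighting idea concrete is a Richardson-Lucy-style iteration in which a mask down-weights saturated pixels so they do not drive ringing. This is a hedged illustration of the general idea, not the paper's exact framework; the PSF is assumed normalized to sum to one.

        # Saturation-aware Richardson-Lucy-style deconvolution (illustrative).
        import numpy as np
        from scipy.signal import fftconvolve

        def weighted_rl(blurred, psf, n_iter=30, sat_level=0.98):
            w = (blurred < sat_level).astype(float)      # 0 at saturated pixels
            est = np.full_like(blurred, blurred.mean())  # flat initial estimate
            psf_flip = psf[::-1, ::-1]
            for _ in range(n_iter):
                conv = fftconvolve(est, psf, mode="same")
                ratio = blurred / np.maximum(conv, 1e-12)
                # saturated pixels contribute a neutral correction of 1
                ratio = w * ratio + (1.0 - w)
                est *= fftconvolve(ratio, psf_flip, mode="same")
            return est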

  12. A Multi-stage Carcinogenesis Model to Investigate Caloric Restriction as a Potential Tool for Post-irradiation Mitigation of Cancer Risk

    PubMed Central

    Tani, Shusuke; Blyth, Benjamin John; Shang, Yi; Morioka, Takamitsu; Kakinuma, Shizuko; Shimada, Yoshiya

    2016-01-01

    The risk of radiation-induced cancer adds to anxiety in low-dose exposed populations. Safe and effective lifestyle changes which can help mitigate excess cancer risk might provide exposed individuals the opportunity to pro-actively reduce their cancer risk, and improve mental health and well-being. Here, we applied a mathematical multi-stage carcinogenesis model to the mouse lifespan data using adult-onset caloric restriction following irradiation in early life. We re-evaluated autopsy records with a veterinary pathologist to determine which tumors were the probable causes of death in order to calculate age-specific mortality. The model revealed that in both irradiated and unirradiated mice, caloric restriction reduced the age-specific mortality of all solid tumors and hepatocellular carcinomas across most of the lifespan, with the mortality rate dependent more on age owing to an increase in the number of predicted rate-limiting steps. Conversely, irradiation did not significantly alter the number of steps, but did increase the overall transition rate between the steps. We show that the extent of the protective effect of caloric restriction is independent of the induction of cancer from radiation exposure, and discuss future avenues of research to explore the utility of caloric restriction as an example of a potential post-irradiation mitigation strategy. PMID:27390741
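
    The language of "rate-limiting steps" is characteristic of Armitage-Doll-type multistage models. As a reference point (a standard textbook form, not necessarily the exact likelihood the authors fitted), with k rate-limiting steps occurring at rates λ_1, ..., λ_k, the age-specific hazard grows as a power of age:

        h(t) \;\approx\; \frac{\lambda_1 \lambda_2 \cdots \lambda_k}{(k-1)!}\, t^{\,k-1}

    In this reading, caloric restriction increasing the effective number of steps k steepens the age dependence, while irradiation raising the transition rates λ_i scales the hazard without changing its shape, consistent with the abstract's findings.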

  13. OSOAA: A Vector Radiative Transfer Model of Coupled Atmosphere-Ocean System for a Rough Sea Surface Application to the Estimates of the Directional Variations of the Water Leaving Reflectance to Better Process Multi-angular Satellite Sensors Data Over the Ocean

    NASA Technical Reports Server (NTRS)

    Chami, Malik; LaFrance, Bruno; Fougnie, Bertrand; Chowdhary, Jacek; Harmel, Tristan; Waquet, Fabien

    2015-01-01

    In this study, we present a radiative transfer model, so-called OSOAA, that is able to predict the radiance and degree of polarization within the coupled atmosphere-ocean system in the presence of a rough sea surface. The OSOAA model solves the radiative transfer equation using the successive orders of scattering method. Comparisons with another operational radiative transfer model showed a satisfactory agreement within 0.8%. The OSOAA model has been designed with a graphical user interface to make it user friendly for the community. The radiance and degree of polarization are provided at any level, from the top of atmosphere to the ocean bottom. An application of the OSOAA model is carried out to quantify the directional variations of the water leaving reflectance and degree of polarization for phytoplankton and mineral-like dominated waters. The difference between the water leaving reflectance at a given geometry and that obtained for the nadir direction could reach 40%, thus questioning the Lambertian assumption of the sea surface that is used by inverse satellite algorithms dedicated to multi-angular sensors. It is shown as well that the directional features of the water leaving reflectance are weakly dependent on wind speed. The quantification of the directional variations of the water leaving reflectance obtained in this study should help to correctly exploit the satellite data that will be acquired by the current or forthcoming multi-angular satellite sensors.

  14. Supermodeling With A Global Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Wiegerinck, Wim; Burgers, Willem; Selten, Frank

    2013-04-01

    In weather and climate prediction studies it often turns out that the multi-model ensemble mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or a weighted average of these. This approach is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model to the solutions of all other models in the ensemble. With a suitable choice of the connection strengths the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative approach is to integrate a weighted averaged model, weighted SUMO: at each time step all models in the ensemble calculate their tendencies, these tendencies are weighted averaged, and the state is integrated one time step into the future with this weighted averaged tendency. It was shown that when the connected SUMO synchronizes perfectly, it follows the weighted averaged trajectory and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
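
    The weighted-SUMO step is simple enough to sketch on a toy system: at every step each imperfect model computes its tendency, the tendencies are weighted-averaged, and the shared state is integrated forward. Two detuned Lorenz-63 variants stand in for the atmosphere models; the weights here are fixed, whereas in SUMO they would be trained.

        # Toy "weighted SUMO": integrate one state with weighted-averaged tendencies.
        import numpy as np

        def lorenz(state, sigma, rho, beta=8.0/3.0):
            x, y, z = state
            return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

        models = [dict(sigma=9.0, rho=27.0), dict(sigma=11.0, rho=29.0)]  # imperfect
        weights = np.array([0.5, 0.5])          # fixed here; trainable in SUMO

        state = np.array([1.0, 1.0, 20.0])
        dt = 0.01
        for _ in range(5000):
            tendencies = np.array([lorenz(state, **m) for m in models])
            state = state + dt * weights @ tendencies   # one step, averaged tendency
        print("final state:", state)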

  15. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes are extracted from the multi-degree-of-freedom (MDOF) non-parametric system representation in the first step with the use of time-frequency wavelet-based filters. The second step involves a time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using a system identification analysis based on an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information on the model.

  16. Multi-modeling assessment of recent changes in groundwater resource: application to the semi-arid Haouz plain (Central Morocco)

    NASA Astrophysics Data System (ADS)

    Fakir, Younes; Brahim, Berjamy; Page Michel, Le; Fathallah, Sghrer; Houda, Nassah; Lionel, Jarlan; Raki Salah, Er; Vincent, Simonneaux; Said, Khabba

    2015-04-01

    The Haouz plain (6000 km2) is a part of the Tensift basin located in Central Morocco. The plain has a semi-arid climate (250 mm/y of rainfall) and is bordered in the south by the High Atlas mountains. Because the plain is highly anthropized, the water resources face heavy demands from various competing sectors, including agriculture (over 273000 ha of irrigated areas) and water supply for more than 2 million inhabitants and about 2 million tourists annually. Consequently the groundwater is being depleted over a large area of the plain, with problems of water scarcity which pose serious threats to water supplies and to sustainable development. The groundwater in the Haouz plain was previously modeled with MODFLOW (the USGS groundwater numerical model) at annual time steps. In the present study a multi-modeling approach is applied. The aim is to enhance the evaluation of groundwater pumping for irrigation, one of the most difficult quantities to estimate, and to improve the water balance assessment. For this purpose, two other models were added: SAMIR (satellite estimation of agricultural water demand) and WEAP (integrated water resources planning). The three models are implemented at a monthly time step and calibrated over the 2001-2011 period, corresponding to 120 time steps. This multi-modeling allows assessing the evolution of water resources both in time and space. The results show deep changes during the last years which affect water resources generally and groundwater particularly. These changes are induced by remarkable urban development, a succession of droughts, intensive agricultural activity and weak management of irrigation and water resources. Some indicators of these changes are as follows: (i) the groundwater table decline varies between 1 and 3 m/year, (ii) the groundwater depletion during the last ten years is equivalent to 50% of the reserves lost over 40 years, (iii) the annual groundwater deficit is about 100 hm3, (iv) the renewable water resources per capita are around 500 m3/year, (v) agriculture takes 80% of the total water demand, and (vi) the net consumptive use of groundwater by agriculture represents 55% of the total water consumed by agriculture. Consequently a strategy for sustainable water management is a pressing concern. In this context, the multi-modeling system is expected to serve as a decision support system for present and future water resources management alternatives in the Haouz plain.

  17. Multi-criteria objective based climate change impact assessment for multi-purpose multi-reservoir systems

    NASA Astrophysics Data System (ADS)

    Müller, Ruben; Schütze, Niels

    2014-05-01

    Water resources systems with reservoirs are expected to be sensitive to climate change. Assessment studies that analyze the impact of climate change on the performance of reservoirs can be divided into two groups: (1) studies that simulate the operation under projected inflows with the current set of operational rules; because the operational rules are not adapted, the future performance of these reservoirs can be underestimated and the impact overestimated; and (2) studies that optimize the operational rules to best adapt the system to the projected conditions before assessing the impact. The latter allows future performance to be estimated more realistically, and adaption strategies based on new operation rules are available if required. Multi-purpose reservoirs serve various, often conflicting functions. If all functions cannot be served simultaneously at a maximum level, an effective compromise between the multiple objectives of the reservoir operation has to be found. Yet under climate change the historically preferred compromise may no longer be the most suitable compromise in the future. Therefore a multi-objective climate change impact assessment approach for multi-purpose multi-reservoir systems is proposed in this study. Projected inflows are provided in a first step using a physically based rainfall-runoff model. In a second step, a time series model is applied to generate long-term inflow time series. Finally, the long-term inflow series are used as driving variables for a simulation-based multi-objective optimization of the reservoir system in order to derive optimal operation rules. As a result, the adapted Pareto-optimal set of diverse best-compromise solutions can be presented to the decision maker in order to assist in assessing climate change adaption measures with respect to the future performance of the multi-purpose reservoir system. The approach is tested on a multi-purpose multi-reservoir system in a mountainous catchment in Germany. A climate change assessment is performed for climate change scenarios based on the SRES emission scenarios A1B, B1 and A2 for a set of statistically downscaled meteorological data. The future performance of the multi-purpose multi-reservoir system is quantified, and possible intensifications of trade-offs between management goals or reservoir utilizations are shown.

  18. Seismic signal time-frequency analysis based on multi-directional window using greedy strategy

    NASA Astrophysics Data System (ADS)

    Chen, Yingpin; Peng, Zhenming; Cheng, Zhuyuan; Tian, Lin

    2017-08-01

    Wigner-Ville distribution (WVD) is an important time-frequency analysis technique with a highly concentrated energy distribution, used in seismic signal processing. However, it is affected by many cross terms. To suppress the cross terms of the WVD while keeping its concentrated high-energy distribution, an adaptive multi-directional filtering window in the ambiguity domain is proposed. Starting from the relationship between the Cohen class distribution and the Gabor transform, and combining the greedy strategy with the rotational invariance property of the fractional Fourier transform, we propose the multi-directional window, which extends the one-dimensional, one-directional optimal window function of the optimal fractional Gabor transform (OFrGT) to a two-dimensional, multi-directional window in the ambiguity domain. In this way, the multi-directional window matches the main auto terms of the WVD more precisely. Using the greedy strategy, the proposed window takes into account the optimal and other suboptimal directions, which also solves the problem of the OFrGT, called the local concentration phenomenon, encountered with multi-component signals. Experiments on both signal models and real seismic signals reveal that the proposed window can overcome the drawbacks of the WVD and the OFrGT mentioned above. Finally, the proposed method is applied to a seismic signal's spectral decomposition. The results show that the proposed method can explore the spatial distribution of a reservoir more precisely.
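
    To make the object being filtered concrete, here is a minimal discrete Wigner-Ville distribution; the adaptive multi-directional ambiguity-domain window itself is beyond this sketch.

        # Minimal discrete Wigner-Ville distribution of a real signal.
        import numpy as np
        from scipy.signal import hilbert

        def wvd(signal):
            x = hilbert(signal)          # analytic signal reduces some cross terms
            N = len(x)
            W = np.zeros((N, N))
            for n in range(N):
                m = min(n, N - 1 - n)
                tau = np.arange(-m, m + 1)
                r = x[n + tau] * np.conj(x[n - tau])   # instantaneous autocorrelation
                row = np.zeros(N, dtype=complex)
                row[tau % N] = r
                W[:, n] = np.fft.fft(row).real          # FFT over the lag variable
            return W

        t = np.linspace(0, 1, 256, endpoint=False)
        chirp = np.cos(2 * np.pi * (20 * t + 40 * t**2))   # linear chirp test signal
        tfr = wvd(chirp)
        print(tfr.shape)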

  19. Evaluating the Magnitude and Duration of Cold Load Pick-up on Residential Distribution Feeders Using Multi-State Load Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin P.; Sortomme, Eric; Venkata, S. S.

    The increased level of demand that is associated with the restoration of service after an outage, Cold Load Pick-Up (CLPU), can be significantly higher than pre-outage levels, even exceeding the normal distribution feeder peak demand. These high levels of demand can delay restoration efforts and in extreme cases damage equipment. The negative impacts of CLPU can be mitigated with strategies that restore the feeder in sections, minimizing the load current. The challenge for utilities is to manage the current level on critical equipment while minimizing the time to restore service to all customers. Accurately modeling CLPU events is the first step in developing improved restoration strategies that minimize restoration times. This paper presents a new method for evaluating the magnitude of the CLPU peak, and its duration, using multi-state load models. The use of multi-state load models allows for a more accurate representation of the end-use loads that are present on residential distribution feeders.

  20. Transient multi-physics analysis of a magnetorheological shock absorber with the inverse Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong

    2015-10-01

    This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.
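
    For orientation, the forward Jiles-Atherton model (of which the paper implements the inverse, flux-driven form) is commonly written with an effective field H_e, an anhysteretic magnetization M_an, and irreversible/reversible components. This is the standard formulation, not necessarily the exact variant used in the paper:

        \begin{aligned}
        H_e &= H + \alpha M, \qquad
        M_{an}(H_e) = M_s\left[\coth\!\left(\frac{H_e}{a}\right) - \frac{a}{H_e}\right],\\
        \frac{dM_{irr}}{dH} &= \frac{M_{an} - M_{irr}}{k\,\delta - \alpha\,(M_{an} - M_{irr})}, \qquad
        M = c\,M_{an} + (1 - c)\,M_{irr},
        \end{aligned}

    with δ = ±1 tracking the sign of dH/dt. The inverse form takes the flux density B as the independent variable, which suits finite element formulations where B is computed from the vector potential.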

  1. Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models

    NASA Astrophysics Data System (ADS)

    Shen, C.; Xia, J.; Mi, B.

    2016-12-01

    A successful inversion relies on exact forward modeling methods. It is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods to accurately calculate multi-mode dispersion curves for a given model. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum that is generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. the Thomson-Haskell algorithm, the Knopoff algorithm, the fast vector-transfer algorithm and so on) fail to be consistent with the dispersion spectrum in the high-frequency range. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer. This phenomenon conflicts with the characteristics of surface waves, and results in an erroneous inverted model. By comparing the theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution to accurately compute surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward modeling technique, we can achieve correct inversion for these types of models. Several synthetic examples proved the effectiveness of our method.

  2. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
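
    The linear-algebra skeleton of the separation idea can be sketched as follows, under strong simplifications: the measured (mixed) pressure is modeled as p = [G1 G2][q1; q2], the stacked equivalent-source strengths are solved in a least-squares sense, and the field of one source alone is reconstructed from its own strengths. The transfer matrices below are random placeholders for the actual time-domain propagators, and the iterative ITDESM solution is not reproduced.

        # Least-squares skeleton of equivalent-source-based separation.
        import numpy as np

        rng = np.random.default_rng(3)
        n_mics, n_src1, n_src2 = 64, 20, 20
        G1 = rng.normal(size=(n_mics, n_src1))   # equivalent sources on source 1
        G2 = rng.normal(size=(n_mics, n_src2))   # equivalent sources on source 2
        q_true = rng.normal(size=n_src1 + n_src2)
        p_mixed = np.hstack([G1, G2]) @ q_true + 0.01 * rng.normal(size=n_mics)

        q, *_ = np.linalg.lstsq(np.hstack([G1, G2]), p_mixed, rcond=None)
        p_source1 = G1 @ q[:n_src1]              # pressure attributed to source 1 alone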

  3. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis in the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO, which employs FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example on the smoothed Marmousi model.
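
    A basic global-best PSO iteration, the optimizer that MLPSO builds on with multiple fidelity levels; the toy Rastrigin objective stands in for the FGA misfit, and all constants are illustrative.

        # Basic particle swarm optimization on a toy non-convex objective.
        import numpy as np

        def objective(x):                       # stand-in for the FGA misfit
            return np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10, axis=-1)   # Rastrigin

        rng = np.random.default_rng(4)
        n_particles, dim = 30, 2
        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), objective(x)
        gbest = pbest[np.argmin(pbest_val)]

        w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
        for _ in range(200):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
            x = x + v
            val = objective(x)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]
        print("best found:", gbest, objective(gbest))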

  4. Multi-step high-throughput conjugation platform for the development of antibody-drug conjugates.

    PubMed

    Andris, Sebastian; Wendeler, Michaela; Wang, Xiangyang; Hubbuch, Jürgen

    2018-07-20

    Antibody-drug conjugates (ADCs) form a rapidly growing class of biopharmaceuticals which attracts a lot of attention throughout the industry due to its high potential for cancer therapy. They combine the specificity of a monoclonal antibody (mAb) and the cell-killing capacity of highly cytotoxic small molecule drugs. Site-specific conjugation approaches involve a multi-step process for covalent linkage of antibody and drug via a linker. Despite the range of parameters that have to be investigated, high-throughput methods are scarcely used so far in ADC development. In this work an automated high-throughput platform for a site-specific multi-step conjugation process on a liquid-handling station is presented by use of a model conjugation system. A high-throughput solid-phase buffer exchange was successfully incorporated for reagent removal by utilization of a batch cation exchange step. To ensure accurate screening of conjugation parameters, an intermediate UV/Vis-based concentration determination was established including feedback to the process. For conjugate characterization, a high-throughput compatible reversed-phase chromatography method with a runtime of 7 min and no sample preparation was developed. Two case studies illustrate the efficient use for mapping the operating space of a conjugation process. Due to the degree of automation and parallelization, the platform is capable of significantly reducing process development efforts and material demands and shorten development timelines for antibody-drug conjugates. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Progress in multi-dimensional upwind differencing

    NASA Technical Reports Server (NTRS)

    Vanleer, Bram

    1992-01-01

    Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
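
    The review's one-dimensional starting point is compact enough to show directly: first-order upwind differencing for u_t + a u_x = 0 with a > 0, differencing toward the side the information comes from. A minimal sketch:

        # First-order upwind scheme for 1D linear convection (a > 0).
        import numpy as np

        nx, a, cfl = 100, 1.0, 0.8
        dx = 1.0 / nx
        dt = cfl * dx / a
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200 * (x - 0.25) ** 2)        # initial Gaussian pulse

        for _ in range(50):
            # information travels from the left, so difference to the left (upwind)
            u = u - a * dt / dx * (u - np.roll(u, 1))   # periodic domain

    In more than one dimension, the analogous finite-volume construction assumes waves propagate normal to cell faces, which is exactly the source of the grid-alignment smearing the review discusses.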

  6. Shadowing effects on multi-step Langmuir probe array on HL-2A tokamak

    NASA Astrophysics Data System (ADS)

    Ke, R.; Xu, M.; Nie, L.; Gao, Z.; Wu, Y.; Yuan, B.; Chen, J.; Song, X.; Yan, L.; Duan, X.

    2018-05-01

    Multi-step Langmuir probe arrays have been designed and installed on the HL-2A tokamak [1]-[2] to study turbulent transport in the edge plasma, especially for the measurement of the poloidal momentum flux (Reynolds stress, Rs). However, except for the probe tips on the top step, all tips on lower steps are shadowed by the graphite skeleton. It is necessary to estimate the shadowing effects on equilibrium and fluctuation measurements. In this paper, a comparison of shadowed tips to unshadowed ones is presented. The results show that shadowing can strongly reduce the ion and electron effective collection areas. However, its effect is negligible for turbulence intensity and coherence measurements, confirming that the multi-step LP array is suitable for turbulent transport measurement.

  7. Fuzzy Edge Connectivity of Graphical Fuzzy State Space Model in Multi-connected System

    NASA Astrophysics Data System (ADS)

    Harish, Noor Ainy; Ismail, Razidah; Ahmad, Tahir

    2010-11-01

    Structured networks of interacting components illustrate complex structure in a direct or intuitive way. Graph theory provides mathematical modeling for studying interconnection among elements in natural and man-made systems. On the other hand, a directed graph is useful to define and interpret the interconnection structure underlying the dynamics of the interacting subsystems. Fuzzy theory provides important tools for dealing with various aspects of complexity, imprecision and fuzziness of the network structure of a multi-connected system. Initial developments of the Fuzzy State Space Model (FSSM) and a fuzzy algorithm approach were introduced with the purpose of solving inverse problems in multivariable systems. In this paper, the fuzzy algorithm is adapted in order to determine the fuzzy edge connectivity between subsystems, in particular for interconnected systems in the graphical representation of FSSM. This new approach simplifies the schematic diagram of the interconnection of subsystems in a multi-connected system.

  8. Self-tuning multivariable pole placement control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.

    1992-01-01

    This paper presents the design and implementation of a multivariable self-tuning temperature controller for the control of lead bromide crystal growth. The crystal grows inside a multizone transparent furnace. There are eight interacting heating zones shaping the axial temperature distribution inside the furnace. A multi-input, multi-output furnace model is identified on-line by a recursive least squares estimation algorithm. A multivariable pole placement controller based on this model is derived and implemented. Comparison between single-input, single-output and multi-input, multi-output self-tuning controllers demonstrates that the zone-to-zone interactions can be better minimized by a multi-input, multi-output controller design. This directly affects the quality of the grown crystal.
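
    A minimal recursive least squares (RLS) identification loop of the kind used above, reduced to a single-input, single-output ARX model for clarity (the furnace model in the paper is MIMO); all numbers are illustrative.

        # RLS identification of y[k] = -a1*y[k-1] + b1*u[k-1] + e[k].
        import numpy as np

        rng = np.random.default_rng(5)
        a1_true, b1_true = -0.9, 0.5
        N = 500
        u = rng.normal(size=N)
        y = np.zeros(N)
        for k in range(1, N):
            y[k] = -a1_true * y[k-1] + b1_true * u[k-1] + 0.01 * rng.normal()

        theta = np.zeros(2)                  # estimated [a1, b1]
        P = 1000.0 * np.eye(2)               # large initial covariance
        lam = 0.99                           # forgetting factor
        for k in range(1, N):
            phi = np.array([-y[k-1], u[k-1]])
            K = P @ phi / (lam + phi @ P @ phi)
            theta = theta + K * (y[k] - phi @ theta)
            P = (P - np.outer(K, phi @ P)) / lam
        print("estimated [a1, b1]:", theta)   # should approach [-0.9, 0.5]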

  9. Ultrasonic inspection of rocket fuel model using laminated transducer and multi-channel step pulser

    NASA Astrophysics Data System (ADS)

    Mihara, T.; Hamajima, T.; Tashiro, H.; Sato, A.

    2013-01-01

    Industrial ultrasonic inspection of the solid-fuel packing in a rocket booster is difficult, because the signal-to-noise ratio becomes poor due to the large attenuation, even when lower-frequency ultrasound is used. To improve on this problem, we applied two techniques in ultrasonic inspection: a step-function pulser system with super-wideband frequency properties, and a laminated-element transducer. By combining these two techniques, we developed a new ultrasonic measurement system and demonstrated its advantages in the ultrasonic inspection of a rocket fuel model specimen.

  10. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    PubMed

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations, which further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1.
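
    The architectural idea can be sketched in a few lines; the following is an illustrative PyTorch module in the spirit of an M-CNN (parallel branches over down-sampled copies of the input), with layer sizes, scales and class count chosen arbitrarily rather than taken from the published network.

```python
import torch
import torch.nn as nn

# Illustrative multi-scale CNN: the same image is processed at several
# down-sampled scales in parallel; per-scale features are globally pooled
# and concatenated before the classification layer.
class MultiScaleCNN(nn.Module):
    def __init__(self, in_ch=1, n_classes=8, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),          # global pooling per scale
            ) for _ in scales
        ])
        self.fc = nn.Linear(32 * len(scales), n_classes)

    def forward(self, x):
        feats = []
        for s, branch in zip(self.scales, self.branches):
            xs = nn.functional.avg_pool2d(x, s) if s > 1 else x
            feats.append(branch(xs).flatten(1))
        return self.fc(torch.cat(feats, dim=1))  # logits; softmax gives phenotype probabilities
```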

  11. Standardized residual as response function for order identification of multi input intervention analysis

    NASA Astrophysics Data System (ADS)

    Suhartono; Lee, Muhammad Hisyam; Rezeki, Sri

    2017-05-01

    Intervention analysis is a statistical model in the group of time series analysis which is widely used to describe the effect of an intervention caused by external or internal factors. An example of external factors that often occur in Indonesia is disasters, both natural and man-made. The main purpose of this paper is to provide the results of theoretical studies on the identification step for determining the order of a multi-input intervention analysis, used for evaluating the magnitude and duration of the impact of interventions on time series data. The theoretical results show that the standardized residuals can be used properly as a response function for determining the order of a multi-input intervention model. These results are then applied to evaluate the impact of a disaster in a real case in Indonesia, i.e. the magnitude and duration of the impact of the Lapindo mud on the volume of vehicles on the highway. Moreover, the empirical results show that the multi-input intervention model can accurately describe and explain the magnitude and duration of the impact of disasters on time series data.

  12. The Madden-Julian Oscillation in the NCAR Community Earth System Model Coupled Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Anderson, J. L.; Moncrieff, M.; Collins, N.; Danabasoglu, G.; Hoar, T.; Karspeck, A. R.; Neale, R. B.; Raeder, K.; Tribbia, J. J.

    2014-12-01

    We present a quantitative evaluation of the simulated MJO in analyses produced with a coupled data assimilation (CDA) framework developed at the National Center for Atmospheric Research. This system is based on the Community Earth System Model (CESM; previously known as the Community Climate System Model, CCSM) interfaced to a community facility for ensemble data assimilation (Data Assimilation Research Testbed, DART). The system (multi-component CDA) assimilates data into each of the respective ocean/atmosphere/land model components during the assimilation step, followed by an exchange of information between the model components during the forecast step. Note that this is an advancement over many existing prototypes of coupled data assimilation systems, which typically assimilate observations only in one of the model components (i.e., single-component CDA). The more realistic treatment of air-sea interactions and improvements to the model mean state in the multi-component CDA recover many aspects of MJO representation, from its space-time structure and propagation (see Figure 1) to the governing relationships between precipitation and sea surface temperature on intra-seasonal scales. Standard qualitative and process-based diagnostics identified by the MJO Task Force (currently under the auspices of the Working Group on Numerical Experimentation) have been used to detect the MJO signals across a suite of coupled model experiments involving both multi-component and single-component DA experiments as well as a free run of the coupled CESM model (i.e., CMIP5 style without data assimilation). Short predictability experiments during the boreal winter are used to demonstrate that the decay rates of the MJO convective anomalies are slower in the multi-component CDA system, which allows it to retain the MJO dynamics for a longer period. We anticipate that the knowledge gained through this study will enhance our understanding of the MJO feedback mechanisms across the air-sea interface, especially regarding ocean impacts on the MJO, and will highlight the capability of coupled data assimilation systems for related tropical intraseasonal variability predictions.

  13. Experimental parameter identification of a multi-scale musculoskeletal model controlled by electrical stimulation: application to patients with spinal cord injury.

    PubMed

    Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David

    2013-06-01

    We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by the quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. We then applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of the quadriceps. Each identification step, and cross validation of the estimated model in dynamic conditions, was evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol, and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of the parameters in order to select the parameters that have to be identified in each patient.

  14. Bayesian Framework Approach for Prognostic Studies in Electrolytic Capacitor under Thermal Overstress Conditions

    DTIC Science & Technology

    2012-09-01

    ...to make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first-principles degradation models, combining degradation modeling, parameter estimation and prediction in a state-space model under thermal/electrical stress, supported by experimental data. The state distribution estimated at a given single time point is then used for multi-step predictions to EOL; several methods exist for selecting the sigma points.
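
    To make the multi-step prediction to EOL concrete, the following sketch propagates an uncertain degradation state forward by Monte Carlo sampling until a failure threshold is crossed, yielding an RUL distribution; the report itself uses a sigma-point/Bayesian filtering formulation, and every numerical value here is an illustrative assumption.

```python
import numpy as np

# Monte Carlo multi-step propagation of a degradation state to an EOL
# threshold; the first-passage time of each sample gives one RUL draw.
rng = np.random.default_rng(0)
n_samples, eol_threshold, max_steps = 5000, 1.0, 2000
state = rng.normal(0.4, 0.02, n_samples)   # current degradation estimate
rate = rng.normal(1e-3, 2e-4, n_samples)   # uncertain degradation rate

rul = np.full(n_samples, max_steps, dtype=float)
alive = np.ones(n_samples, dtype=bool)
for k in range(max_steps):
    state[alive] += rate[alive] + rng.normal(0, 1e-4, alive.sum())  # process noise
    crossed = alive & (state >= eol_threshold)
    rul[crossed] = k + 1                   # first passage time = RUL
    alive &= ~crossed
print(np.percentile(rul, [5, 50, 95]))     # RUL credible interval
```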

  15. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fractionation schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A Support Vector Machine (SVM) is used as the predictive model, while the nondominated sorting genetic algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62% and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67% and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50% and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance is obtained by combining all features.

  16. Planning activity for internally generated reward goals in monkey amygdala neurons

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    The best rewards are often distant and can only be achieved by planning and decision-making over several steps. We designed a multi-step choice task in which monkeys followed internal plans to save rewards towards self-defined goals. During this self-controlled behavior, amygdala neurons showed future-oriented activity that reflected the animal's plan to obtain specific rewards several trials ahead. This prospective activity encoded crucial components of the animal's plan, including the value and length of the planned choice sequence. It began on initial trials when a plan would be formed, reappeared step-by-step until reward receipt, and readily updated with a new sequence. It predicted performance, including errors, and typically disappeared during instructed behavior. Such prospective activity could underlie the formation and pursuit of internal plans characteristic of goal-directed behavior. The existence of neuronal planning activity in the amygdala suggests an important role for this structure in guiding behavior towards internally generated, distant goals. PMID:25622146

  17. Restoring fish ecological quality in estuaries: Implication of interactive and cumulative effects among anthropogenic stressors.

    PubMed

    Teichert, Nils; Borja, Angel; Chust, Guillem; Uriarte, Ainhize; Lepage, Mario

    2016-01-15

    Estuaries are subjected to multiple anthropogenic stressors, which have additive, antagonistic or synergistic effects. Current challenges include the use of large databases of biological monitoring surveys (e.g. from the European Water Framework Directive) to help environmental managers prioritize restoration measures. This study investigated the impact of nine stressor categories on the fish ecological status derived from 90 estuaries of the North East Atlantic countries. We used a random forest model to: 1) detect the dominant stressors and their non-linear effects; 2) evaluate the ecological benefits expected from reducing pressure from stressors; and 3) investigate the interactions among stressors. Results showed that the largest restoration benefits were expected when mitigating water pollution and oxygen depletion. Non-additive effects represented half of the pairwise interactions among stressors, and antagonisms were the most common. Dredged sediments, flow changes and oxygen depletion were predominantly implicated in non-additive interactions, whereas the remaining stressors often showed additive impacts. The prevalence of interactive impacts reflects a complex scenario for estuary management; hence, we propose a step-by-step restoration scheme focusing on the mitigation of the stressors providing the maximum restoration benefit under a multi-stress context.
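
    The type of analysis described can be sketched with standard tooling; below is an assumed scikit-learn illustration on synthetic data (the stressor variables and their effects are invented placeholders, not the study's dataset), using permutation importance to rank dominant stressors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic example: a random forest relates stressor intensities to an
# ecological-status score; permutation importance ranks dominant stressors.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (90, 9))                  # 90 estuaries x 9 stressors
y = 1 - 0.5 * X[:, 0] - 0.3 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 90)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
print(np.argsort(imp.importances_mean)[::-1])   # dominant stressors first
```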

  18. Guiding gate-etch process development using 3D surface reaction modeling for 7nm and beyond

    NASA Astrophysics Data System (ADS)

    Dunn, Derren; Sporre, John R.; Deshpande, Vaibhav; Oulmane, Mohamed; Gull, Ronald; Ventzek, Peter; Ranjan, Alok

    2017-03-01

    Increasingly, advanced process nodes such as 7nm (N7) are fundamentally 3D and require stringent control of critical dimensions over high aspect ratio features. Process integration in these nodes requires a deep understanding of complex physical mechanisms to control critical dimensions from lithography through final etch. Polysilicon gate etch processes are critical steps in several device architectures for advanced nodes that rely on self-aligned patterning approaches to gate definition. These processes are required to meet several key metrics: (a) vertical etch profiles over high aspect ratios; (b) clean gate sidewalls free of etch process residue; (c) minimal erosion of liner oxide films protecting key architectural elements such as fins; and (d) residue-free corners at gate interfaces with critical device elements. In this study, we explore how hybrid modeling approaches can be used to model a multi-step finFET polysilicon gate etch process. Initial parts of the patterning process through hardmask assembly are modeled using process emulation. Important aspects of gate definition are then modeled using a particle Monte Carlo (PMC) feature-scale model that incorporates surface chemical reactions [1]. When necessary, species and energy flux inputs to the PMC model are derived from simulations of the etch chamber. The modeled polysilicon gate etch process consists of several steps, including a hard mask breakthrough step (BT), main feature etch steps (ME), and over-etch steps (OE) that control gate profiles at the gate-fin interface. An additional constraint on this etch flow is that fin spacer oxides are left intact after the final profile tuning steps. The natural optimization goal for these processes is to maximize vertical gate profiles while minimizing erosion of fin spacer films [2].

  19. Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seedahmed, Gamal H.

    2006-09-01

    Direct solutions are very attractive because they obviate the need for initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) establishes itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank deficient model. This rank deficient model leaves the DLT defined up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using homogeneous coordinates representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.
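
    A minimal sketch of the factorization idea, under the usual assumptions (calibrated camera matrix K, coplanar object points on the plane Z = 0, homography H estimated up to scale), is given below; this is an editorial illustration of the standard decomposition, not the paper's exact algorithm, which includes its own normalization process.

```python
import numpy as np

# For a calibrated camera and points on Z = 0, H ~ K [r1 r2 t]; a scaled
# factorization of inv(K) @ H therefore recovers rotation and translation.
def eops_from_homography(H, K):
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])       # normalization step (fixes scale)
    if lam * A[2, 2] < 0:
        lam = -lam                            # sign so the plane lies in front (t_z > 0)
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)               # project the estimate onto SO(3)
    return U @ Vt, t                          # rotation (EOP angles) and translation
```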

  20. Methodological study of affine transformations of gene expression data with proposed robust non-parametric multi-dimensional normalization method.

    PubMed

    Bengtsson, Henrik; Hössjer, Ola

    2006-03-01

    Low-level processing and normalization of microarray data are among the most important steps in microarray analysis, and have a profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail, and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization, are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result of this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels, either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and ensures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.

  1. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper, the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied, and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and is mounted on an Unmanned Aerial Vehicle (UAV). The results show that the accuracy of the 3D model generated from thermal images is comparable to that of a DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
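
    The tie-point step of the four-step pipeline can be sketched with standard tools; the following is an assumed OpenCV-based illustration (the frame file names are placeholders, and the calibration, bundle adjustment and dense matching steps are not shown).

```python
import cv2

# SIFT tie points between two extracted video frames, filtered with
# Lowe's ratio test; the surviving pairs would feed bundle adjustment.
img1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```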

  2. Direct fabrication of bio-inspired gecko-like geometries with vat polymerization additive manufacturing method

    NASA Astrophysics Data System (ADS)

    Davoudinejad, A.; Ribo, M. M.; Pedersen, D. B.; Islam, A.; Tosello, G.

    2018-08-01

    Functional surfaces have proven their potential to solve many engineering problems, attracting great interest in the scientific community. Bio-inspired multi-hierarchical micro-structures grant surfaces new properties, such as hydrophobicity, adhesion and unique optical properties. The geometry and fabrication of these surfaces are still under research. In this study, the feasibility of direct fabrication of microscale features by additive manufacturing (AM) processes was investigated. The investigation was carried out using a specifically designed vat photopolymerization AM machine-tool suitable for precision manufacturing at the micro dimensional scale, which had previously been developed, built and validated at the Technical University of Denmark. It was shown that it was possible to replicate a simplified surface inspired by the Tokay gecko; the geometry had previously been designed and fabricated by a complex multi-step micromanufacturing method reported in the literature, which was used as a benchmark. The smallest printed features were analyzed by conducting a sensitivity analysis to obtain suitable parameters in terms of layer thickness and exposure time. Moreover, two more intricate designs were fabricated with the same parameters to assess the surfaces' functionality via their wettability. The surface with increased density and decreased feature size showed a water contact angle (CA) of 124°  ±  0.10°, agreeing with the Cassie–Baxter model. These results indicate the possibility of using precision AM as a rapid, easy and reliable fabrication method for functional surfaces.
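
    For reference, the wetting model invoked here relates the apparent contact angle on a composite solid-air interface to the wetted solid fraction; a minimal statement of the relation follows (the study's specific fractions and Young angle are not reproduced here).

```latex
% Cassie-Baxter relation: \theta_{CB} is the apparent contact angle,
% f_s the area fraction of solid in contact with the liquid, and
% \theta_Y the Young contact angle on the smooth material.
\cos\theta_{CB} = f_s\,(\cos\theta_Y + 1) - 1
```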

  3. A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies

    PubMed Central

    Tang, Li

    2014-01-01

    An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this paper, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, e.g., subjects with mental disorders or neurodegenerative diseases such as Parkinson’s as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation. PMID:24033125

  4. Interactive Design Strategy for a Multi-Functional PAMAM Dendrimer-Based Nano-Therapeutic Using Computational Models and Experimental Analysis

    PubMed Central

    Lee, Inhan; Williams, Christopher R.; Athey, Brian D.; Baker, James R.

    2010-01-01

    Molecular dynamics simulations of nano-therapeutics as a final product and of all intermediates in the process of generating a multi-functional nano-therapeutic based on a poly(amidoamine) (PAMAM) dendrimer were performed along with chemical analyses of each of them. The actual structures of the dendrimers were predicted, based on potentiometric titration, gel permeation chromatography, and NMR. The chemical analyses determined the numbers of functional molecules, based on the actual structure of the dendrimer. Molecular dynamics simulations calculated the configurations of the intermediates and the radial distributions of functional molecules, based on their numbers. This interactive process between the simulation results and the chemical analyses provided a further strategy to design the next reaction steps and to gain insight into the products at each chemical reaction step. PMID:20700476

  5. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Methods are described for solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations, in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back values of the on-grid states and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in error at larger step sizes for the multi-off-grid integrator.
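
    For readers unfamiliar with multi-step integrators, the on-grid Adams-type baseline that the off-grid methods are compared against can be sketched as follows; this is an editorial illustration of a third-order Adams-Bashforth scheme, not the multi-off-grid predictor itself, and a production code would replace the Euler start-up with a higher-order starter.

```python
import numpy as np

def ab3(f, t0, y0, h, n_steps):
    """3rd-order Adams-Bashforth: y_{n+1} = y_n + h(23 f_n - 16 f_{n-1} + 5 f_{n-2})/12."""
    t = t0
    y = np.asarray(y0, dtype=float)
    fs = [f(t, y)]
    for _ in range(2):                    # Euler start-up to collect 3 derivatives
        y = y + h * fs[-1]
        t += h
        fs.append(f(t, y))
    ys = [y]
    for _ in range(n_steps - 2):          # multi-step phase: reuse back values of f
        y = y + h * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3]) / 12.0
        t += h
        fs.append(f(t, y))
        ys.append(y)
    return np.array(ys)

# Example: decay equation y' = -y, exact solution exp(-t).
sol = ab3(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```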

  6. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
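
    For context, the VCCT referenced above estimates the strain energy release rate from nodal quantities at the delamination front; a standard mode II form (an editorial note, not an equation quoted from the paper) uses the crack-tip shear force and the relative sliding displacement of the node pair just behind the tip:

```latex
% G_II from VCCT: F_x = shear force at the crack-tip node, \Delta u_x =
% relative sliding displacement one element behind the tip, \Delta a =
% element length at the front, b = width of the element strip.
G_{II} = \frac{F_x \,\Delta u_x}{2\,\Delta a\, b}
```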

  7. Focused-electron-beam-induced processing (FEBIP) for emerging applications in carbon nanoelectronics

    NASA Astrophysics Data System (ADS)

    Fedorov, Andrei G.; Kim, Songkil; Henry, Mathias; Kulkarni, Dhaval; Tsukruk, Vladimir V.

    2014-12-01

    Focused-electron-beam-induced processing (FEBIP), a resist-free additive nanomanufacturing technique, is an actively researched method for "direct-write" processing of a wide range of structural and functional nanomaterials, with a high degree of spatial and time-domain control. This article attempts to critically assess the FEBIP capabilities and unique value proposition in the context of processing of electronics materials, with a particular emphasis on emerging carbon (i.e., based on graphene and carbon nanotubes) devices and interconnect structures. One of the major hurdles in advancing carbon-based electronic materials and device fabrication is the disjoint nature of the various processing steps involved in making a functional device from the precursor graphene/CNT materials. Not only does this multi-step sequence severely limit throughput and increase cost, it also dramatically reduces processing reproducibility and degrades quality because of possible between-step contamination, especially for impurity-susceptible materials such as graphene. FEBIP provides a unique opportunity to address many challenges of carbon nanoelectronics, especially when it is employed as part of an integrated processing environment based on multiple "beams" of energetic particles, including electrons, photons, and molecules. This avenue is promising from the applications perspective, as such a multi-functional (electron/photon/molecule) beam tool enables one to define shapes (patterning), form structures (deposition/etching), and modify properties (cleaning/doping/annealing) with locally resolved control at the nanoscale, using the same tool and without ever changing the processing environment. It will thus have a direct positive impact on enhancing functionality, improving quality and reducing fabrication costs for electronic devices based on both conventional CMOS and emerging carbon (CNT/graphene) materials.

  8. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  9. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large-signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors, using measurement-based behavioural modelling techniques. The steps for accurate large- and small-signal modelling of transistors are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effects found in some variants of HEMT devices. A hybrid Newton's-genetic algorithm is used to determine the unknown parameters of the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large-signal model is constructed using multi-bias s-parameter measurements. The complete model is obtained by using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.

  10. Evolution of atomic structure during nanoparticle formation

    DOE PAGES

    Tyrsted, Christoffer; Lock, Nina; Jensen, Kirsten M. Ø.; ...

    2014-04-14

    Understanding the mechanism of nanoparticle formation during synthesis is a key prerequisite for the rational design and engineering of desirable materials properties, yet remains elusive due to the difficulty of studying structures at the nanoscale under real conditions. Here, the first comprehensive structural description of the formation of a nanoparticle, yttria-stabilized zirconia (YSZ), all the way from its ionic constituents in solution to the final crystal, is presented. The transformation is a complicated multi-step sequence of atomic reorganizations as the material follows the reaction pathway towards the equilibrium product. Prior to nanoparticle nucleation, reagents reorganize into polymeric species whose structure is incompatible with the final product. Instead of direct nucleation of clusters into the final product lattice, a highly disordered intermediate precipitate forms with a local bonding environment similar to the product yet lacking the correct topology. During maturation, bond reforming occurs by nucleation and growth of distinct domains within the amorphous intermediary. The present study moves beyond kinetic modeling by providing detailed real-time structural insight, and it is demonstrated that YSZ nanoparticle formation and growth is a more complex chemical process than accounted for in conventional models. This level of mechanistic understanding of the nanoparticle formation is the first step towards more rational control over nanoparticle synthesis through control of both solution precursors and reaction intermediaries.

  11. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution, of correlated move step-lengths within a flight, and of two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
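
    To illustrate the proposed walk model, the following simulation alternates between a directional mode (small turning angles, longer exponential steps) and a re-orientation mode (broad turning angles, shorter steps); all parameter values are illustrative assumptions, not the fitted values for the MCF-10A cells.

```python
import numpy as np

# Bimodal correlated random walk: two modes with different turn-angle
# spreads and exponential step-length scales, with random mode switching.
rng = np.random.default_rng(2)
modes = {0: dict(mean_step=1.0, turn_sd=0.2),   # directional phase
         1: dict(mean_step=0.2, turn_sd=1.5)}   # re-orientation phase
switch_p = {0: 0.1, 1: 0.3}                     # per-step switching probabilities

pos, theta, mode = np.zeros(2), 0.0, 0
track = [pos.copy()]
for _ in range(1000):
    m = modes[mode]
    theta += rng.normal(0.0, m["turn_sd"])          # correlated heading
    step = rng.exponential(m["mean_step"])          # exponential step lengths
    pos = pos + step * np.array([np.cos(theta), np.sin(theta)])
    track.append(pos.copy())
    if rng.random() < switch_p[mode]:
        mode = 1 - mode                             # switch behavioural mode
track = np.array(track)                             # trajectory for MSD analysis
```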

  12. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and is used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions-of-interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
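
    The iterative framework described can be summarized as a short skeleton; the physics-based scatter model and the reconstruction operator are left as placeholder callables here, since the paper's analytical model is not reproduced.

```python
import numpy as np

# Iterative scatter correction skeleton: estimate scatter from the current
# image, subtract it from the measured projections, reconstruct, repeat.
def correct_scatter(projections, reconstruct, scatter_model, n_iter=3):
    """reconstruct: projections -> image; scatter_model: image -> scatter estimate."""
    image = reconstruct(projections)                 # initial, scatter-contaminated
    for _ in range(n_iter):
        scatter = scatter_model(image)               # forward + cross scatter estimate
        corrected = np.clip(projections - scatter, 0, None)
        image = reconstruct(corrected)               # refines toward scatter-free image
    return image
```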

  13. Establishment of the Relationship between the Photochemical Reflectance Index and Canopy Light Use Efficiency Using Multi-angle Hyperspectral Observations

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Chen, Jing; Zhang, Yongguang; Qiu, Feng; Fan, Weiliang; Ju, Weimin

    2017-04-01

    The gross primary production (GPP) of terrestrial ecosystems constitutes the largest global land carbon flux and exhibits significant spatial and temporal variations. Due to its wide spatial coverage, remote sensing technology has been shown to be useful for improving the estimation of GPP in combination with light use efficiency (LUE) models. Accurate estimation of LUE is essential for calculating GPP using remote sensing data and LUE models at regional and global scales. A promising method for estimating LUE through remote sensing is the photochemical reflectance index (PRI = (R531-R570)/(R531 + R570), where R531 and R570 are the reflectances at wavelengths 531 and 570 nm). However, it has been documented that there are certain issues with PRI at the canopy scale, which need to be considered systematically. For this purpose, an improved tower-based automatic canopy multi-angle hyperspectral observation system has been operating at the Qianyanzhou flux station in China since January 2013. In each 15-minute observation cycle, PRI was observed at four view zenith angles fixed at the solar zenith angle and (37°, 47°, 57°) or (42°, 52°, 62°), in the azimuth angle range from 45° to 325° (defined from geodetic north). To improve the ability of directional PRI observations to track canopy LUE, the canopy is treated as two big leaves, i.e. sunlit and shaded leaves. On the basis of a geometrical optical model, the observed canopy reflectance for each view angle is separated into four components, i.e. sunlit and shaded leaves and sunlit and shaded backgrounds. To determine the fractions of these four components at each view angle, three models based on different theories are tested for simulating the fraction of sunlit leaves. Finally, a ratio of canopy reflectance to leaf reflectance is used to represent the fraction of sunlit leaves, and the fraction of shaded leaves is calculated with the four-scale geometrical optical model. Thus, sunlit and shaded PRI are estimated using least squares regression with the multi-angle observations. At both the half-hourly and daily time steps, the canopy-level two-leaf PRI (PRIt) effectively enhances (>50% and >35%, respectively) the correlation between PRI and LUE derived from the tower flux measurements, relative to the big-leaf PRI (PRIb) taken as the arithmetic average of the multi-angle measurements in a given time interval. PRIt is very effective in detecting low-to-moderate drought stress on LUE at half-hourly time steps, while ineffective in detecting severe atmospheric water and heat stresses, probably due to an alternative radiative energy sink, i.e. photorespiration. Overall, the two-leaf approach overcomes well some external effects (e.g. sun-target-view geometry) that interfere with PRI signals.
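
    The final estimation step can be written compactly: each angular observation mixes the sunlit and shaded leaf PRIs in proportion to their viewed fractions, and the two unknowns are recovered by least squares. The sketch below is a simplified two-component version (the study separates four components, and all numbers here are placeholders, not values from the paper).

```python
import numpy as np

# Each view angle i gives: pri_obs[i] = f_sun[i]*pri_sun + (1 - f_sun[i])*pri_shade,
# so the two leaf-class PRIs follow from least squares over the angles.
pri_obs = np.array([-0.021, -0.015, -0.010, -0.006])   # four view angles
f_sunlit = np.array([0.72, 0.55, 0.41, 0.30])          # sunlit-leaf fraction per angle
A = np.column_stack([f_sunlit, 1.0 - f_sunlit])        # mixing matrix [f_sun, f_shade]
(pri_sun, pri_shade), *_ = np.linalg.lstsq(A, pri_obs, rcond=None)
print(pri_sun, pri_shade)
```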

  14. Determination of optimum viewing angles for the angular normalization of land surface temperature over vegetated surface.

    PubMed

    Ren, Huazhong; Yan, Guangjian; Liu, Rongyuan; Li, Zhao-Liang; Qin, Qiming; Nerry, Françoise; Liu, Qiang

    2015-03-27

    Multi-angular observation of land surface thermal radiation is considered to be a promising method of performing the angular normalization of land surface temperature (LST) retrieved from remote sensing data. This paper focuses on an investigation of the minimum requirements of viewing angles to perform such normalizations on LST. The normally kernel-driven bi-directional reflectance distribution function (BRDF) is first extended to the thermal infrared (TIR) domain as TIR-BRDF model, and its uncertainty is shown to be less than 0.3 K when used to fit the hemispheric directional thermal radiation. A local optimum three-angle combination is found and verified using the TIR-BRDF model based on two patterns: the single-point pattern and the linear-array pattern. The TIR-BRDF is applied to an airborne multi-angular dataset to retrieve LST at nadir (Te-nadir) from different viewing directions, and the results show that this model can obtain reliable Te-nadir from 3 to 4 directional observations with large angle intervals, thus corresponding to large temperature angular variations. The Te-nadir is generally larger than temperature of the slant direction, with a difference of approximately 0.5~2.0 K for vegetated pixels and up to several Kelvins for non-vegetated pixels. The findings of this paper will facilitate the future development of multi-angular thermal infrared sensors.

  16. Convergence and Extrusion Are Required for Normal Fusion of the Mammalian Secondary Palate

    PubMed Central

    Kim, Seungil; Lewis, Ace E.; Singh, Vivek; Ma, Xuefei; Adelstein, Robert; Bush, Jeffrey O.

    2015-01-01

    The fusion of two distinct prominences into one continuous structure is common during development and typically requires integration of two epithelia and subsequent removal of that intervening epithelium. Using confocal live imaging, we directly observed the cellular processes underlying tissue fusion, using the secondary palatal shelves as a model. We find that convergence of a multi-layered epithelium into a single-layer epithelium is an essential early step, driven by cell intercalation, and is concurrent with orthogonal cell displacement and epithelial cell extrusion. Functional studies in mice indicate that this process requires an actomyosin contractility pathway involving Rho kinase (ROCK) and myosin light chain kinase (MLCK), culminating in the activation of non-muscle myosin IIA (NMIIA). Together, these data indicate that actomyosin contractility drives cell intercalation and cell extrusion during palate fusion, and suggest a general mechanism for tissue fusion in development. PMID:25848986

  17. Structure and Function of the Mind bomb E3 ligase in the context of Notch Signal Transduction

    PubMed Central

    Guo, Bingqian; McMillan, Brian J.; Blacklow, Stephen C.

    2016-01-01

    The Notch signaling pathway has a critical role in cell fate determination and tissue homeostasis in a variety of different lineages. In the context of normal Notch signaling, the Notch receptor of the “signal-receiving” cell is activated in trans by a Notch ligand from a neighboring “signal-sending” cell. Genetic studies in several model organisms have established that ubiquitination of the Notch ligand, and its regulated endocytosis, is essential for transmission of this activation signal. In mammals, this ubiquitination step is dependent on the protein Mind bomb 1 (Mib1), a large multi-domain RING-type E3 ligase, and its direct interaction with the intracellular tails of Notch ligand molecules. Here, we discuss our current understanding of Mind bomb structure and mechanism in the context of Notch signaling and beyond. PMID:27285058

  18. Evaluating the Veterans Health Administration's Staffing Methodology Model: A Reliable Approach.

    PubMed

    Taylor, Beth; Yankey, Nicholas; Robinson, Claire; Annis, Ann; Haddock, Kathleen S; Alt-White, Anna; Krein, Sarah L; Sales, Anne

    2015-01-01

    All Veterans Health Administration facilities have been mandated to use a standardized method of determining appropriate direct-care staffing by nursing personnel. A multi-step process was designed to lead to projections of the full-time equivalent employees required for safe and effective care across all inpatient units. These projections were intended to develop appropriate budgets for each facility. While staffing levels can be increased, even in facilities subject to budget and personnel caps, doing so requires considerable commitment at all levels of the facility. This commitment must come from front-line nursing personnel through senior leadership, not only in nursing and patient care services, but throughout the hospital. Learning to interpret and rely on data requires a considerable shift in thinking for many facilities, which have relied on historical levels to budget for staffing, an approach that does not take into account the dynamic character of nursing units and patient need.

  19. The Faceted Discrete Growth and Phase Differentiation During the Directional Solidification of 20SiMnMo5 Steel

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoping; Li, Dianzhong

    2018-07-01

    The microstructures, segregation and cooling curve were investigated in the directional solidification of 20SiMnMo5 steel. The typical characteristic of faceted growth is identified. The microstructures within the single cellular and within the single dendritic arm, together with the segregation distribution that contradicts the cooling curve, verify discrete crystal growth at multiple scales. Not only the single cellular/dendritic arm but also the single martensite zone within the single cellular/dendritic arm is produced by the discrete growth. From the viewpoint of segregation, no basic domain following continuous growth has been revealed. Along with the multi-scale faceted discrete growth, phase differentiation happens for both the solid and the liquid. The differentiated liquid phases appear and evolve with different sizes, positions, compositions and durations. The physical mechanism for the faceted discrete growth is qualitatively established, based on the nucleation of new faceted steps induced by the composition gradient and temperature gradient.

  1. Masticatory biomechanics in the rabbit: a multi-body dynamics analysis.

    PubMed

    Watson, Peter J; Gröning, Flora; Curtis, Neil; Fitton, Laura C; Herrel, Anthony; McCormack, Steven W; Fagan, Michael J

    2014-10-06

    Multi-body dynamics is a powerful engineering tool which is becoming increasingly popular for the simulation and analysis of skull biomechanics. This paper presents the first application of multi-body dynamics to analyse the biomechanics of the rabbit skull. A model has been constructed through the combination of manual dissection and three-dimensional imaging techniques (magnetic resonance imaging and micro-computed tomography). Individual muscles are represented with multiple layers, thus more accurately modelling muscle fibres with complex lines of action. Model validity was sought through comparing experimentally measured maximum incisor bite forces with those predicted by the model. Simulations of molar biting highlighted the ability of the masticatory system to alter recruitment of two muscle groups, in order to generate shearing or crushing movements. Molar shearing is capable of processing a food bolus in all three orthogonal directions, whereas molar crushing and incisor biting are predominately directed vertically. Simulations also show that the masticatory system is adapted to process foods through several cycles with low muscle activations, presumably in order to prevent rapidly fatiguing fast fibres during repeated chewing cycles. Our study demonstrates the usefulness of a validated multi-body dynamics model for investigating feeding biomechanics in the rabbit, and shows the potential for complementing and eventually reducing in vivo experiments.

  3. Exploitation of Multi-beam Directional Antennas for a Wireless TDMA/FDD MAC

    NASA Astrophysics Data System (ADS)

    Atmaca, Sedat; Ceken, Celal; Erturk, Ismail

    2008-05-01

    The effects of multi-beam directional antennas on the performance of a new wireless TDMA/FDD MAC system are presented. Directional antennas intrinsically enable the development of SDMA systems and allow transmitting and receiving signals simultaneously in the same time slot. By employing a dynamic slot allocation table at a base station with 4- or 8-sector directional antennas and holding the wireless terminals' location information, a new SDMA/TDMA/FDD frame structure has been developed for wireless communications. The simulation studies, realized using OPNET Modeler, show that the proposed SDMA/TDMA/FDD system substantially increases the traditional TDMA/FDD system capacity and provides 1.37 to 4 times better mean delay results when the number of users is increased from 4 to 32 under the same load in the wireless network models.

  4. Polarization-interleave-multiplexed discrete multi-tone modulation with direct detection utilizing MIMO equalization.

    PubMed

    Zhou, Xian; Zhong, Kangping; Gao, Yuliang; Sui, Qi; Dong, Zhenghua; Yuan, Jinhui; Wang, Liang; Long, Keping; Lau, Alan Pak Tao; Lu, Chao

    2015-04-06

    Discrete multi-tone (DMT) modulation is an attractive modulation format for short-reach applications, making the best use of the available channel bandwidth and signal-to-noise ratio (SNR). In order to realize polarization-multiplexed DMT modulation with direct detection, we derive an analytical transmission model for dual polarizations with intensity modulation and direct detection (IM-DD) in this paper. Based on the model, we propose a novel polarization-interleave-multiplexed DMT modulation with direct detection (PIM-DMT-DD) transmission system, in which polarization de-multiplexing is achieved using a simple multiple-input-multiple-output (MIMO) equalizer, and the transmission performance is optimized over two distinct received polarization states to eliminate the singularity issue of MIMO demultiplexing algorithms. The feasibility and effectiveness of the proposed PIM-DMT-DD system are investigated via theoretical analyses and simulation studies.
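
    DMT suits IM-DD links because a Hermitian-symmetric subcarrier mapping makes the transmitted time-domain waveform real-valued. The sketch below (illustrative parameters, a single polarization, and no MIMO equalizer) shows this basic construction; it is not the paper's dual-polarization model.

    ```python
    import numpy as np

    # Minimal DMT sketch for an IM-DD link: QAM symbols are mapped onto
    # subcarriers with Hermitian symmetry so the IFFT output is real-valued
    # and can drive an intensity modulator. Parameters are illustrative.

    N = 64                                      # FFT size
    qam = np.array([1+1j, -1+1j, -1-1j, 1-1j])  # 4-QAM alphabet
    data = qam[np.random.randint(0, 4, N // 2 - 1)]

    X = np.zeros(N, dtype=complex)
    X[1:N // 2] = data                       # positive-frequency bins
    X[N // 2 + 1:] = np.conj(data[::-1])     # Hermitian-symmetric mirror
    x = np.fft.ifft(X)                       # time-domain DMT symbol

    assert np.allclose(x.imag, 0, atol=1e-12)  # real signal, as IM-DD requires
    ```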

  5. Formation of secondary inorganic aerosols by power plant emissions exhausted through cooling towers in Saxony.

    PubMed

    Hinneburg, Detlef; Renner, Eberhard; Wolke, Ralf

    2009-01-01

    The fraction of ambient PM10 that is due to the formation of secondary inorganic particulate sulfate and nitrate from the emissions of two large, brown-coal-fired power stations in Saxony (East Germany) is examined. The power stations are equipped with natural-draft cooling towers. The flue gases are piped directly into the cooling towers, thereby receiving an additionally intensified uplift. The exhausted gas-steam mixture contains the gases CO, CO2, NO, NO2, and SO2, the directly emitted primary particles, and, additionally, an excess of 'free' sulfate ions in water solution which, after the desulfurization steps, remain non-neutralized by cations. The precursor gases NO2 and SO2 are capable of forming nitric and sulfuric acid by several pathways. The acids can be neutralized by ammonia and generate secondary particulate matter by heterogeneous condensation on preexisting particles. The simulations are performed by a nested, multi-scale application of the online-coupled model system LM-MUSCAT. The Local Model (LM; since renamed COSMO) of the German Weather Service handles the meteorological processes, while the Multi-scale Atmospheric Transport Model (MUSCAT) covers the transport, the gas-phase chemistry, and the aerosol chemistry (thermodynamic ammonium-sulfate-nitrate-water system). The highest horizontal resolution, in the inner region of Saxony, is 0.7 km. One summer and one winter episode, each covering 5 weeks of the year 2002, are simulated twice, with the cooling tower emissions switched on and off, respectively. This procedure serves to identify the direct and indirect influences of the individual plumes on the formation and distribution of the secondary inorganic aerosols. Surface traces of the individual tower plumes can be located and distinguished, especially in the well-mixed boundary layer in daytime. At night, the plumes are decoupled from the surface. In no case does the resulting contribution of the cooling tower emissions to PM10 significantly exceed 15 microg m(-3) at the surface. These extreme values are obtained in narrow plumes under intense summer conditions, whereas situations with lower turbulence (night, winter) remain below this value. About 90% of the PM10 concentration in the plumes is secondarily formed sulfate, mainly ammonium sulfate, and about 10% originates from the primarily emitted particles. Under the assumptions made, ammonium nitrate plays a rather marginal role. The analyzed results depend on the specific emission data of power plants whose flue gas emissions are piped through the cooling towers. The emitted fraction of 'free' sulfate ions remaining in excess after the desulfurization steps plays an important role in the formation of secondary aerosols and therefore has to be measured carefully.

  6. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in surface status monitoring. This study evaluates four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could represent directional emissivity well, with an error of less than 0.002, and it was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.
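
    As an illustration of the kernel-driven idea, a directional emissivity can be expressed as a linear combination of an isotropic term and angular kernels, with the coefficients fitted by least squares. The kernels below are placeholder functions and the data are invented; they are not the specific kernels or measurements evaluated in the paper.

    ```python
    import numpy as np

    # Hedged sketch of a kernel-driven linear BRDF-style model:
    # eps(theta) ~ f_iso + f_vol * K_vol(theta) + f_geo * K_geo(theta).

    theta = np.radians([0, 10, 20, 30, 40, 50])     # view zenith angles
    K_vol = np.cos(theta) - 1.0                     # hypothetical volume kernel
    K_geo = np.sin(theta) ** 2                      # hypothetical geometric kernel
    eps_obs = np.array([0.975, 0.974, 0.972, 0.969, 0.965, 0.960])

    A = np.column_stack([np.ones_like(theta), K_vol, K_geo])
    coef, *_ = np.linalg.lstsq(A, eps_obs, rcond=None)
    f_iso, f_vol, f_geo = coef
    print(f"f_iso={f_iso:.4f}, f_vol={f_vol:.4f}, f_geo={f_geo:.4f}")
    ```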

  7. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    PubMed

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan, based on intrinsic differences in tracer pharmacokinetics, is challenging due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of the fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. The RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.
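
    To give a flavour of the kinetic-fitting step, the sketch below separates two staggered-injection tracers with a weighted nonlinear least squares fit. The single-exponential uptake terms are deliberately simplistic stand-ins for the compartment models and the RPS formulation used in the paper; all parameter values are invented.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative dual-tracer separation by weighted nonlinear least squares:
    # each tracer is a single-exponential uptake (a placeholder for a real
    # compartment model), with tracer 2 injected after a delay T.

    t = np.linspace(0, 60, 121)                       # minutes
    T = 20.0                                          # staggered injection delay

    def model(p, t):
        a1, k1, a2, k2 = p
        c1 = a1 * (1 - np.exp(-k1 * t))
        c2 = np.where(t > T, a2 * (1 - np.exp(-k2 * (t - T))), 0.0)
        return c1 + c2

    true_p = [1.0, 0.15, 0.6, 0.25]
    rng = np.random.default_rng(0)
    y = model(true_p, t) + rng.normal(0, 0.02, t.size)
    w = 1.0 / (0.02 ** 2)                             # uniform weights here

    res = least_squares(lambda p: np.sqrt(w) * (model(p, t) - y),
                        x0=[0.5, 0.1, 0.5, 0.1], bounds=(0, np.inf))
    print("estimated parameters:", res.x)
    ```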

  8. Optical pattern recognition algorithms on neural-logic equivalent models and demonstration of their prospects and possible implementations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.

    2001-03-01

    Historical information regarding the appearance and creation of the fundamentals of the algebra-logical apparatus 'equivalental algebra' for the description of neural-network paradigms and algorithms is considered; this apparatus unifies the theory of neural networks (NN), linear algebra, and generalized neurobiology, extended to the matrix case. A survey of 'equivalental models' of neural networks and associative memory is given, and new, modified matrix-tensor neuro-logic equivalental models (MTNLEMs) with double adaptive-equivalental weighing (DAEW) are proposed for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe the processes in NNs both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type, and the computation in NNs under the proposed MTNLEMs reduces to two-step and multi-step algorithms with step-by-step matrix-tensor procedures (for SNIR) and with procedures for defining space-dependent equivalental functions from two images (for SIR).

  9. Automatic segmentation of the liver using multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Hong, Helen; Chung, Jin Wook; Yoon, Young Ho

    2012-02-01

    We propose an effective technique for the extraction of the liver boundary based on multi-planar anatomy and a deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, to extract an optimal volume circumscribing the liver, the lower and side boundaries are defined by the positional information of the pelvis and ribs, and an upper boundary is defined by separating the lungs and heart from the CT images. Second, to extract an initial liver volume, the optimal liver volume is smoothed by anisotropic diffusion filtering and segmented using an adaptively selected threshold value. Third, to remove neighboring organs from the initial liver volume, morphological opening and connected-component labeling are applied to multiple planes. Finally, to refine the liver boundaries, a deformable surface model is applied to the posterior liver surface and to the left lobe missed in the previous step. A probability summation map is then generated by calculating regional information of the segmented liver in the coronal plane, which is used to restore inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage into neighboring organs in spite of varied liver shapes and ambiguous boundaries.
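
    Steps two and three of such a pipeline (smoothing, thresholding, and pruning neighboring structures by connected-component analysis) can be prototyped in a few lines. The sketch below uses a synthetic volume, a Gaussian filter in place of anisotropic diffusion, and a fixed threshold in place of the adaptive one, so it only mimics the structure of the method, not its specifics.

    ```python
    import numpy as np
    from scipy import ndimage

    # Rough sketch: smooth, threshold, and keep the largest connected
    # component as the initial organ mask. All data here are synthetic.

    rng = np.random.default_rng(0)
    vol = rng.normal(0, 10, (40, 64, 64))
    vol[10:30, 16:48, 16:48] += 120            # bright "liver" block

    smoothed = ndimage.gaussian_filter(vol, sigma=2)  # stand-in for anisotropic diffusion
    mask = smoothed > 60                              # stand-in for adaptive threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, list(range(1, n + 1)))
    liver = labels == (int(np.argmax(sizes)) + 1)     # largest component
    print("liver voxels:", int(liver.sum()))
    ```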

  10. Accurate quantum Z rotations with less magic

    NASA Astrophysics Data System (ADS)

    Landahl, Andrew; Cesare, Chris

    2013-03-01

    We present quantum protocols for executing arbitrarily accurate π/2^k rotations of a qubit about its Z axis. Unlike reduced instruction set computing (RISC) protocols, which use a two-step process of synthesizing high-fidelity ``magic'' states from which T = Z(π/4) gates can be teleported and then compiling a sequence of adaptive stabilizer operations and T gates to approximate Z(π/2^k), our complex instruction set computing (CISC) protocol distills magic states for the Z(π/2^k) gates directly. Replacing this two-step process with a single step results in substantial reductions in the number of gates needed. The key to our construction is a family of shortened quantum Reed-Muller codes of length 2^(k+2) - 1, whose distillation threshold shrinks with k but is greater than 0.85% for k <= 6. AJL and CC were supported in part by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
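
    In matrix form, the gates in question are diagonal phase rotations. The sketch below constructs Z(π/2^k) (up to the global-phase convention, which varies between references) and checks the standard composition T·T = S; it illustrates the gates only, not the distillation protocol itself.

    ```python
    import numpy as np

    # Sketch: Z(pi/2^k) rotation matrices and the relation T = Z(pi/4).
    # Convention: Z(a) = diag(exp(-i a/2), exp(+i a/2)), global phase dropped.
    def Z(angle):
        return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

    T = Z(np.pi / 4)
    S = Z(np.pi / 2)
    assert np.allclose(T @ T, S)          # two T gates compose to an S gate
    for k in range(2, 7):
        print(f"k={k}: Z(pi/2^{k}) diagonal =",
              np.round(np.diag(Z(np.pi / 2**k)), 4))
    ```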

  11. The Impact of Alcoholics Anonymous on other substance abuse related Twelve Step programs

    PubMed Central

    Laudet, Alexandre B.

    2008-01-01

    This chapter explores the influence of the AA model on self-help fellowships addressing problems of drug dependence. Fellowships that have adapted the 12-step recovery model to other substances of abuse are reviewed; next similarities and differences between AA and drug-recovery 12-step organizations are examined; finally, we present empirical findings on patterns of attendance and perceptions of AA and Narcotics Anonymous (NA) among polydrug dependent populations, many of whom are cross-addicted to alcohol. Future directions in 12-step research are noted in closing. PMID:19115764

  12. Measurement of intrahepatic pressure during radiofrequency ablation in porcine liver.

    PubMed

    Kawamoto, Chiaki; Yamauchi, Atsushi; Baba, Yoko; Kaneko, Keiko; Yakabi, Koji

    2010-04-01

    To identify the most effective procedures to avoid increased intrahepatic pressure during radiofrequency ablation, we evaluated different ablation methods. Laparotomy was performed in 19 pigs. Intrahepatic pressure was monitored using an invasive blood pressure monitor. Radiofrequency ablation was performed as follows: single-step standard ablation; single-step at 30 W; single-step at 70 W; 4-step at 30 W; 8-step at 30 W; 8-step at 70 W; and cooled-tip. The array was fully deployed in single-step methods. In the multi-step methods, the array was gradually deployed in four or eight steps. With the cooled-tip, ablation was performed by increasing output by 10 W/min, starting at 40 W. Intrahepatic pressure was as follows: single-step standard ablation, 154.5 +/- 30.9 mmHg; single-step at 30 W, 34.2 +/- 20.0 mmHg; single-step at 70 W, 46.7 +/- 24.3 mmHg; 4-step at 30 W, 42.3 +/- 17.9 mmHg; 8-step at 30 W, 24.1 +/- 18.2 mmHg; 8-step at 70 W, 47.5 +/- 31.5 mmHg; and cooled-tip, 114.5 +/- 16.6 mmHg. The radiofrequency ablation-induced area was spherical with single-step standard ablation, 4-step at 30 W, and 8-step at 30 W. Conversely, the ablated area was irregular with single-step at 30 W, single-step at 70 W, and 8-step at 70 W. The ablation time was significantly shorter for the multi-step method than for the single-step method. Increased intrahepatic pressure could be controlled using multi-step methods. From the shapes of the ablation area, 30-W 8-step expansions appear to be most suitable for radiofrequency ablation.

  13. Surface energy and surface stress on vicinals by revisiting the Shuttleworth relation

    NASA Astrophysics Data System (ADS)

    Hecquet, Pascal

    2018-04-01

    In 1998 [Surf. Sci. 412/413, 639 (1998)], we showed that the step stress on vicinals varies as 1/L, L being the distance between steps, while the inter-step interaction energy primarily follows a 1/L^2 law, from the well-known Marchenko-Parshin model. In this paper, we give a better understanding of the interaction term of the step stress. The step stress is calculated with respect to the nominal surface stress. Consequently, we calculate the diagonal surface stresses both in the vicinal system (x, y, z), where z is normal to the vicinal, and in the projected system (x, b, c), where b is normal to the nominal terrace. Moreover, we calculate the surface stresses using two methods: the first, called the 'Zero' method, works from the surface pressure forces; the second, called the 'One' method, homogeneously deforms the vicinal in a parallel direction, x or y, and calculates the surface energy excess proportional to the deformation. Using the 'One' method on the vicinal Cu(0 1 M), we find that the step deformations due to the applied deformation vary as 1/L, with the same factor for the tensor directions bb and cb and with twice that factor for the parallel direction yy. Due to the vanishing of the surface stress normal to the vicinal, the variation of the step stress in the direction yy is better described by using only the step deformation in the same direction. We revisit the Shuttleworth formula: while the variation of the step stress in the direction xx is the same for the two methods, the variation in the direction yy is higher by 76% for the 'Zero' method with respect to the 'One' method. In addition to the step energy, we confirm that the variation of the step stress must be taken into account to understand the equilibrium of vicinals when they are not deformed.

  14. Method for sequentially processing a multi-level interconnect circuit in a vacuum chamber

    NASA Technical Reports Server (NTRS)

    Routh, D. E.; Sharma, G. C. (Inventor)

    1982-01-01

    The processing of wafer devices to form multilevel interconnects for microelectronic circuits is described. The method is directed to performing the sequential steps of etching the via, removing the photo resist pattern, back sputtering the entire wafer surface and depositing the next layer of interconnect material under common vacuum conditions without exposure to atmospheric conditions. Apparatus for performing the method includes a vacuum system having a vacuum chamber in which wafers are processed on rotating turntables. The vacuum chamber is provided with an RF sputtering system and a DC magnetron sputtering system. A gas inlet is provided in the chamber for the introduction of various gases to the vacuum chamber and the creation of various gas plasma during the sputtering steps.

  15. A novel two-step procedure to expand Sca-1+ cells clonally

    PubMed Central

    Tang, Yao Liang; Shen, Leping; Qian, Keping; Phillips, M. Ian

    2007-01-01

    Resident cardiac stem cells (CSCs) are characterized by their capacity to self-renew in culture, and are multi-potent for forming normal cell types in hearts. CSCs were originally isolated directly from enzymatically digested hearts using stem cell markers. However, long exposure to enzymatic digestion can affect the integrity of stem cell markers on the cell surface, and also compromise stem cell function. Alternatively, resident CSCs can migrate from tissue explants and form cardiospheres in culture. However, fibroblast contamination can easily occur during CSC culture. To avoid these problems, we developed a two-step procedure: the cells are first expanded before the Sca-1+ cells are selected, and are then cultured in cardiac fibroblast-conditioned medium, which avoids fibroblast overgrowth. PMID:17577582

  16. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.

    Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  17. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    DOE PAGES

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; ...

    2016-04-25

    Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  18. Petascale computation of multi-physics seismic simulations

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.

    2017-04-01

    Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high frequency ground motion. The simulations combine a multitude of representations of model complexity, such as non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure, and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our simulations, reaching multiple PetaFLOPs, are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; usage of local time stepping; parallel input and output schemes; and direct interfaces to community standard data formats. All these factors help to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. Lastly, we conclude with an outlook on future exascale ADER-DG solvers for seismological applications.

  19. Regional Development Impacts Multi-Regional - Multi-Industry Model (MRMI) Users Manual,

    DTIC Science & Technology

    1982-09-01

    indicators, described in Chapter 2, are estimated as well. Finally, MRMI is flexible, as it can incorporate alternative macroeconomic, national inter...national and regional economic contexts and data sources for estimating macroeconomic and direct impacts data. Considerations for ensuring consistency... Chapter 4 is devoted to model execution and the interpretation of its output. As MRMI forecasts are based upon macroeconomic, national inter-industry

  20. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
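
    The averaging step at the heart of this calibration is easy to state concretely: given a coarse fraction per cell and end-member conductivities, the equivalent horizontal and vertical conductivities follow from arithmetic, geometric, or harmonic means. The end-member values below are illustrative, not the calibrated ones from the study.

    ```python
    import numpy as np

    # Sketch of equivalent hydraulic conductivity from texture, following the
    # averaging choices discussed above: arithmetic in the horizontal
    # direction, geometric or harmonic in the vertical. Values are invented.

    K_coarse, K_fine = 10.0, 0.01            # hypothetical end members
    frac_coarse = np.array([0.2, 0.5, 0.8])  # coarse fraction per cell

    K_arith = frac_coarse * K_coarse + (1 - frac_coarse) * K_fine
    K_geom  = K_coarse ** frac_coarse * K_fine ** (1 - frac_coarse)
    K_harm  = 1.0 / (frac_coarse / K_coarse + (1 - frac_coarse) / K_fine)

    for f, a, g, h in zip(frac_coarse, K_arith, K_geom, K_harm):
        print(f"coarse={f:.1f}: Kh(arith)={a:.3f}, "
              f"Kv(geom)={g:.4f}, Kv(harm)={h:.5f}")
    ```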

  1. GPS Satellite Orbit Prediction at User End for Real-Time PPP System.

    PubMed

    Yang, Hongzhou; Gao, Yang

    2017-08-30

    This paper proposes a high-precision satellite orbit prediction process at the user end for a real-time precise point positioning (PPP) system. Firstly, the structure of the new real-time PPP system is briefly introduced. Then, the generation of the satellite initial parameters (IP) at the server end is discussed, which includes the satellite position, velocity, and solar radiation pressure (SRP) parameters for each satellite. After that, the method for orbit prediction at the user end, with dynamic models including the Earth's gravitational force, lunar gravitational force, solar gravitational force, and the SRP, is presented. For numerical integration, both the single-step Runge-Kutta and multi-step Adams-Bashforth-Moulton integrator methods are implemented. Then, the comparison between the predicted orbit and the International GNSS Service (IGS) final products is carried out. The results show that the prediction accuracy can be maintained for several hours, and the average prediction errors of the 31 satellites are 0.031, 0.032, and 0.033 m for the radial, along-track, and cross-track directions over 12 h, respectively. Finally, PPP in both static and kinematic modes is carried out to verify the accuracy of the predicted satellite orbit. The average root mean square errors (RMSE) for the static PPP of the 32 globally distributed IGS stations are 0.012, 0.015, and 0.021 m for the north, east, and vertical directions, respectively, while the RMSE of the kinematic PPP with the predicted orbit are 0.031, 0.069, and 0.167 m in the north, east, and vertical directions, respectively.
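
    The multi-step integrator mentioned above can be sketched compactly: a fourth-order Adams-Bashforth predictor with an Adams-Moulton corrector, bootstrapped by single-step RK4. The force model here is reduced to the two-body term only, so this is a toy version of the paper's full dynamic model (no lunar/solar gravity or SRP).

    ```python
    import numpy as np

    # Adams-Bashforth-Moulton (ABM) sketch for orbit propagation, with RK4
    # start-up. Toy two-body acceleration only; parameters are illustrative.

    MU = 3.986004418e14                       # Earth's GM, m^3/s^2

    def f(y):                                 # y = [rx, ry, rz, vx, vy, vz]
        r = y[:3]
        a = -MU * r / np.linalg.norm(r) ** 3
        return np.concatenate([y[3:], a])

    def rk4(y, h):
        k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    def abm4(y0, h, n_steps):
        ys = [y0]
        for _ in range(3):                    # RK4 start-up values
            ys.append(rk4(ys[-1], h))
        fs = [f(y) for y in ys]
        for _ in range(n_steps - 3):
            yp = ys[-1] + h/24 * (55*fs[-1] - 59*fs[-2] + 37*fs[-3] - 9*fs[-4])
            yc = ys[-1] + h/24 * (9*f(yp) + 19*fs[-1] - 5*fs[-2] + fs[-3])
            ys.append(yc); fs.append(f(yc))
        return np.array(ys)

    # GPS-like circular orbit, propagated for one hour at 30 s steps.
    r0 = 26560e3
    v0 = np.sqrt(MU / r0)
    traj = abm4(np.array([r0, 0, 0, 0, v0, 0]), 30.0, 120)
    print("radius drift over 1 h: %.3e m" % (np.linalg.norm(traj[-1, :3]) - r0))
    ```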

  2. GPS Satellite Orbit Prediction at User End for Real-Time PPP System

    PubMed Central

    Yang, Hongzhou; Gao, Yang

    2017-01-01

    This paper proposes a high-precision satellite orbit prediction process at the user end for a real-time precise point positioning (PPP) system. Firstly, the structure of the new real-time PPP system is briefly introduced. Then, the generation of the satellite initial parameters (IP) at the server end is discussed, which includes the satellite position, velocity, and solar radiation pressure (SRP) parameters for each satellite. After that, the method for orbit prediction at the user end, with dynamic models including the Earth’s gravitational force, lunar gravitational force, solar gravitational force, and the SRP, is presented. For numerical integration, both the single-step Runge–Kutta and multi-step Adams–Bashforth–Moulton integrator methods are implemented. Then, the comparison between the predicted orbit and the International GNSS Service (IGS) final products is carried out. The results show that the prediction accuracy can be maintained for several hours, and the average prediction errors of the 31 satellites are 0.031, 0.032, and 0.033 m for the radial, along-track, and cross-track directions over 12 h, respectively. Finally, PPP in both static and kinematic modes is carried out to verify the accuracy of the predicted satellite orbit. The average root mean square errors (RMSE) for the static PPP of the 32 globally distributed IGS stations are 0.012, 0.015, and 0.021 m for the north, east, and vertical directions, respectively, while the RMSE of the kinematic PPP with the predicted orbit are 0.031, 0.069, and 0.167 m in the north, east, and vertical directions, respectively. PMID:28867771

  3. A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders

    NASA Astrophysics Data System (ADS)

    Stamatis, D.; Ermoline, A.; Dreizin, E. L.

    2012-12-01

    A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low temperature processes and an aluminium oxidation model including formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using respectively the heated filament (heating rates of 10^3-10^4 K s^-1) and laser ignition (heating rate ~10^6 K s^-1) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures, at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact of the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.
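
    A minimal multi-step kinetic model of this kind consists of sequential Arrhenius rate laws integrated along the imposed heating ramp; the total rate plays the role of the heat-flow signal used for tuning against calorimetry. All parameters below are illustrative placeholders, not the fitted Al-CuO values.

    ```python
    import numpy as np

    # Toy multi-step kinetics: two sequential Arrhenius reactions under a
    # constant heating rate, integrated with a coarse explicit scheme.

    R = 8.314
    A1, E1 = 1e8, 1.2e5        # step 1: pre-exponential (1/s), activation (J/mol)
    A2, E2 = 1e10, 1.8e5       # step 2
    beta = 10 / 60.0           # heating rate: 10 K/min in K/s

    T, dt = 300.0, 0.1
    alpha1, alpha2 = 0.0, 0.0  # extents of reaction for each step
    history = []
    while T < 1100.0:
        r1 = A1 * np.exp(-E1 / (R * T)) * (1 - alpha1)
        r2 = A2 * np.exp(-E2 / (R * T)) * (alpha1 - alpha2)  # consumes step 1 product
        alpha1 = min(1.0, alpha1 + r1 * dt)
        alpha2 = min(alpha1, alpha2 + r2 * dt)
        history.append((T, r1 + r2))          # total rate ~ heat-flow signal
        T += beta * dt

    print("peak reaction rate at T = %.0f K" % max(history, key=lambda p: p[1])[0])
    ```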

  4. A comparative theoretical study of the catalytic activities of Au2(-) and AuAg(-) dimers for CO oxidation.

    PubMed

    Liu, Peng; Song, Ke; Zhang, Dongju; Liu, Chengbu

    2012-05-01

    The detailed mechanisms of catalytic CO oxidation over Au(2)(-) and AuAg(-) dimers, which represent the simplest models for monometallic Au and bimetallic Au-Ag nanoparticles, have been studied by performing density functional theory calculations. It is found that both Au(2)(-) and AuAg(-) dimers catalyze the reaction according to the same mono-center Eley-Rideal mechanism. The catalytic reaction has a multi-channel, multi-step character: it can proceed along four possible pathways via two or three elementary steps. In AuAg(-), the Au site is more active than the Ag site, and the calculated energy barriers for the rate-determining step of the Au-site catalytic reaction are remarkably smaller than those for both the Ag-site catalytic reaction and the Au(2)(-) catalytic reaction. The better catalytic activity of the bimetallic AuAg(-) dimer is attributed to the synergistic effect between the Au and Ag atoms. The present results provide valuable information for understanding why Au-Ag nanoparticles and nanoalloys show higher catalytic activity for low-temperature CO oxidation than either pure metallic catalyst.

  5. A Microsoft Project-Based Planning, Tracking, and Management Tool for the National Transonic Facility's Model Changeover Process

    NASA Technical Reports Server (NTRS)

    Vairo, Daniel M.

    1998-01-01

    The removal and installation of sting-mounted wind tunnel models in the National Transonic Facility (NTF) is a multi-task process having a large impact on the annual throughput of the facility. Approximately ten model removal and installation cycles occur annually at the NTF, with each cycle requiring slightly over five days to complete. The various tasks of the model changeover process were modeled in Microsoft Project as a template to provide a planning, tracking, and management tool. The template can also be used as a tool to evaluate improvements to this process. This document describes the development of the template and provides step-by-step instructions on its use as a planning and tracking tool. A secondary role of this document is to provide an overview of the model changeover process and briefly describe the tasks associated with it.

  6. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, which improves solution efficiency. The set of nonlinear constraints (named complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by directly solving the complete model in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful for cases where the computation time is a critical factor in obtaining an optimized solution in due time.
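
    The idea translates directly into code: solve a relaxed problem without the complicating constraint, then warm-start the complete model from that solution. The objective and constraint below are illustrative stand-ins, not the conjunctive-use model of the paper.

    ```python
    from scipy.optimize import minimize

    # Two-step sketch: (1) solve the model with the complicating (nonlinear)
    # constraint dropped; (2) solve the full model starting from that point.

    obj = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2
    complicating = {"type": "ineq", "fun": lambda x: 4 - x[0] * x[1]}  # x0*x1 <= 4

    # Step 1: relaxed problem.
    x_relaxed = minimize(obj, x0=[0.0, 0.0]).x

    # Step 2: full problem, warm-started from the relaxed solution.
    full = minimize(obj, x0=x_relaxed, constraints=[complicating])
    print("relaxed:", x_relaxed, "-> full:", full.x)
    ```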

  7. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
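
    The binning (histogram) variant of an optimal estimator analysis is easy to reproduce in one dimension, where it still behaves well; the spurious contribution discussed above appears as the number of input parameters grows. A minimal one-parameter sketch with synthetic data:

    ```python
    import numpy as np

    # Optimal-estimator sketch: the irreducible error is the variance of the
    # target around the conditional mean E[q | input], here estimated by
    # binning a single input parameter. Data are synthetic.

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 100_000)                          # one input parameter
    q = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # target with noise

    bins = np.linspace(0, 1, 51)
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
    cond_mean = np.array([q[idx == i].mean() for i in range(len(bins) - 1)])

    irreducible = np.mean((q - cond_mean[idx]) ** 2)
    print("irreducible error ~ %.4f (true noise variance: 0.0100)" % irreducible)
    ```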

  8. Requirements for multi-level systems pharmacology models to reach end-usage: the case of type 2 diabetes.

    PubMed

    Nyman, Elin; Rozendaal, Yvonne J W; Helmlinger, Gabriel; Hamrén, Bengt; Kjellsson, Maria C; Strålfors, Peter; van Riel, Natal A W; Gennemark, Peter; Cedersund, Gunnar

    2016-04-06

    We are currently in the middle of a major shift in biomedical research: unprecedented and rapidly growing amounts of data may be obtained today, from in vitro, in vivo and clinical studies, at molecular, physiological and clinical levels. To make use of these large-scale, multi-level datasets, corresponding multi-level mathematical models are needed, i.e. models that simultaneously capture multiple layers of the biological, physiological and disease-level organization (also referred to as quantitative systems pharmacology-QSP-models). However, today's multi-level models are not yet embedded in end-usage applications, neither in drug research and development nor in the clinic. Given the expectations and claims made historically, this seemingly slow adoption may seem surprising. Therefore, we herein consider a specific example-type 2 diabetes-and critically review the current status and identify key remaining steps for these models to become mainstream in the future. This overview reveals how, today, we may use models to ask scientific questions concerning, e.g., the cellular origin of insulin resistance, and how this translates to the whole-body level and short-term meal responses. However, before these multi-level models can become truly useful, they need to be linked with the capabilities of other important existing models, in order to make them 'personalized' (e.g. specific to certain patient phenotypes) and capable of describing long-term disease progression. To be useful in drug development, it is also critical that the developed models and their underlying data and assumptions are easily accessible. For clinical end-usage, in addition, model links to decision-support systems combined with the engagement of other disciplines are needed to create user-friendly and cost-efficient software packages.

  9. A KLM-circuit model of a multi-layer transducer for acoustic bladder volume measurements.

    PubMed

    Merks, E J W; Borsboom, J M G; Bom, N; van der Steen, A F W; de Jong, N

    2006-12-22

    In a preceding study, a new technique to non-invasively measure the bladder volume on the basis of non-linear wave propagation was validated. It was shown that the harmonic level generated at the posterior bladder wall increases for larger bladder volumes. A dedicated transducer is needed to further verify and implement this approach. This transducer must be capable of both transmitting high-pressure waves at the fundamental frequency and receiving up to the third harmonic. For this purpose, a multi-layer transducer was constructed using a single-element PZT transducer for transmission and a PVDF top layer for reception. To determine the feasibility of the multi-layer concept for bladder volume measurements, and to ensure optimal performance, an equivalent mathematical model on the basis of KLM-circuit modeling was generated. This model was obtained in two subsequent steps. Firstly, the PZT transducer was modeled without the PVDF layer attached, by matching the model with the measured electrical input impedance; it was validated using pulse-echo measurements. Secondly, the model was extended with the PVDF layer. The total model was validated by considering the PVDF layer as a hydrophone on the PZT transducer surface and comparing the measured and simulated PVDF responses to a wave transmitted by the PZT transducer. The obtained results indicated that a valid model for the multi-layer transducer was constructed. The model showed the feasibility of the multi-layer concept for bladder volume measurements. It also allowed for further optimization with respect to electrical matching and the transmit waveform. Additionally, the model demonstrated the effect of mechanical loading of the PVDF layer on the PZT transducer.
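
    While the full KLM network is more involved, the basic cascading idea can be sketched with one-dimensional acoustic transmission-line (ABCD) matrices: each layer contributes a matrix, and the input impedance is read off through the cascade. Material values below are rough illustrative figures, not the modeled transducer's, and this is not a substitute for the KLM circuit itself.

    ```python
    import numpy as np

    # Simplified 1D acoustic-stack sketch (not the full KLM model): each
    # layer is a transmission line with an ABCD matrix; the input impedance
    # seen through the stack is cascaded from the load.

    def layer_abcd(Z, c, d, f):
        k = 2 * np.pi * f / c                    # wavenumber in the layer
        return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                         [1j * np.sin(k * d) / Z, np.cos(k * d)]])

    def input_impedance(layers, Z_load, f):
        M = np.eye(2, dtype=complex)
        for Z, c, d in layers:                   # from front face toward load
            M = M @ layer_abcd(Z, c, d, f)
        A, B, C, D = M.ravel()
        return (A * Z_load + B) / (C * Z_load + D)

    # Thin PVDF layer (~3.9 MRayl) on a PZT layer (~34 MRayl), into water.
    layers = [(3.9e6, 2200.0, 28e-6), (34e6, 4600.0, 0.5e-3)]
    for f in (1e6, 2e6, 3e6):
        print(f"{f/1e6:.0f} MHz: Zin = {input_impedance(layers, 1.5e6, f):.3e}")
    ```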

  10. CFD-ACE+: a CAD system for simulation and modeling of MEMS

    NASA Astrophysics Data System (ADS)

    Stout, Phillip J.; Yang, H. Q.; Dionne, Paul; Leonard, Andy; Tan, Zhiqiang; Przekwas, Andrzej J.; Krishnan, Anantha

    1999-03-01

    Computer aided design (CAD) systems are a key to designing and manufacturing MEMS with higher performance/reliability, reduced costs, shorter prototyping cycles, and improved time-to-market. One such system is CFD-ACE+MEMS, a modeling and simulation environment for MEMS which includes grid generation, data visualization, graphical problem setup, and coupled fluidic, thermal, mechanical, electrostatic, and magnetic physical models. The fluid model is a 3D multi-block, structured/unstructured/hybrid, pressure-based, implicit Navier-Stokes code with capabilities for multi-component diffusion, multi-species transport, multi-step gas phase chemical reactions, surface reactions, and multi-media conjugate heat transfer. The thermal model solves the total enthalpy form of the energy equation. The energy equation includes unsteady, convective, conductive, species energy, viscous dissipation, work, and radiation terms. The electrostatic model solves Poisson's equation. Both the finite volume method and the boundary element method (BEM) are available for solving Poisson's equation. The BEM method is useful for unbounded problems. The magnetic model solves for the vector magnetic potential from Maxwell's equations, including eddy currents but neglecting displacement currents. The mechanical model is a finite element stress/deformation solver which has been coupled to the flow, heat, electrostatic, and magnetic calculations to study flow-, thermally-, electrostatically-, and magnetically-induced deformations of structures. The mechanical or structural model can accommodate elastic and plastic materials, can handle large non-linear displacements, and can model isotropic and anisotropic materials. The thermal-mechanical coupling involves the solution of the steady-state Navier equation with thermoelastic deformation. The electrostatic-mechanical coupling is a calculation of the pressure force due to surface charge on the mechanical structure. Results of CFD-ACE+MEMS modeling of MEMS such as cantilever beams, accelerometers, and comb drives are discussed.

  11. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos ® CELLEX ® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using the multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX ® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX ® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
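
    At its simplest, time-driven activity-based costing prices each procedure as materials plus time multiplied by a personnel cost rate. In the sketch below, only the procedure times (270 versus 120 minutes) come from the report above; the material costs and the hourly rate are hypothetical round numbers.

    ```python
    # Time-driven activity-based costing sketch for the two ECP workflows.
    # Per-session cost = materials + (procedure time) x (personnel cost rate).
    # RATE_PER_MIN and the material costs are hypothetical.

    RATE_PER_MIN = 0.70          # assumed blended personnel cost, EUR/min

    def session_cost(materials_eur, minutes, rate=RATE_PER_MIN):
        return materials_eur + minutes * rate

    multi_step = session_cost(materials_eur=1240.0, minutes=270)
    integrated = session_cost(materials_eur=1180.0, minutes=120)
    print(f"multi-step: EUR {multi_step:.2f}, integrated: EUR {integrated:.2f}")
    ```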

  12. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    PubMed

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  13. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics

    PubMed Central

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-01-01

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876

  14. The design of a multi-harmonic step-tunable gyrotron

    NASA Astrophysics Data System (ADS)

    Qi, Xiang-Bo; Du, Chao-Hai; Zhu, Juan-Feng; Pan, Shi; Liu, Pu-Kun

    2017-03-01

    The theoretical study of a step-tunable gyrotron controlled by the successive excitation of multi-harmonic modes is presented in this paper. An axis-encircling electron beam is employed to eliminate harmonic mode competition. Physical pictures are presented to elaborate the multi-harmonic interaction mechanism and to determine the operating parameters at which arbitrary harmonic tuning can be realized by magnetic field sweeping, achieving controlled multi-band radiation. An important principle is revealed: a weak coupling coefficient at a high harmonic can be compensated by a high Q-factor. To some extent, this complementarity between a high Q-factor and a weak coupling coefficient gives high-harmonic modes the potential to achieve high efficiency. Based on a previously optimized magnetic cusp gun, the multi-harmonic step-tunable gyrotron is feasible using harmonic tuning of the first-to-fourth harmonic modes. Multimode simulation shows that the multi-harmonic gyrotron can operate on the 34 GHz first-harmonic TE11 mode, the 54 GHz second-harmonic TE21 mode, the 74 GHz third-harmonic TE31 mode, and the 94 GHz fourth-harmonic TE41 mode, corresponding to peak efficiencies of 28.6%, 35.7%, 17.1%, and 11.4%, respectively. The multi-harmonic step-tunable gyrotron provides new possibilities in millimeter-terahertz source development, especially for advanced terahertz applications.
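
    The harmonic-tuning principle rests on the gyrotron resonance condition f ≈ s·eB/(2πγm_e): each harmonic number s maps a target frequency to a required magnetic field. The sketch below applies this back-of-envelope relation to the four operating points above; the relativistic factor γ is an assumed illustrative value, and mode-dependent detuning is ignored.

    ```python
    import numpy as np

    # Resonance-condition sketch: required magnetic field for each
    # harmonic/frequency pair, with an assumed mildly relativistic gamma.

    e, m_e = 1.602e-19, 9.109e-31
    gamma = 1.06                          # illustrative assumption

    for s, f_ghz in [(1, 34), (2, 54), (3, 74), (4, 94)]:
        B = 2 * np.pi * gamma * m_e * f_ghz * 1e9 / (s * e)
        print(f"harmonic s={s}: f={f_ghz} GHz -> B ~ {B:.2f} T")
    ```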

  15. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    PubMed

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, the establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to taking full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing, and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for the generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
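
    Volume estimation by point counting follows the Cavalieri principle: the organ volume is approximately the section spacing times the area represented by each grid point times the total point count over systematic sections. A toy numerical example (spacing, grid area, and counts all hypothetical):

    ```python
    import numpy as np

    # Cavalieri point-counting sketch:
    # V ~ t * a_p * sum(points hitting the organ across systematic sections),
    # where t is the section spacing and a_p the area per grid point.

    t = 0.5                     # cm between sections (assumed)
    a_p = 0.25                  # cm^2 represented by each grid point (assumed)

    # Hypothetical point counts on 8 systematic-random sections.
    counts = np.array([12, 31, 47, 52, 49, 38, 20, 6])
    volume = t * a_p * counts.sum()
    print(f"estimated organ volume: {volume:.2f} cm^3")
    ```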

  16. Science to Manage a Very Rare Fish in a Very Large River - Pallid Sturgeon in the Missouri River, U.S.A.

    NASA Astrophysics Data System (ADS)

    Jacobson, R. B.; Colvin, M. E.; Marmorek, D.; Randall, M.

    2017-12-01

    The Missouri River Recovery Program (MRRP) seeks to revise river-management strategies to avoid jeopardizing the existence of three species: pallid sturgeon (Scaphirhynchus albus), interior least tern (Sterna antillarum), and piping plover (Charadrius melodus). Managing the river to maintain populations of the two birds (terns and plovers) is relatively straightforward: reproductive success can be modeled with some certainty as a direct, increasing function of exposed sandbar area. In contrast, the pallid sturgeon inhabits the benthic zone of a deep, turbid river, and many parts of its complex life history are not directly observable. Hence, pervasive uncertainties exist about what factors are limiting population growth and what management actions may reverse population declines. These uncertainties are being addressed by the MRRP through a multi-step process. The first step was an Effects Analysis (EA), which: documented what is known and unknown about the river and the species; documented the quality and quantity of existing information; used an expert-driven process to develop conceptual ecological models and to prioritize management hypotheses; and developed quantitative models linking management actions (flows, channel reconfigurations, and stocking) to population responses. The EA led to development of a science and adaptive-management plan with prioritized allocation of investment among 4 levels of effort ranging from fundamental research to full implementation. The plan includes learning from robust, hypothesis-driven effectiveness monitoring for all actions, with statistically sound experimental designs, multiple metrics, and explicit decision criteria to guide management. Finally, the science plan has been fully integrated with a new adaptive-management structure that links science to decision makers. The reinvigorated investment in science stems from the understanding that costly river-management decisions are not socially or politically supportable without better understanding of how this endangered fish will respond. While some hypotheses can be evaluated without actually implementing management actions in the river, assessing the effectiveness of other forms of habitat restoration requires in-river implementation within a rigorous experimental design.

  17. Modeling and Grid Generation of Iced Airfoils

    NASA Technical Reports Server (NTRS)

    Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Hackenberg, Anthony W.; Pennline, James A.; Schilling, Herbert W.

    2007-01-01

    SmaggIce Version 2.0 is a software toolkit for geometric modeling and grid generation for two-dimensional, single- and multi-element, clean and iced airfoils. A previous version of SmaggIce was described in Preparing and Analyzing Iced Airfoils, NASA Tech Briefs, Vol. 28, No. 8 (August 2004), page 32. To recapitulate: Ice shapes make it difficult to generate quality grids around airfoils, yet these grids are essential for predicting ice-induced complex flow. This software efficiently creates high-quality structured grids with tools that are uniquely tailored for various ice shapes. SmaggIce Version 2.0 significantly enhances the previous version primarily by adding the capability to generate grids for multi-element airfoils. This version of the software is an important step in streamlining the aeronautical analysis of iced airfoils using computational fluid dynamics (CFD) tools. The user may prepare the ice shape, define the flow domain, decompose it into blocks, generate grids, modify/divide/merge blocks, and control grid density and smoothness. All these steps may be performed efficiently even for the difficult glaze and rime ice shapes. Providing the means to generate highly controlled grids near rough ice, the software includes the creation of a wrap-around block (called the "viscous sublayer block"), which is a thin, C-type block around the wake line and iced airfoil. For multi-element airfoils, the software makes use of grids that wrap around and fill in the areas between the viscous sub-layer blocks for all elements that make up the airfoil. A scripting feature records the history of interactive steps, which can be edited and replayed later to produce other grids. Using this version of SmaggIce, ice shape handling and grid generation can become a practical engineering process, rather than a laborious research effort.

  18. Bypass Diode Temperature Tests of a Solar Array Coupon Under Space Thermal Environment Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H., Jr.; Schneider, Todd A.; Vaughn, Jason A.; Hoang, Bao; Wong, Frankie; Wu, Gordon

    2016-01-01

    Tests were performed on a 56-cell Advanced Triple Junction solar array coupon to determine the margin available for bypass diodes integrated with new, large multi-junction solar cells manufactured from a 4-inch wafer. The tests were performed under high vacuum with the coupon backside held at both cold and ambient thermal conditions. The bypass diodes were subjected to a sequence of increasing discrete current steps from 0 A to 2.0 A in steps of 0.25 A. At each current step, a temperature measurement was obtained via remote viewing by an infrared camera. This paper discusses the experimental methodology, the experimental results, and the thermal model.

  19. Distributed micro-radar system for detection and tracking of low-profile, low-altitude targets

    NASA Astrophysics Data System (ADS)

    Gorwara, Ashok; Molchanov, Pavlo

    2016-05-01

    The proposed airborne surveillance radar system can detect, locate, track, and classify low-profile, low-altitude targets: from traditional fixed- and rotary-wing aircraft to non-traditional targets like unmanned aircraft systems (drones) and even small projectiles. The distributed micro-radar system is the next step in the development of the passive monopulse direction finder proposed by Stephen E. Lipsky in the 1980s. To extend the high-frequency limit and provide high sensitivity over a broad band of frequencies, multiple angularly spaced directional antennas are coupled with front-end circuits and separately connected to a direction-finder processor by a digital interface. Integration of the antennas with front-end circuits makes it possible to eliminate waveguide lines, which limit system bandwidth and create frequency-dependent phase errors. Digitizing the received signals close to the antennas allows loose distribution of the antennas and dramatically decreases waveguide-related phase errors. The accuracy of direction finding in the proposed micro-radar is then determined by the timing accuracy of the digital processor and the sampling frequency. Multi-band, multi-functional antennas can be distributed around the perimeter of an Unmanned Aircraft System (UAS) and connected to the processor by a digital interface, or can be distributed among a swarm/formation of mini/micro UAS and connected wirelessly. Expendable micro-radars can be distributed along the perimeter of a defended object to create a multi-static radar network. Low-profile, low-altitude, high-speed targets, like small projectiles, create a Doppler shift in a narrow frequency band; this signal can be effectively filtered and detected with high probability. The proposed micro-radar can work in passive, monostatic, or bistatic regimes.
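    As background on the monopulse principle underlying the proposed direction finder, here is a minimal sketch of amplitude-comparison direction finding with two squinted antennas; the Gaussian beam model, beamwidth, and squint angle are invented placeholders, not parameters of the proposed system.

```python
import numpy as np

def gaussian_pattern(theta_deg, boresight_deg, beamwidth_deg=40.0):
    """Normalized one-way voltage gain of a directional antenna (Gaussian model)."""
    return np.exp(-2.0 * np.log(2) * ((theta_deg - boresight_deg) / beamwidth_deg) ** 2)

def estimate_bearing(v1, v2, squint_deg=20.0, beamwidth_deg=40.0):
    """Amplitude-comparison monopulse: bearing from the log-ratio of two squinted
    antenna voltages. For Gaussian patterns the log-ratio is linear in angle."""
    k = 8.0 * np.log(2) * squint_deg / beamwidth_deg**2  # slope of the log-ratio
    return np.log(v1 / v2) / k

# A target at +5 degrees, antennas squinted to +/-20 degrees:
truth = 5.0
v1 = gaussian_pattern(truth, +20.0)
v2 = gaussian_pattern(truth, -20.0)
print(estimate_bearing(v1, v2))  # ~5.0
```

    In a digital implementation of the kind described above, the voltages v1 and v2 would come from samples digitized at each antenna, which is why timing accuracy and sampling frequency set the direction-finding accuracy.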

  20. Full multi grid method for electric field computation in point-to-plane streamer discharge in air at atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.

    2012-01-01

    Streamer dynamics are characterized by the fast propagation of ionized shock waves at the nanosecond scale under very sharp space-charge variations. Streamer modelling requires the solution of charged-particle transport equations coupled to the elliptic Poisson equation, which has to be solved at each time step of the streamer's evolution in order to follow the propagation of the resulting space-charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by direct comparison with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal Massively Parallel Solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated by comparison with the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained between the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. In the case of the MG method, however, solving Poisson's equation is at least two times faster under our simulation conditions.
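    To make the multigrid idea concrete, the following is a minimal sketch of a one-dimensional geometric multigrid V-cycle for -u'' = f with homogeneous Dirichlet boundaries. The paper's solver is multi-dimensional and wrapped in FMG; the Gauss-Seidel smoother, injection/linear transfer operators, and sweep counts below are common textbook defaults rather than the authors' choices.

```python
import numpy as np

def smooth(u, f, h, sweeps):
    """Gauss-Seidel relaxation for -u'' = f on a uniform grid."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def v_cycle(u, f, h, nu1=2, nu2=2):
    n = len(u) - 1
    u = smooth(u, f, h, nu1)                      # pre-smoothing
    if n > 2:
        r = residual(u, f, h)
        rc = r[::2].copy()                        # restrict residual (injection)
        ec = v_cycle(np.zeros(n // 2 + 1), rc, 2 * h)
        u += np.interp(np.linspace(0, 1, n + 1),  # prolong coarse correction
                       np.linspace(0, 1, n // 2 + 1), ec)
    return smooth(u, f, h, nu2)                   # post-smoothing

n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))      # down to discretization error
```

    Relaxation damps oscillatory error components while the coarse grids remove smooth ones, which is why a few V-cycles per time step can follow the fast-moving space charge far more cheaply than a direct or SOR solve.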

  1. Synthesis and optical properties of core-multi-shell CdSe/CdS/ZnS quantum dots: Surface modifications

    NASA Astrophysics Data System (ADS)

    Ratnesh, R. K.; Mehata, Mohan Singh

    2017-02-01

    We report a two-port synthesis of CdSe/CdS/ZnS core-multi-shell quantum dots (Q-dots) and their structural properties. The multi-shell structures of the Q-dots were developed by using the successive ionic layer adsorption and reaction (SILAR) technique. The obtained Q-dots show high crystallinity with a step-wise adjustment of lattice parameters in the radial direction. The sizes of the core and core-shell Q-dots estimated from transmission electron microscopy images and absorption spectra are about 3.4 and 5.3 nm, respectively. The water-soluble Q-dots (scheme-1) were prepared by using a ligand exchange method, and the effect of pH is discussed in terms of the variation of quantum yield (QY). The decrease in lifetime of the core-multi-shell Q-dots with respect to the core CdSe indicates that the shell growth may be monitored through the lifetimes. Thus, the study clearly demonstrates that the core-shell approach can be used to substantially improve the optical properties of Q-dots desired for various applications.

  2. Estimation in a semi-Markov transformation model

    PubMed Central

    Dabrowska, Dorota M.

    2012-01-01

    Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe the evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors have developed extensions of the proportional hazard model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583
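    For readers unfamiliar with the model class invoked here, a standard linear transformation model for a waiting time T given covariates Z can be written as below; this is textbook background, a hedged sketch rather than the paper's full semi-Markov formulation.

```latex
% Linear transformation model: an unspecified increasing transform H of the
% waiting time T is linear in the covariates Z, with error distribution F_epsilon.
H(T) = -\beta^{\top} Z + \epsilon, \qquad \epsilon \sim F_{\epsilon}.
% Choosing F_epsilon as the extreme-value distribution recovers the
% proportional hazards model; the logistic choice gives proportional odds.
```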

  3. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to systems as large as allowed by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to reduce the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.

  4. Examination of Wildland Fire Spread at Small Scales Using Direct Numerical Simulations and High-Speed Laser Diagnostics

    NASA Astrophysics Data System (ADS)

    Wimer, N. T.; Mackoweicki, A. S.; Poludnenko, A. Y.; Hoffman, C.; Daily, J. W.; Rieker, G. B.; Hamlington, P.

    2017-12-01

    Results are presented from a joint computational and experimental research effort focused on understanding and characterizing wildland fire spread at small scales (roughly 1 m to 1 mm) using direct numerical simulations (DNS) with chemical kinetics mechanisms that have been calibrated using data from high-speed laser diagnostics. The simulations are intended to directly resolve, with high physical accuracy, all small-scale fluid dynamic and chemical processes relevant to wildland fire spread. The high fidelity of the simulations is enabled by the calibration and validation of DNS sub-models using data from high-speed laser diagnostics. These diagnostics have the capability to measure temperature and chemical species concentrations, and are used here to characterize evaporation and pyrolysis processes in wildland fuels subjected to an external radiation source. The chemical kinetics code CHEMKIN-PRO is used to study and reduce complex reaction mechanisms for water removal, pyrolysis, and gas phase combustion during solid biomass burning. Simulations are then presented for a gaseous pool fire coupled with the resulting multi-step chemical reaction mechanisms, and the results are connected to the fundamental structure and spread of wildland fires. It is anticipated that the combined computational and experimental approach of this research effort will provide unprecedented access to information about chemical species, temperature, and turbulence during the entire pyrolysis, evaporation, ignition, and combustion process, thereby permitting more complete understanding of the physics that must be represented by coarse-scale numerical models of wildland fire spread.
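    As a toy illustration of the reduced, multi-step mechanisms mentioned above, the sketch below integrates a two-step drying-then-pyrolysis chain with first-order Arrhenius rates; the pre-exponential factors and activation energies are placeholders, not values from CHEMKIN-PRO or from this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sequential mechanism: wet fuel -> dry fuel -> pyrolysis gas, each step
# first-order with an Arrhenius rate k(T) = A * exp(-Ea / (R*T)).
R = 8.314  # J/(mol K)

def arrhenius(A, Ea, T):
    return A * np.exp(-Ea / (R * T))

def rhs(t, y, T):
    wet, dry, gas = y
    k_dry = arrhenius(1.0e6, 8.0e4, T)   # drying/evaporation step (placeholder)
    k_pyr = arrhenius(1.0e8, 1.2e5, T)   # pyrolysis step (placeholder)
    return [-k_dry * wet,
            k_dry * wet - k_pyr * dry,
            k_pyr * dry]

# Mass fractions after 10 s at a fixed 800 K:
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], args=(800.0,), max_step=0.01)
print(sol.y[:, -1])
```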

  5. [Theoretical modeling and experimental research on direct compaction characteristics of multi-component pharmaceutical powders based on the Kawakita equation].

    PubMed

    Si, Guo-Ning; Chen, Lan; Li, Bao-Guo

    2014-04-01

    Based on the Kawakita powder compression equation, a general theoretical model for predicting the compression characteristics of multi-component pharmaceutical powders with different mass ratios was developed. Uniaxial flat-face compression tests of powdered lactose, starch and microcrystalline cellulose were carried out separately, from which the Kawakita equation parameters of the powder materials were obtained. Uniaxial flat-face compression tests of powder mixtures of lactose, starch, microcrystalline cellulose and sodium stearyl fumarate with five mass ratios were then conducted, yielding the correlation between mixture density and loading pressure and the corresponding Kawakita equation curves. Finally, the theoretical predictions were compared with the experimental results. The analysis showed that the errors in predicting mixture densities were less than 5.0% and the errors in the Kawakita vertical coordinate were within 4.6%, which indicates that the theoretical model can be used to predict the direct compaction characteristics of multi-component pharmaceutical powders.
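    For concreteness, the Kawakita equation expresses the degree of volume reduction C = (V0 - V)/V0 as C = abP/(1 + bP), where a is the limiting compression and b is related to the yield pressure. The sketch below fits the two parameters to placeholder pressure-compression data (not the study's measurements) and notes one simple mass-weighted mixture assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

# Kawakita equation: C = (V0 - V)/V0 = a*b*P / (1 + b*P).
def kawakita(P, a, b):
    return a * b * P / (1.0 + b * P)

# Placeholder pressure (MPa) / compression (-) data, invented for illustration:
P = np.array([10, 25, 50, 100, 150, 200], dtype=float)
C = np.array([0.28, 0.42, 0.51, 0.58, 0.60, 0.62])

(a, b), _ = curve_fit(kawakita, P, C, p0=(0.7, 0.05))
print(f"a = {a:.3f}, b = {b:.4f} 1/MPa")

# One simple assumption for an N-component mixture is a mass-weighted
# combination of single-component curves:
#   C_mix(P) ~ sum_i w_i * kawakita(P, a_i, b_i)
```

    In practice the linearized form P/C = 1/(ab) + P/a is also commonly used to read a and b off a straight-line fit.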

  6. Modelling multi-species interactions in the Barents Sea ecosystem with special emphasis on minke whales and their interactions with cod, herring and capelin

    NASA Astrophysics Data System (ADS)

    Lindstrøm, Ulf; Smout, Sophie; Howell, Daniel; Bogstad, Bjarte

    2009-10-01

    The Barents Sea ecosystem, one of the most productive and commercially important ecosystems in the world, has experienced major fluctuations in species abundance over the past five decades. Likely causes are natural variability, climate change, overfishing and predator-prey interactions. In this study, we use an age-length structured multi-species model (Gadget, Globally applicable Area-Disaggregated General Ecosystem Toolbox) to analyse the historic population dynamics of major fish and marine mammal species in the Barents Sea. The model was used to examine the possible effects of a number of plausible biological and fisheries scenarios. The results suggest that changes in cod mortality from fishing or cod cannibalism have the largest effect on the ecosystem, while changes to the capelin fishery have had only minor effects. Alternate whale migration scenarios had only a moderate impact on the modelled ecosystem. Indirect effects are seen to be important, with cod fishing pressure, cod cannibalism and whale predation on cod having an indirect impact on capelin, emphasising the importance of multi-species modelling in understanding and managing ecosystems. Models such as the one presented here provide one step towards an ecosystem-based approach to fisheries management.

  7. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common-Reflection-Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors have used one-step data-driven strategies based on a global optimization for estimating the five parameters simultaneously from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common-Diffraction-Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits while preserving high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations to simulate common-offset sections on noisy pre-stack data.

  8. QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Henderson, D. K.

    1996-01-01

    A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.

  9. Generalized Fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains

    NASA Astrophysics Data System (ADS)

    Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.

    2004-07-01

    Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and on the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent-mass Galerkin and consistent-mass control-volume finite element methods, and their streamline-upwind derivatives, exhibit comparable accuracy and generally outperform their lumped-mass counterparts and finite-difference-based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
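    As a one-dimensional reminder of the kind of Fourier (von Neumann) analysis being extended here, the sketch below computes the semi-discrete phase speed of second-order centered differences for pure advection; the grid spacing and wave speed are arbitrary placeholders.

```python
import numpy as np

# Von Neumann analysis of semi-discrete advection u_t + c u_x = 0.
# Second-order centered differences applied to exp(i k x) give the modified
# wavenumber k* = sin(k h)/h, so the numerical phase speed is c*sin(k h)/(k h).
c, h = 1.0, 0.1
k = np.linspace(1e-6, np.pi / h, 200)       # resolvable wavenumbers up to k*h = pi
phase_speed = c * np.sin(k * h) / (k * h)   # tends to c as k*h -> 0
print(phase_speed[0], phase_speed[-1])      # ~1.0 at long waves, ~0 at the grid limit
```

    The multi-dimensional analysis in the paper generalizes exactly this quantity, with the phase speed additionally depending on propagation direction and grid aspect ratio.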

  10. Biomimetic surface structuring using cylindrical vector femtosecond laser beams

    NASA Astrophysics Data System (ADS)

    Skoulas, Evangelos; Manousaki, Alexandra; Fotakis, Costas; Stratakis, Emmanuel

    2017-03-01

    We report on a new, single-step and scalable method to fabricate highly ordered, multi-directional and complex surface structures that mimic the unique morphological features of certain species found in nature. Biomimetic surface structuring was realized by exploiting the unique and versatile angular profile and the electric-field symmetry of cylindrical vector (CV) femtosecond (fs) laser beams. It is shown that highly controllable, periodic structures exhibiting sizes at nano-, micro- and dual micro/nano scales can be directly written on Ni upon line and large-area scanning with radially and azimuthally polarized beams. Depending on the irradiation conditions, new complex multi-directional nanostructures, inspired by the shark's skin morphology, as well as superhydrophobic dual-scale structures mimicking the lotus leaf's water-repellent properties, can be attained. It is concluded that the versatility and variety of the structures formed are by far superior to those obtained via laser processing with linearly polarized beams. More importantly, by exploiting the capabilities offered by fs CV fields, the present technique can be extended to fabricate even more complex and unconventional structures. We believe that our approach provides a new concept in laser materials processing, which can be further exploited for expanding the breadth and novelty of applications.

  11. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population level. We define "available" habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining "available" habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  12. Multi-topic assignment for exploratory navigation of consumer health information in NetWellness using formal concept analysis.

    PubMed

    Cui, Licong; Xu, Rong; Luo, Zhihui; Wentz, Susan; Scarberry, Kyle; Zhang, Guo-Qiang

    2014-08-03

    Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F₁ measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness' original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics.
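    The example-based metrics reported here average per-question precision and recall over all questions, then combine them per question into F1. A minimal sketch follows; the toy topics and labels are invented for illustration, not taken from NetWellness.

```python
# Example-based precision/recall/F1 for multi-label (multi-topic) assignment:
# scores are computed per example and then averaged over examples.
def example_based_scores(y_true, y_pred):
    p = r = f = 0.0
    for true, pred in zip(y_true, y_pred):
        inter = len(true & pred)
        p_i = inter / len(pred) if pred else 1.0
        r_i = inter / len(true) if true else 1.0
        f_i = 2 * p_i * r_i / (p_i + r_i) if (p_i + r_i) else 0.0
        p += p_i; r += r_i; f += f_i
    n = len(y_true)
    return p / n, r / n, f / n

truth = [{"diabetes", "diet"}, {"sleep"}, {"heart", "exercise"}]
pred = [{"diabetes"}, {"sleep", "stress"}, {"heart", "exercise"}]
print(example_based_scores(truth, pred))  # (0.833, 0.833, 0.778)
```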

  13. Multi-topic assignment for exploratory navigation of consumer health information in NetWellness using formal concept analysis

    PubMed Central

    2014-01-01

    Background Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. Methods We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Results Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F1 measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness’ original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Conclusion Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics. PMID:25086916

  14. Multislice spiral CT simulator for dynamic cardiopulmonary studies

    NASA Astrophysics Data System (ADS)

    De Francesco, Silvia; Ferreira da Silva, Augusto M.

    2002-04-01

    We have developed a Multi-slice Spiral CT Simulator modeling the acquisition process of a real tomograph over a 4-dimensional phantom (4D MCAT) of the human thorax. The simulator allows us to visually characterize artifacts due to insufficient temporal sampling and to evaluate a priori the quality of the images obtained in cardio-pulmonary studies (with single-/multi-slice and ECG-gated acquisition processes). The simulating environment allows for both conventional and spiral scanning modes and includes a model of noise in the acquisition process. In the case of spiral scanning, reconstruction facilities include longitudinal interpolation methods (360LI and 180LI, for both single- and multi-slice); the section is then reconstructed through filtered back-projection (FBP). The reconstructed images/volumes are affected by distortion due to insufficient temporal sampling of the moving object. The developed simulating environment allows us to investigate the nature of this distortion, characterizing it qualitatively and quantitatively (using, for example, Herman's measures). Much of our work is focused on the determination of adequate temporal sampling and sinogram regularization techniques. At the moment, the simulator is limited to multi-slice tomographs; extension to cone-beam or area detectors is planned as the next development step.

  15. Multi-Skyrmions on AdS2 × S2, rational maps and popcorn transitions

    NASA Astrophysics Data System (ADS)

    Canfora, Fabrizio; Tallarita, Gianni

    2017-08-01

    By combining two different techniques to construct multi-soliton solutions of the (3 + 1)-dimensional Skyrme model, the generalized hedgehog and the rational map ansatz, we find multi-Skyrmion configurations in AdS2 × S2. We construct Skyrmionic multi-layered configurations such that the total baryon charge is the product of the number of kinks along the radial AdS2 direction and the degree of the rational map. We show that, for fixed total baryon charge, as one increases the charge density on ∂(AdS2 × S2), it becomes increasingly convenient energetically to have configurations with more peaks in the radial AdS2 direction but a lower degree of the rational map. This has a direct relation with the so-called holographic popcorn transitions in which, when the charge density is high, multi-layered configurations with low charge on each layer are favored over configurations with few layers but higher charge on each layer. The case in which the geometry is M2 × S2 can also be analyzed.

  16. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
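    To illustrate the predict/model/update idea in the simplest possible setting, the sketch below runs a scalar Kalman filter in which a constant bias in an IMU-style velocity input is learned from the filter innovations. The running stochastic-gradient bias estimator stands in for the paper's learned error model, and all noise values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
q, r = 1e-3, 0.25            # process and measurement noise variances (assumed)
x, P = 0.0, 1.0              # position estimate and its variance
bias_hat = 0.0               # learned IMU velocity bias

true_pos, true_vel, true_bias = 0.0, 1.0, 0.2
for t in range(200):
    true_pos += true_vel
    v_imu = true_vel + true_bias                       # biased IMU velocity
    z = true_pos + np.sqrt(r) * rng.standard_normal()  # GNSS-like position fix

    # modelling step: remove the learned IMU bias before predicting
    x_pred = x + (v_imu - bias_hat)
    P_pred = P + q

    # update step: standard scalar Kalman equations
    innov = z - x_pred
    K = P_pred / (P_pred + r)
    x = x_pred + K * innov
    P = (1.0 - K) * P_pred

    # learning step: drive the mean innovation to zero
    bias_hat -= 0.5 * K * innov

print(f"learned bias ~ {bias_hat:.2f} (true {true_bias})")
```

    The paper replaces the simple bias estimator here with a learned IMU error model inside the filter loop; the structure of the three steps is the point of the sketch.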

  17. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    PubMed

    Hosseinyalamdary, Siavash

    2018-04-24

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.

  18. Highly durable direct hydrazine hydrate anion exchange membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Sakamoto, Tomokazu; Serov, Alexey; Masuda, Teruyuki; Kamakura, Masaki; Yoshimoto, Koji; Omata, Takuya; Kishi, Hirofumi; Yamaguchi, Susumu; Hori, Akihiro; Horiuchi, Yousuke; Terada, Tomoaki; Artyushkova, Kateryna; Atanassov, Plamen; Tanaka, Hirohisa

    2018-01-01

    The factors influencing the degradation of direct hydrazine hydrate fuel cells (DHFCs) under operating conditions are analyzed by in situ soft X-ray radiography. The durability of DHFCs is significantly improved by a multi-step reaction DHFC (MSR-DHFC) approach designed to decrease the crossover of liquid fuel. The open circuit voltage (OCV), as well as the cell voltage at 5 mA cm-2, of an MSR-DHFC constructed with a commercial anion exchange membrane (AEM) is maintained for over 3500 h at 60 °C. Furthermore, a commercial proton exchange membrane (PEM) is integrated into the AEM of the MSR-DHFC, resulting in stable power output for more than 2800 h at 80 °C.

  19. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  20. Applying the Brakes to Multi-Site SR Protein Phosphorylation: Substrate-Induced Effects on the Splicing Kinase SRPK1†

    PubMed Central

    Aubol, Brandon E.; Adams, Joseph A.

    2011-01-01

    To investigate how a protein kinase interacts with its protein substrate during extended, multi-site phosphorylation, the kinetic mechanism of a protein kinase involved in mRNA splicing control was characterized using rapid quench-flow techniques. The protein kinase SRPK1 phosphorylates approximately 10 serines in the arginine-serine-rich domain (RS domain) of the SR protein SRSF1 in a C-to-N-terminal direction, a modification that directs this essential splicing factor from the cytoplasm to the nucleus. Transient-state kinetic experiments illustrate that the first phosphate is added rapidly onto the RS domain of SRSF1 (t1/2 = 0.1 sec), followed by slower, multi-site phosphorylation at the remaining serines (t1/2 = 15 sec). Mutagenesis experiments suggest that efficient phosphorylation rates are maintained by an extensive hydrogen bonding and electrostatic network between the RS domain of the SR protein and the active site and docking groove of the kinase. Catalytic trapping and viscosometric experiments demonstrate that, while the phosphoryl transfer step is fast, ADP release limits multi-site phosphorylation. By studying phosphate incorporation into selectively pre-phosphorylated forms of the enzyme-substrate complex, the kinetic mechanism for site-specific phosphorylation along the reaction coordinate was assessed. The binding affinity of the SR protein, the phosphoryl transfer rate, and the ADP exchange rate were found to decline significantly as a function of progressive phosphorylation of the RS domain. These findings indicate that the protein substrate actively modulates initiation, extension and termination events associated with prolonged, multi-site phosphorylation. PMID:21728354
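    To illustrate the kind of sequential, progressively slowing multi-site kinetics described here, the following sketch integrates a 10-site phosphorylation chain whose per-site rate constants decay geometrically; the rate law and all constants are illustrative assumptions, not fitted SRPK1/SRSF1 parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sequential multi-site phosphorylation, i -> i+1, with rates that decline as
# sites fill (mimicking the observed slow-down with progressive phosphorylation).
n_sites = 10
k0, decay = 4.0, 0.3                       # first-site rate (1/s), decline factor
k = k0 * np.exp(-decay * np.arange(n_sites))

def rhs(t, y):
    """y[i] = fraction of substrate carrying exactly i phosphates."""
    dy = np.zeros(n_sites + 1)
    flux = k * y[:-1]                      # i -> i+1 conversion fluxes
    dy[:-1] -= flux
    dy[1:] += flux
    return dy

y0 = np.zeros(n_sites + 1); y0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 60.0), y0, max_step=0.1)
phospho = np.arange(n_sites + 1) @ sol.y   # mean phosphates vs time
print(phospho[-1])                          # slowly approaches the full 10 sites
```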

  1. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed for the MASs under both disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.

  2. No control genes required: Bayesian analysis of qRT-PCR data.

    PubMed

    Matz, Mikhail V; Wright, Rachel M; Scott, James G

    2013-01-01

    Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.
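    To make the count-based view concrete, the sketch below converts a Ct value to an approximate starting-molecule count and simulates the Poisson-lognormal noise structure the abstract describes; the conversion rule and constants are illustrative assumptions, not the exact defaults of the MCMC.qpcr package.

```python
import numpy as np

# A Ct value reflects exponential amplification: if a single starting molecule
# crosses threshold at ct_of_one_molecule, each extra factor of `efficiency`
# in starting material shifts Ct down by one cycle.
def ct_to_count(ct, efficiency=2.0, ct_of_one_molecule=37.0):
    return efficiency ** (ct_of_one_molecule - ct)

# Poisson-lognormal generative model for one gene/condition: biological and
# technical variation act on log-abundance, sampling noise is Poisson.
rng = np.random.default_rng(1)
log_mean, sigma = np.log(50.0), 0.3
lam = np.exp(log_mean + sigma * rng.standard_normal(1000))
counts = rng.poisson(lam)
print(ct_to_count(33.0), counts.mean(), counts.var())
```

    Because the Poisson term dominates at small counts, this representation naturally handles low-abundance targets and zero counts, which is the property the abstract emphasizes.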

  3. Probabilistic inversion of AVO seismic data for reservoir properties and related uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zunino, Andrea; Mosegaard, Klaus

    2017-04-01

    Sought-after reservoir properties are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools for studying the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. As such, inference of those properties implies also taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method built on a multi-step, nonlinear forward modeling that combines: 1) a rock physics model with 2) the solution of the full Zoeppritz equations and 3) a convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy is particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and the full Zoeppritz equations, and it allows estimation of the related uncertainty. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This gives us a certain flexibility in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
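    A generic Metropolis sampler of the kind used to draw models from the posterior is sketched below. The forward model g(), the data, and the noise level are placeholders; in the paper the forward model chains rock physics, the full Zoeppritz equations, and convolution, and the parameters are spline coefficients for porosity.

```python
import numpy as np

rng = np.random.default_rng(2)

def g(m):
    """Placeholder forward model mapping model parameters to synthetic data."""
    return np.array([m[0] + m[1], m[0] - m[1]])

d_obs = np.array([1.0, 0.2])
sigma = 0.1                                   # assumed data noise level

def log_post(m):
    misfit = g(m) - d_obs
    return -0.5 * np.sum((misfit / sigma) ** 2)   # flat prior for simplicity

m = np.zeros(2)
samples = []
for _ in range(20000):
    m_prop = m + 0.05 * rng.standard_normal(2)    # random-walk proposal
    if np.log(rng.random()) < log_post(m_prop) - log_post(m):
        m = m_prop                                # accept
    samples.append(m.copy())

print(np.mean(samples[5000:], axis=0))            # posterior mean ~ [0.6, 0.4]
```

    The ensemble of accepted models, not a single best fit, is what provides the uncertainty estimate on porosity.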

  4. Scaling dimensions in spectroscopy of soil and vegetation

    NASA Astrophysics Data System (ADS)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce (Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.

  5. Measurement needs guided by synthetic radar scans in high-resolution model output

    NASA Astrophysics Data System (ADS)

    Varble, A.; Nesbitt, S. W.; Borque, P.

    2017-12-01

    Microphysical and dynamical process interactions within deep convective clouds are not well understood, partly because measurement strategies often focus on statistics of cloud state rather than cloud processes. While processes cannot be directly measured, they can be inferred with sufficiently frequent and detailed scanning radar measurements focused on the life cycle of individual cloud regions. This is a primary goal of the 2018-19 DOE ARM Cloud, Aerosol, and Complex Terrain Interactions (CACTI) and NSF Remote sensing of Electrification, Lightning, And Mesoscale/microscale Processes with Adaptive Ground Observations (RELAMPAGO) field campaigns in central Argentina, where orographic deep convective initiation is frequent, with some high-impact systems growing into the tallest and largest in the world. An array of fixed and mobile scanning multi-wavelength dual-polarization radars will be coupled with surface observations, sounding systems, multi-wavelength vertical profilers, and aircraft in situ measurements to characterize convective cloud life cycles and their relationship with environmental conditions. While detailed cloud processes are an observational target, the radar scan patterns that are most ideal for observing them are unclear. They depend on the locations and scales of key microphysical and dynamical processes operating within the cloud. High-resolution simulations of clouds, while imperfect, can provide information on these locations and scales that guides radar measurement needs. Radar locations are set in the model domain based on planned experiment locations, and simulated orographic deep convective initiation and upscale growth are sampled using a number of different scans involving RHIs or PPIs with predefined elevation and azimuthal angles that approximately conform with radar range and beam-width specifications. Each full scan pattern is applied to output at single model time steps, with time-step intervals that depend on the length of time required to complete each scan in the real world. The ability of different scans to detect key processes within the convective cloud life cycle is examined in connection with previous and subsequent dynamical and microphysical transitions. This work will guide the strategic scan patterns that will be used during CACTI and RELAMPAGO.

  6. Investigation of multi-scale flash-weakening of rock surfaces during high speed slip

    NASA Astrophysics Data System (ADS)

    Barbery, M. R.; Saber, O.; Chester, F. M.; Chester, J. S.

    2017-12-01

    A significant reduction in the coefficient of friction of rock can occur if the sliding velocity approaches seismic rates, as a consequence of the weakening of microscopic sliding contacts by flash heating. Using a high-acceleration and high-speed biaxial apparatus equipped with a high-speed infrared (IR) camera to capture thermographs of the sliding surface, we have documented the heterogeneous distribution of temperature on flash-heated decimetric surfaces, characterized by linear arrays of high-temperature, mm-size spots and streaks. Numerical models that are informed by the character of flash-heated surfaces and that consider the coupling of changes in temperature and changes in the friction of contacts support the hypothesis that independent mechanisms of flash weakening operate at different contact scales. Here, we report on new experiments that provide additional constraints on the life-times and rest-times of populations of millimeter-scale contacts. Rock friction experiments conducted on Westerly granite samples in a double-direct shear configuration achieve velocity steps from 1 mm/s to 900 mm/s at 100 g acceleration over 2 mm of displacement, with normal stresses of 22-36 MPa and 30 mm of displacement during sustained high-speed sliding. Sliding surfaces are machined to a roughness similar to natural fault surfaces, which allows us to control the characteristics of millimeter-scale contact populations. Thermographs of the sliding surface show temperatures up to 200 °C on millimeter-scale contacts, in agreement with 1-D heat conduction model estimates of 180 °C. Preliminary comparison of thermal modeling results and experimental observations demonstrates that we can distinguish the different life-times and rest-times of contacts in thermographs and the corresponding frictional weakening behaviors. Continued work on machined surfaces that lead to different contact population characteristics will be used to test the multi-scale and multi-mechanism hypothesis for flash weakening during seismic slip on rough fault surfaces.

  7. Competitive dynamics of lexical innovations in multi-layer networks

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2014-04-01

    We study the introduction of lexical innovations into a community of language users. Lexical innovations, i.e. new terms added to people's vocabulary, play an important role in the process of language evolution. Nowadays, information is spread through a variety of networks, including, among others, online and offline social networks and the World Wide Web. The entire system, comprising networks of different nature, can be represented as a multi-layer network. In this context, the diffusion of lexical innovations occurs in a peculiar fashion. In particular, a lexical innovation can undergo three different processes: its original meaning may be accepted; its meaning may be changed or misunderstood (e.g. when not properly explained), so that more than one meaning emerges in the population; lastly, in the case of a loan word, it may be translated into the population's language (i.e. defining a new lexical innovation or using a synonym) or into a dialect spoken by part of the population. Therefore, lexical innovations cannot be considered simply as information. We develop a model for analyzing this scenario using a multi-layer network comprising a social network and a media network. The latter represents the set of all information systems of a society, e.g. television, the World Wide Web and radio. Furthermore, we identify temporal directed edges between the nodes of these two networks. In particular, at each time-step, nodes of the media network can be connected to randomly chosen nodes of the social network and vice versa. In doing so, information spreads through the whole system and people can share a lexical innovation with their neighbors or, in the event they work as reporters, by using media nodes. Lastly, we use the concept of "linguistic sign" to model lexical innovations, showing its fundamental role in the study of these dynamics. Numerous numerical simulations have been performed to analyze the proposed model and its outcomes.

  8. A Systems Approach towards an Intelligent and Self-Controlling Platform for Integrated Continuous Reaction Sequences**

    PubMed Central

    Ingham, Richard J; Battilocchio, Claudio; Fitzpatrick, Daniel E; Sliwinski, Eric; Hawkins, Joel M; Ley, Steven V

    2015-01-01

    Performing reactions in flow can offer major advantages over batch methods. However, laboratory flow chemistry processes are currently often limited to single steps or short sequences due to the complexity involved with operating a multi-step process. Using new modular components for downstream processing, coupled with control technologies, more advanced multi-step flow sequences can be realized. These tools are applied to the synthesis of 2-aminoadamantane-2-carboxylic acid. A system comprising three chemistry steps and three workup steps was developed, having sufficient autonomy and self-regulation to be managed by a single operator. PMID:25377747

  9. Space Photovoltaic Research and Technology, 1989

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Remarkable progress on a wide variety of approaches in space photovoltaics, for both near- and far-term applications, is reported. Papers were presented in a variety of technical areas, including multi-junction cell technology, GaAs and InP cells, system studies, cell and array development, and non-solar direct conversion. Five workshops were held to discuss the following topics: mechanical versus monolithic multi-junction cells; strategy in space flight experiments; non-solar direct conversion; indium phosphide cells; and space cell theory and modeling.

  10. Single-step affinity purification of enzyme biotherapeutics: a platform methodology for accelerated process development.

    PubMed

    Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean

    2014-01-01

    Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.

  11. A new indicator framework for quantifying the intensity of the terrestrial water cycle

    NASA Astrophysics Data System (ADS)

    Huntington, Thomas G.; Weiskel, Peter K.; Wolock, David M.; McCabe, Gregory J.

    2018-04-01

    A quantitative framework for characterizing the intensity of the water cycle over land is presented, and illustrated using a spatially distributed water-balance model of the conterminous United States (CONUS). We approach water cycle intensity (WCI) from a landscape perspective; WCI is defined as the sum of precipitation (P) and actual evapotranspiration (AET) over a spatially explicit landscape unit of interest, averaged over a specified time period (step) of interest. The time step may be of any length for which data or simulation results are available (e.g., sub-daily to multi-decadal). We define the storage-adjusted runoff (Q′) as the sum of actual runoff (Q) and the rate of change in soil moisture storage (ΔS/Δt, positive or negative) during the time step of interest. The Q′ indicator is demonstrated to be mathematically complementary to WCI, in a manner that allows graphical interpretation of their relationship. For the purposes of this study, the indicators were demonstrated using long-term, spatially distributed model simulations with an annual time step. WCI was found to increase over most of the CONUS between the 1945 to 1974 and 1985 to 2014 periods, driven primarily by increases in P. In portions of the western and southeastern CONUS, Q′ decreased because of decreases in Q and soil moisture storage. Analysis of WCI and Q′ at temporal scales ranging from sub-daily to multi-decadal could improve understanding of the wide spectrum of hydrologic responses that have been attributed to water cycle intensification, as well as trends in those responses.
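    The two indicators follow directly from the definitions above, and the standard water balance P = AET + Q + ΔS/Δt makes their complementarity explicit: Q′ = WCI - 2·AET for a closed budget. A minimal sketch with placeholder annual values (not CONUS model output):

```python
import numpy as np

# WCI = P + AET; Q' = Q + dS/dt, per landscape unit and time step.
# Placeholder annual fluxes in mm/yr for one landscape unit:
P   = np.array([900.0, 950.0, 870.0, 1010.0])
AET = np.array([550.0, 560.0, 540.0, 580.0])
Q   = np.array([340.0, 380.0, 320.0, 420.0])
dS  = np.array([ 10.0,  10.0,  10.0,  10.0])   # soil-moisture storage change

wci = P + AET
q_adj = Q + dS
print(wci.mean(), q_adj.mean())

# Water balance P = AET + Q + dS implies the complementarity relation:
assert np.allclose(wci - 2 * AET, q_adj)
```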

  12. A fate model for nitrogen dynamics in the Scheldt basin

    NASA Astrophysics Data System (ADS)

    Haest, Pieter Jan; van der Kwast, Johannes; Broekx, Steven; Seuntjens, Piet

    2010-05-01

    The European Union (EU) adopted the Water Framework Directive (WFD) in 2000, ensuring that all aquatic ecosystems meet 'good ecological status' by 2015. However, the large population density in combination with agricultural and industrial activities in some European river basins poses challenges for river basin managers in meeting this status. The EU-financed AQUAREHAB project (FP7) specifically examines the ecological and economic impact of innovative rehabilitation technologies for multi-pressured degraded waters. For this purpose, a numerical spatio-temporal model is developed to evaluate innovative technologies versus conventional measures at the river basin scale. The numerical model describes the nitrogen dynamics in the Scheldt river basin. Nitrogen is examined since nitrate is of specific concern in Belgium, the country comprising the largest area of the Scheldt basin. The Scheldt basin encompasses 20,000 km² and houses over 10 million people. The governing factors describing nitrogen fluxes at this large scale differ from the field scale, with a larger uncertainty on input data. As such, the environmental modeling language PCRaster was selected, since it was found to provide a balance between process descriptions and necessary input data. The resulting GIS-based model simulates the nitrogen dynamics in the Scheldt basin with a yearly time step and a spatial resolution of 1 square kilometer. A smaller time step is being evaluated depending on the description of the hydrology. The model discerns 4 compartments in the Scheldt basin: the soil, shallow groundwater, deep groundwater and the river network. Runoff and water flow occur along the steepest slope in all model compartments. Diffuse emissions and direct inputs are calculated from administrative and statistical data. These emissions are geographically defined or are distributed over the domain according to land use and connectivity to the sewer system. The reactive mass transport is described using literature data. Process knowledge on the innovative rehabilitation technologies, i.e. wetlands and riparian zones, will be derived from lab and field scale experiments. Datasets provided at the EU level are used to calibrate the model when available. The fate model will be used to create a database-driven Decision Support System (DSS) in which costs of measures and ecotoxicological effects are considered. The DSS can then be used to compare alternative combinations of rehabilitation technologies versus conventional measures in the Scheldt river basin taking into account the ecological status of the river basin.
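
    The four-compartment structure lends itself to a simple yearly mass balance. The sketch below is a minimal box-model illustration of that structure only; all rate constants and inputs are hypothetical, and it is not the calibrated PCRaster model:

      # Yearly-step nitrogen box model: soil, shallow groundwater (sgw),
      # deep groundwater (dgw), river network. Hypothetical rate constants.
      def nitrogen_step(stocks, diffuse_input, direct_input, k):
          soil, sgw, dgw, river = stocks
          leach  = k["leach"]  * soil   # soil -> shallow groundwater
          perc   = k["perc"]   * sgw    # shallow -> deep groundwater
          base_s = k["base_s"] * sgw    # shallow groundwater -> river
          base_d = k["base_d"] * dgw    # deep groundwater -> river
          soil  = soil + diffuse_input - leach
          sgw   = sgw + leach - perc - base_s - k["denit"] * sgw   # denitrification loss
          dgw   = dgw + perc - base_d - k["denit"] * dgw
          river = river + direct_input + base_s + base_d          # cumulative delivery
          return [soil, sgw, dgw, river]

      stocks = [100.0, 50.0, 20.0, 0.0]   # kg N per km2, hypothetical
      k = dict(leach=0.3, perc=0.1, base_s=0.2, base_d=0.05, denit=0.02)
      for year in range(5):
          stocks = nitrogen_step(stocks, diffuse_input=40.0, direct_input=5.0, k=k)
      print(stocks)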

  13. Micromechanical modeling of short glass-fiber reinforced thermoplastics-Isotropic damage of pseudograins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kammoun, S.; Brassart, L.; Doghri, I.

    A micromechanical damage modeling approach is presented to predict the overall elasto-plastic behavior and damage evolution in short fiber reinforced composite materials. The practical use of the approach is for injection molded thermoplastic parts reinforced with short glass fibers. The modeling proceeds as follows. The representative volume element is decomposed into a set of pseudograins, the damage of which progressively affects the overall stiffness and strength up to total failure. Each pseudograin is a two-phase composite with aligned inclusions having the same aspect ratio. A two-step mean-field homogenization procedure is adopted. In the first step, the pseudograins are homogenized individually according to the Mori-Tanaka scheme. The second step consists in a self-consistent homogenization of the homogenized pseudograins. An isotropic damage model is applied at the pseudograin level. The model is implemented as a UMAT in the finite element code ABAQUS. The model is shown to reproduce the strength and the anisotropy (Lankford coefficient) during uniaxial tensile tests on samples cut along different directions relative to the injection flow direction.

  14. Micromechanical modeling of short glass-fiber reinforced thermoplastics-Isotropic damage of pseudograins

    NASA Astrophysics Data System (ADS)

    Kammoun, S.; Brassart, L.; Robert, G.; Doghri, I.; Delannay, L.

    2011-05-01

    A micromechanical damage modeling approach is presented to predict the overall elasto-plastic behavior and damage evolution in short fiber reinforced composite materials. The practical use of the approach is for injection molded thermoplastic parts reinforced with short glass fibers. The modeling proceeds as follows. The representative volume element is decomposed into a set of pseudograins, the damage of which progressively affects the overall stiffness and strength up to total failure. Each pseudograin is a two-phase composite with aligned inclusions having the same aspect ratio. A two-step mean-field homogenization procedure is adopted. In the first step, the pseudograins are homogenized individually according to the Mori-Tanaka scheme. The second step consists in a self-consistent homogenization of the homogenized pseudograins. An isotropic damage model is applied at the pseudograin level. The model is implemented as a UMAT in the finite element code ABAQUS. The model is shown to reproduce the strength and the anisotropy (Lankford coefficient) during uniaxial tensile tests on samples cut along different directions relative to the injection flow direction.
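
    The isotropic damage idea applied at the pseudograin level can be illustrated in one dimension: stress is scaled by (1 - D), with the damage variable D growing with strain. The sketch below assumes an exponential damage law and made-up parameters; the two-step Mori-Tanaka/self-consistent homogenization itself is not reproduced:

      import numpy as np

      E = 5.0e9            # undamaged pseudograin modulus (Pa), assumed
      eps0, m = 0.02, 2.0  # damage law parameters, assumed

      def damaged_stress(eps):
          D = 1.0 - np.exp(-(eps / eps0) ** m)   # isotropic damage variable in [0, 1)
          return (1.0 - D) * E * eps             # stress softens as damage accumulates

      strain = np.linspace(0.0, 0.08, 9)
      print(damaged_stress(strain))              # rises, peaks, then softens to failure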

  15. Analysing UK clinicians' understanding of cognitive symptoms in major depression: A survey of primary care physicians and psychiatrists.

    PubMed

    McAllister-Williams, R Hamish; Bones, Kate; Goodwin, Guy M; Harrison, John; Katona, Cornelius; Rasmussen, Jill; Strong, Sarah; Young, Allan H

    2017-01-01

    Cognitive dysfunction occurs in depression and can persist into remission. It impacts on patient functioning but remains largely unrecognised, unmonitored and untreated. We explored understanding of cognitive dysfunction in depression among UK clinicians, using a multi-step consultation process. Step 1: a multi-stakeholder steering committee identified key themes of burden, detection and management of cognitive dysfunction in depression, and developed statements on each to explore understanding and degree of agreement among clinicians. Step 2: 100 general practitioners (GPs) and 100 psychiatrists indicated their level of agreement with these statements. Step 3: the steering committee reviewed responses and highlighted priority areas for future education and research. There was agreement that clinicians are not fully aware of cognitive dysfunction in depression. Views of the relationship between cognitive dysfunction and the severity of other depressive symptoms were not consistent with the literature. In particular, there was a lack of recognition that some cognitive dysfunction can persist into remission. There was understandable uncertainty around treatment options, given the current limited evidence base. However, it was recognised that cognitive dysfunction is an area of unmet need and that there is a lack of objective tests of cognition appropriate for depressed patients that can be easily implemented in the clinic. Respondents are likely to be 'led' by the direction of the statements they reviewed. The study did not involve patients and carers. UK clinicians should undergo training regarding cognitive dysfunction in depression, and further research is needed into its assessment, treatment and monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    NASA Astrophysics Data System (ADS)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification on customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on a bi-directional recurrent neural network, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.
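
    A minimal sketch of the multi-input/output idea, assuming a PyTorch implementation with illustrative sizes; the attention mechanism is simplified to direct use of the auxiliary-label probabilities, so this is not the authors' exact architecture:

      import torch
      import torch.nn as nn

      class MultiInputBiGRU(nn.Module):
          # Two independent Bi-GRUs encode word and part-of-speech (POS) sequences;
          # auxiliary-label probabilities from the POS branch are concatenated with
          # the sentence representation for the final multi-label prediction.
          def __init__(self, vocab, pos_vocab, emb=64, hid=64, n_aux=4, n_labels=8):
              super().__init__()
              self.word_emb = nn.Embedding(vocab, emb)
              self.pos_emb  = nn.Embedding(pos_vocab, emb)
              self.word_gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
              self.pos_gru  = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
              self.aux_head = nn.Linear(2 * hid, n_aux)           # auxiliary labels
              self.out_head = nn.Linear(2 * hid + n_aux, n_labels)

          def forward(self, words, pos):
              _, hw = self.word_gru(self.word_emb(words))         # hw: (2, B, hid)
              _, hp = self.pos_gru(self.pos_emb(pos))
              sent = torch.cat([hw[0], hw[1]], dim=-1)            # sentence representation
              aux  = torch.softmax(self.aux_head(torch.cat([hp[0], hp[1]], -1)), -1)
              return self.out_head(torch.cat([sent, aux], -1)), aux

      model = MultiInputBiGRU(vocab=1000, pos_vocab=50)
      logits, aux = model(torch.randint(0, 1000, (2, 12)), torch.randint(0, 50, (2, 12)))
      print(logits.shape)   # torch.Size([2, 8]) -- multi-label logits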

  17. Multi-issue Agent Negotiation Based on Fairness

    NASA Astrophysics Data System (ADS)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce services have become a research hotspot, and making the agent negotiation process fast and efficient is a main research direction in this area. In multi-issue models, MAUT (Multi-Attribute Utility Theory) and its derived theories usually give little consideration to the fairness of the two negotiators. This work presents a general model of agent negotiation that considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on the degree of satisfaction, learn online to acquire the opponent's knowledge from historical interaction instances and the current negotiation, and make concessions dynamically based on a fairness objective. By building this negotiation model, the bilateral negotiation achieves higher efficiency and a fairer deal.

  18. Exact diagonalization of quantum lattice models on coprocessors

    NASA Astrophysics Data System (ADS)

    Siro, T.; Harju, A.

    2016-10-01

    We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
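
    For reference, the kernel being timed is essentially one step of the Lanczos three-term recurrence, whose cost is dominated by a matrix-vector product. A minimal NumPy sketch with an arbitrary dense symmetric matrix (matrix size and values are illustrative):

      import numpy as np

      def lanczos_step(H, v_prev, v_curr, beta_prev):
          w = H @ v_curr - beta_prev * v_prev   # dominant cost: H @ v
          alpha = np.dot(v_curr, w)             # diagonal entry of the tridiagonal matrix
          w -= alpha * v_curr
          beta = np.linalg.norm(w)              # off-diagonal entry
          return w / beta, alpha, beta          # next orthonormal Lanczos vector

      n = 1000
      H = np.random.rand(n, n); H = 0.5 * (H + H.T)       # symmetric test matrix
      v0 = np.zeros(n)
      v1 = np.random.rand(n); v1 /= np.linalg.norm(v1)
      v2, alpha, beta = lanczos_step(H, v0, v1, 0.0)
      print(alpha, beta)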

  19. Linking multi-temporal satellite imagery to coastal wetland dynamics and bird distribution

    USGS Publications Warehouse

    Pickens, Bradley A.; King, Sammy L.

    2014-01-01

    Ecosystems are characterized by dynamic ecological processes, such as flooding and fires, but spatial models are often limited to a single measurement in time. The characterization of direct, fine-scale processes affecting animals is potentially valuable for management applications, but these are difficult to quantify over broad extents. Direct predictors are also expected to improve transferability of models beyond the area of study. Here, we investigated the ability of non-static and multi-temporal habitat characteristics to predict marsh bird distributions, while testing model generality and transferability between two coastal habitats. Distribution models were developed for king rail (Rallus elegans), common gallinule (Gallinula galeata), least bittern (Ixobrychus exilis), and purple gallinule (Porphyrio martinica) in fresh and intermediate marsh types in the northern Gulf Coast of Louisiana and Texas, USA. For model development, repeated point count surveys of marsh birds were conducted from 2009 to 2011. Landsat satellite imagery was used to quantify both annual conditions and cumulative, multi-temporal habitat characteristics. We used multivariate adaptive regression splines to quantify bird-habitat relationships for fresh, intermediate, and combined marsh habitats. Multi-temporal habitat characteristics ranked as more important than single-date characteristics, as temporary water was most influential in six of eight models. Predictive power was greater for marsh type-specific models compared to general models and model transferability was poor. Birds in fresh marsh selected for annual habitat characterizations, while birds in intermediate marsh selected for cumulative wetness and heterogeneity. Our findings emphasize that dynamic ecological processes can affect species distribution and species-habitat relationships may differ with dominant landscape characteristics.

  20. RF plasma modeling of the Linac4 H- ion source

    NASA Astrophysics Data System (ADS)

    Mattei, S.; Ohta, M.; Hatayama, A.; Lettry, J.; Kawamura, Y.; Yasumoto, M.; Schmitzer, C.

    2013-02-01

    This study focuses on the modelling of the ICP RF-plasma in the Linac4 H- ion source currently being constructed at CERN. A self-consistent model of the plasma dynamics with the RF electromagnetic field has been developed by a PIC-MCC method. In this paper, the model is applied to the analysis of a low density plasma discharge initiation, with particular interest in the effect of the external magnetic field on the plasma properties, such as wall loss, electron density and electron energy. The employment of a multi-cusp magnetic field effectively limits the wall losses, particularly in the radial direction. Preliminary results, however, indicate that this configuration reduces the heating efficiency. The effect is possibly due to trapping of electrons in the multi-cusp magnetic field, which prevents their continuous acceleration in the azimuthal direction.

  1. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    PubMed

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
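
    The small-kernel, multi-scale idea can be sketched compactly: stacked 3x3x3 convolutions whose intermediate feature maps are concatenated into a 1x1x1 classification head. Channel counts, depth and class count below are illustrative, not the published architecture:

      import torch
      import torch.nn as nn

      class Small3DFCN(nn.Module):
          def __init__(self, n_classes=15):
              super().__init__()
              self.b1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.PReLU())
              self.b2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.PReLU())
              self.b3 = nn.Sequential(nn.Conv3d(32, 32, 3, padding=1), nn.PReLU())
              # the 1x1x1 head sees features from several depths at once,
              # mixing local (shallow) and more global (deep) context
              self.head = nn.Conv3d(16 + 32 + 32, n_classes, kernel_size=1)

          def forward(self, x):
              f1 = self.b1(x); f2 = self.b2(f1); f3 = self.b3(f2)
              return self.head(torch.cat([f1, f2, f3], dim=1))  # dense voxel-wise logits

      net = Small3DFCN()
      print(net(torch.randn(1, 1, 32, 32, 32)).shape)  # torch.Size([1, 15, 32, 32, 32])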

  2. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    PubMed

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
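
    The detection logic can be sketched as: reconstruct each pyramid level with the (trained) autoencoder, threshold the reconstruction residual, and merge the upsampled masks. In the sketch below the autoencoder is a crude stand-in (patch mean), so it only flags gross outliers; everything here is illustrative:

      import numpy as np

      def downsample(img):                     # simple 2x pyramid level
          return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                         img[0::2, 1::2] + img[1::2, 1::2])

      def autoencoder(level):                  # placeholder reconstruction model;
          return level.mean() + 0.0 * level    # a trained network reconstructs texture

      def detect(img, levels=3, thresh=0.5):
          mask, level = np.zeros(img.shape, dtype=bool), img
          for l in range(levels):
              residual = np.abs(level - autoencoder(level))   # pixel-wise indicator
              m = residual > thresh * residual.max()
              up = np.kron(m, np.ones((2 ** l, 2 ** l), dtype=bool))  # back to full size
              mask |= up[:img.shape[0], :img.shape[1]]        # synthesize across channels
              level = downsample(level)
          return mask

      img = np.random.rand(64, 64)
      img[20:24, 30:34] += 2.0                 # synthetic "defect"
      print(detect(img).sum())                 # nonzero: defect region flagged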

  3. Perry's Scheme of Intellectual and Epistemological Development as a Framework for Describing Student Difficulties in Learning Organic Chemistry

    ERIC Educational Resources Information Center

    Grove, Nathaniel P.; Bretz, Stacey Lowery

    2010-01-01

    We have investigated student difficulties with the learning of organic chemistry. Using Perry's Model of Intellectual Development as a framework revealed that organic chemistry students who function as dualistic thinkers struggle with the complexity of the subject matter. Understanding substitution/elimination reactions and multi-step syntheses is…

  4. Probabilities and Predictions: Modeling the Development of Scientific Problem-Solving Skills

    ERIC Educational Resources Information Center

    Stevens, Ron; Johnson, David F.; Soller, Amy

    2005-01-01

    The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations, the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative…

  5. Radiometric calibration of spacecraft using small lunar images

    USGS Publications Warehouse

    Kieffer, Hugh H.; Anderson, James M.; Becker, Kris J.

    1999-01-01

    In this study, the data reduction steps that can be used to extract the lunar irradiance from low resolution images of the Moon are examined and the attendant uncertainties are quantitatively assessed. The response integrated over an image is compared to a lunar irradiance model being developed from terrestrial multi-band photometric observations over the 350-2500 nm range.

  6. Modeling Humans as Reinforcement Learners: How to Predict Human Behavior in Multi-Stage Games

    NASA Technical Reports Server (NTRS)

    Lee, Ritchie; Wolpert, David H.; Backhaus, Scott; Bent, Russell; Bono, James; Tracey, Brendan

    2011-01-01

    This paper introduces a novel framework for modeling interacting humans in a multi-stage game environment by combining concepts from game theory and reinforcement learning. The proposed model has the following desirable characteristics: (1) bounded rational players, (2) strategic players (i.e., players account for one another's reward functions), and (3) computational feasibility even on moderately large real-world systems. To do this we extend level-K reasoning to policy space so that, for the first time, it can handle multiple time steps. This allows us to decompose the problem into a series of smaller ones where we can apply standard reinforcement learning algorithms. We investigate these ideas in a cyber-battle scenario over a smart power grid and discuss the relationship between the behavior predicted by our model and what one might expect of real human defenders and attackers.

  7. Embarked electrical network robust control based on singular perturbation model.

    PubMed

    Abdeljalil Belhaj, Lamya; Ait-Ahmed, Mourad; Benkhoris, Mohamed Fouad

    2014-07-01

    This paper deals with a control-oriented modelling approach for embarked (on-board) electrical networks, which can be described as strongly coupled multi-source, multi-load systems with nonlinear and poorly known characteristics. The model has to be representative of the system behaviour and easy to handle for regulator synthesis. As a first step, each alternator is modelled and linearized around an operating point, and then subdivided into two lower-order subsystems according to singular perturbation theory. RST regulators are designed for each subsystem and tested by means of a software test bench that allows prediction of network behaviour in both steady and transient states. Finally, the designed controllers are implemented on an experimental benchmark consisting of two alternators supplying loads, in order to test the dynamic performance under realistic conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Deadbeat Predictive Controllers

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1997-01-01

    Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix. The system identification must be performed first and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law. It combines the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feed forward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
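
    The multi-step-ahead output prediction underlying the first algorithm can be sketched for a known state-space model (A, B, C): stack the predicted outputs as Y = O x0 + T U, with O the extended observability matrix and T the block-Toeplitz matrix of Markov parameters, then solve for the input sequence that zeros the predicted outputs over the horizon. The paper works from identified input/output data instead; the matrices below are illustrative:

      import numpy as np

      A = np.array([[1.0, 0.1], [0.0, 0.9]])
      B = np.array([[0.5], [1.0]])
      C = np.array([[1.0, 0.0]])
      p = 4                                            # prediction horizon

      # Y = O x0 + T U, with T[i, j] = C A^(i-j) B for j <= i (Markov parameters).
      O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, p + 1)])
      T = np.zeros((p, p))
      for i in range(p):
          for j in range(i + 1):
              T[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

      x0 = np.array([1.0, -0.5])
      U = np.linalg.lstsq(T, -O @ x0, rcond=None)[0]   # zero the predicted outputs
      print(O @ x0 + T @ U)                            # ~ zeros: deadbeat over p steps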

  9. Stochastic modelling of animal movement.

    PubMed

    Smouse, Peter E; Focardi, Stefano; Moorcroft, Paul R; Kie, John G; Forester, James D; Morales, Juan M

    2010-07-27

    Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal 'settling down', accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry.
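
    The contrast between an over-dispersed (Lévy) step-length distribution and a thin-tailed one is easy to see numerically; a minimal sketch with hypothetical parameters:

      import numpy as np

      rng = np.random.default_rng(1)

      def walk(n, levy=False, mu=2.0):
          # Lévy: heavy-tailed Pareto step lengths, p(l) ~ l^(-mu);
          # otherwise a thin-tailed (Rayleigh) Brownian-like walk.
          steps = rng.pareto(mu - 1.0, n) + 1.0 if levy else rng.rayleigh(1.0, n)
          angles = rng.uniform(0.0, 2.0 * np.pi, n)    # uncorrelated headings
          return np.cumsum(np.column_stack([steps * np.cos(angles),
                                            steps * np.sin(angles)]), axis=0)

      # The Lévy walk typically covers a far larger bounding box for the same n.
      print(np.ptp(walk(1000), axis=0), np.ptp(walk(1000, levy=True), axis=0))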

  10. Multi-Decadal Coastal Behavioural States From A Fusion Of Geohistorical Conceptual Modelling With 2-D Morphodynamic Modelling

    NASA Astrophysics Data System (ADS)

    Goodwin, I. D.; Mortlock, T.

    2016-02-01

    Geohistorical archives of shoreline and foredune planform geometry provides a unique evidence-based record of the time integral response to coupled directional wave climate and sediment supply variability on annual to multi-decadal time scales. We develop conceptual shoreline modelling from the geohistorical shoreline archive using a novel combination of methods, including: LIDAR DEM and field mapping of coastal geology; a decadal-scale climate reconstruction of sea-level pressure, marine windfields, and paleo-storm synoptic type and frequency, and historical bathymetry. The conceptual modelling allows for the discrimination of directional wave climate shifts and the relative contributions of cross-shore and along-shore sand supply rates at multi-decadal resolution. We present regional examples from south-eastern Australia over a large latitudinal gradient from subtropical Queensland (S 25°) to mid-latitude Bass Strait (S 40°) that illustrate the morphodynamic evolution and reorganization to wave climate change. We then use the conceptual modeling to inform a two-dimensional coupled spectral wave-hydrodynamic-morphodynamic model to investigate the shoreface response to paleo-directional wind and wave climates. Unlike one-line shoreline modelling, this fully dynamical approach allows for the investigation of cumulative and spatial bathymetric change due to wave-induced currents, as well as proxy-shoreline change. The fusion of the two modeling approaches allows for: (i) the identification of the natural range of coastal planform geometries in response to wave climate shifts; and, (ii) the decomposition of the multidecadal coastal change into the cross-shore and along-shore sand supply drivers, according to the best-matching planforms.

  11. Changes in yields and their variability at different levels of global warming

    NASA Astrophysics Data System (ADS)

    Childers, Katelin

    2015-04-01

    An assessment of climate change impacts at different levels of global warming is crucial to inform the political discussion about mitigation targets as well as for the inclusion of climate change impacts in Integrated Assessment Models (IAMs) that generally only provide global mean temperature change as an indicator of climate change. While there is a well-established framework for the scalability of regional temperature and precipitation changes with global mean temperature change we provide an assessment of the extent to which impacts such as crop yield changes can also be described in terms of global mean temperature changes without accounting for the specific underlying emissions scenario. Based on multi-crop-model simulations of the four major cereal crops (maize, rice, soy, and wheat) on a 0.5 x 0.5 degree global grid generated within ISI-MIP, we show the average spatial patterns of projected crop yield changes at one half degree warming steps. We find that emissions scenario dependence is a minor component of the overall variance of projected yield changes at different levels of global warming. Furthermore, scenario dependence can be reduced by accounting for the direct effects of CO2 fertilization in each global climate model (GCM)/impact model combination through an inclusion of the global atmospheric CO2 concentration as a second predictor. The choice of GCM output used to force the crop model simulations accounts for a slightly larger portion of the total yield variance, but the greatest contributor to variance in both global and regional crop yields and at all levels of warming, is the inter-crop-model spread. The unique multi impact model ensemble available with ISI-MIP data also indicates that the overall variability of crop yields is projected to increase in conjunction with increasing global mean temperature. This result is consistent throughout the ensemble of impact models and across many world regions. Such a hike in yield volatility could have significant policy implications by affecting food prices and supplies.

  12. Recent progress on understanding the mechanisms of amyloid nucleation.

    PubMed

    Chatani, Eri; Yamamoto, Naoki

    2018-04-01

    Amyloid fibrils are supramolecular protein assemblies with a fibrous morphology and cross-β structure. The formation of amyloid fibrils typically follows a nucleation-dependent polymerization mechanism, in which a one-step nucleation scheme has widely been accepted. However, a variety of oligomers have been identified in early stages of fibrillation, and a nucleated conformational conversion (NCC) mechanism, in which oligomers serve as a precursor of amyloid nucleation and convert to amyloid nuclei, has been proposed. This development has raised the need to consider more complicated multi-step nucleation processes in addition to the simplest one-step process, and evidence for the direct involvement of oligomers as nucleation precursors has been obtained both experimentally and theoretically. Interestingly, the NCC mechanism has some analogy with the two-step nucleation mechanism proposed for inorganic and organic crystals and protein crystals, although a more dramatic conformational conversion of proteins should be considered in amyloid nucleation. Clarifying the properties of the nucleation precursors of amyloid fibrils in detail, in comparison with those of crystals, will allow a better understanding of the nucleation of amyloid fibrils and pave the way to develop techniques to regulate it.

  13. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study the effect of multi-passes warm rolling of AZ31 magnesium alloy on texture, microstructure, grain size variation and hardness of as cast sample (A) and two rolled samples (B & C) taken from different locations of the as-cast ingot was investigated. The purpose was to enhance the formability of AZ31 alloy in order to help manufacturability. It was observed that multi-passes warm rolling (250°C to 350°C) of samples B & C with initial thickness 7.76mm and 7.73 mm was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. The step numbers 1 to 4 consist of 5, 2, 11 and 3 passes respectively, the remaining steps 5 to 10 were single pass rolls. In each discrete step a fixed roll gap is used in a way that true strain per step increases very slowly from 0.0067 in the first step to 0.7118 in the 26th step. Both samples B & C showed very similar behavior after 26th pass and were successfully rolled up to 85% thickness reduction. However, during 10th step (27th pass) with a true strain value of 0.772 the sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction. The equiaxed grain structure of sample C may be due to the effective involvement of dynamic recrystallization (DRX) which led to formation of these grains with relatively low misorientations with respect to the parent as cast grains. The sample B on the other hand showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90 % total reduction and DRX could not effectively play its role due to heavy strain and lack of plastic deformation systems. The microstructure of as cast sample showed a near-random texture (mrd 4.3), with average grain size of 44 & micro-hardness of 52 Hv. The grain size of sample B and C was 14μm and 27μm respectively and mrd intensity of basal texture was 5.34 and 5.46 respectively. The hardness of sample B and C came out to be 91 and 66 Hv respectively due to reduction in grain size and followed the well known Hall-Petch relationship.

  14. Ultramap: the all in One Photogrammetric Solution

    NASA Astrophysics Data System (ADS)

    Wiechert, A.; Gruber, M.; Karner, K.

    2012-07-01

    This paper describes in detail the dense matcher developed since years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was exclusively developed for and used by Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release mid-2012. That represents a revolutionary step in digital photogrammetry. The dense matcher generates digital surface models (DSM) and digital terrain models (DTM) automatically out of a set of overlapping UltraCam images. The models have an outstanding point density of several hundred points per square meter and sub-pixel accuracy and are generated automatically. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process. This rectification step ensures a very efficient processing and detects occluded areas by applying a back-matching step. In this dense image matching process a cost function consisting of a matching score as well as a smoothness term is minimized. In the second step the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 features also an additional step which is presented in this paper, a complete automated true-ortho and ortho workflow. For this, the UltraCam images are combined with the DSM or DTM in an automated rectification step and that results in high quality true-ortho or ortho images as a result of a highly automated workflow. The paper presents the new workflow and first results.

  15. Modeling DNA Replication.

    ERIC Educational Resources Information Center

    Bennett, Joan

    1998-01-01

    Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)

  16. Shape coexistence and the role of axial asymmetry in 72Ge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayangeakaa, A. D.; Janssens, R. F.; Wu, C. Y.

    2016-01-22

    The quadrupole collectivity of low-lying states and the anomalous behavior of the0 + 2 and 2 + 3 levels in 72Ge are investigated via projectile multi-step Coulomb excitation with GRETINA and CHICO-2. A total of forty six E2 and M1 matrix elements connecting fourteen low-lying levels were determined using the least-squares search code, GOSIA. Evidence for triaxiality and shape coexistence, based on the model-independent shape invariants deduced from the Kumar–Cline sum rule, is presented. Moreover, these are interpreted using a simple two-state mixing model as well as multi-state mixing calculations carried out within the framework of the triaxial rotor model.more » Our results represent a significant milestone towards the understanding of the unusual structure of this nucleus.« less

  17. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling

    PubMed Central

    Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.

    2016-01-01

    Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003

  18. Estimation of Survival Probabilities for Use in Cost-effectiveness Analyses: A Comparison of a Multi-state Modeling Survival Analysis Approach with Partitioned Survival and Markov Decision-Analytic Modeling.

    PubMed

    Williams, Claire; Lewsey, James D; Mackay, Daniel F; Briggs, Andrew H

    2017-05-01

    Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results.
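
    The partitioned-survival bookkeeping described above reduces to areas under curves: mean time progression-free is the area under the PFS curve, and mean time in progression is the area between the OS and PFS curves. A sketch with hypothetical exponential curves, not the trial data:

      import numpy as np

      t = np.linspace(0.0, 15.0, 1501)                  # years
      os_curve  = np.exp(-0.10 * t)                     # hypothetical overall survival
      pfs_curve = np.exp(-0.25 * t)                     # hypothetical PFS (<= OS)

      mean_pfs  = np.trapz(pfs_curve, t)                # mean time progression-free
      mean_prog = np.trapz(os_curve - pfs_curve, t)     # mean time in progression state
      print(mean_pfs, mean_prog)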

  19. Effect of a perturbation-based balance training program on compensatory stepping and grasping reactions in older adults: a randomized controlled trial.

    PubMed

    Mansfield, Avril; Peters, Amy L; Liu, Barbara A; Maki, Brian E

    2010-04-01

    Compensatory stepping and grasping reactions are prevalent responses to sudden loss of balance and play a critical role in preventing falls. The ability to execute these reactions effectively is impaired in older adults. The purpose of this study was to evaluate a perturbation-based balance training program designed to target specific age-related impairments in compensatory stepping and grasping balance recovery reactions. This was a double-blind randomized controlled trial. The study was conducted at research laboratories in a large urban hospital. Thirty community-dwelling older adults (aged 64-80 years) with a recent history of falls or self-reported instability participated in the study. Participants were randomly assigned to receive either a 6-week perturbation-based (motion platform) balance training program or a 6-week control program involving flexibility and relaxation training. Features of balance reactions targeted by the perturbation-based program were: (1) multi-step reactions, (2) extra lateral steps following anteroposterior perturbations, (3) foot collisions following lateral perturbations, and (4) time to complete grasping reactions. The reactions were evoked during testing by highly unpredictable surface translation and cable pull perturbations, both of which differed from the perturbations used during training. Compared with the control program, the perturbation-based training led to greater reductions in frequency of multi-step reactions and foot collisions that were statistically significant for surface translations but not cable pulls. The perturbation group also showed significantly greater reduction in handrail contact time compared with the control group for cable pulls and a possible trend in this direction for surface translations. Further work is needed to determine whether a maintenance program is needed to retain the training benefits and to assess whether these benefits reduce fall risk in daily life. Perturbation-based training shows promise as an effective intervention to improve the ability of older adults to prevent themselves from falling when they lose their balance.

  20. Method to Improve Indium Bump Bonding via Indium Oxide Removal Using a Multi-Step Plasma Process

    NASA Technical Reports Server (NTRS)

    Dickie, Matthew R. (Inventor); Nikzad, Shouleh (Inventor); Greer, H. Frank (Inventor); Jones, Todd J. (Inventor); Vasquez, Richard P. (Inventor); Hoenk, Michael E. (Inventor)

    2012-01-01

    A process for removing indium oxide from indium bumps in a flip-chip structure to reduce contact resistance, by a multi-step plasma treatment. A first plasma treatment of the indium bumps with an argon, methane and hydrogen plasma reduces indium oxide, and a second plasma treatment with an argon and hydrogen plasma removes residual organics. The multi-step plasma process for removing indium oxide from the indium bumps is more effective in reducing the oxide, and yet does not require the use of halogens, does not change the bump morphology, does not attack the bond pad material or under-bump metallization layers, and creates no new mechanisms for open circuits.

  1. Multi-layer membrane model for mass transport in a direct ethanol fuel cell using an alkaline anion exchange membrane

    NASA Astrophysics Data System (ADS)

    Bahrami, Hafez; Faghri, Amir

    2012-11-01

    A one-dimensional, isothermal, single-phase model is presented to investigate the mass transport in a direct ethanol fuel cell incorporating an alkaline anion exchange membrane. The electrochemistry is analytically solved and the closed-form solution is provided for two limiting cases, assuming Tafel expressions for both oxygen reduction and ethanol oxidation. A multi-layer membrane model is proposed to properly account for the diffusive and electroosmotic transport of ethanol through the membrane. The fundamental differences in fuel crossover for positive and negative electroosmotic drag coefficients are discussed. It is found that ethanol crossover is significantly reduced upon using an alkaline anion exchange membrane instead of a proton exchange membrane, especially at current densities higher than 500 A m⁻².

  2. A computational kinetic model of diffusion for molecular systems.

    PubMed

    Teo, Ivan; Schulten, Klaus

    2013-09-28

    Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely, diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized by the protein's geometry, surrounding electrostatic field, and location. In order to account for solute energetics and mobility of solutes in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion based on a Markov State Model framework. Prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance ecMscS.
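
    The elementary move of the associated Brownian motion algorithm can be sketched in one dimension: a drift down the potential of mean force (PMF) plus noise scaled by the local diffusion coefficient. The D(x) and W(x) maps below are hypothetical stand-ins for the MD-derived maps, and the spurious-drift correction for spatially varying D is omitted for brevity:

      import numpy as np

      kT = 1.0                                      # thermal energy, reduced units

      def D(x):    return 0.5 + 0.4 * np.tanh(x)    # position-dependent diffusivity
      def dWdx(x): return 2.0 * x                   # gradient of a harmonic PMF

      def bd_step(x, dt, rng):
          drift = -D(x) * dWdx(x) / kT * dt         # overdamped drift term
          noise = np.sqrt(2.0 * D(x) * dt) * rng.standard_normal()
          return x + drift + noise

      rng = np.random.default_rng(0)
      x = 1.0
      for _ in range(1000):
          x = bd_step(x, 1e-3, rng)
      print(x)                                      # relaxes toward the PMF minimum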

  3. Network design and analysis for multi-enzyme biocatalysis.

    PubMed

    Blaß, Lisa Katharina; Weyler, Christian; Heinzle, Elmar

    2017-08-10

    As more and more biological reaction data become available, the full exploration of the enzymatic potential for the synthesis of valuable products opens up exciting new opportunities but is becoming increasingly complex. The manual design of multi-step biosynthesis routes involving enzymes from different organisms is very challenging. To harness the full enzymatic potential, we developed a computational tool for the directed design of biosynthetic production pathways for multi-step catalysis with in vitro enzyme cascades, cell hydrolysates and permeabilized cells. We present a method which encompasses the reconstruction of a genome-scale pan-organism metabolic network, path-finding and the ranking of the resulting pathway candidates for proposing suitable synthesis pathways. The network is based on reaction and reaction pair data from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the thermodynamics calculator eQuilibrator. The pan-organism network is especially useful for finding the most suitable pathway to a target metabolite from a thermodynamic or economic standpoint. However, our method can be used with any network reconstruction, e.g. for a specific organism. We implemented a path-finding algorithm based on a mixed-integer linear program (MILP) which takes into account both topology and stoichiometry of the underlying network. Unlike other methods we do not specify a single starting metabolite, but our algorithm searches for pathways starting from arbitrary start metabolites to a target product of interest. Using a set of biochemical ranking criteria including pathway length, thermodynamics and other biological characteristics such as number of heterologous enzymes or cofactor requirement, it is possible to obtain well-designed meaningful pathway alternatives. In addition, a thermodynamic profile, the overall reactant balance and potential side reactions as well as an SBML file for visualization are generated for each pathway alternative. We present an in silico tool for the design of multi-enzyme biosynthetic production pathways starting from a pan-organism network. The method is highly customizable and each module can be adapted to the focus of the project at hand. This method is directly applicable for (i) in vitro enzyme cascades, (ii) cell hydrolysates and (iii) permeabilized cells.

  4. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2016-12-01

    Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data is used to develop a framework to upscale and benchmark pore-scale models.

  5. Limits to the Extraction of Information from Multi-Hop Skywave Radar Signals

    DTIC Science & Technology

    2005-04-14

    equations to compute the eikonal rays through a model ionosphere, plotting the resulting trajectories in the range-height plane. Echoes received via these multi... kilometres. This extensive database is ideally suited to the statistical analysis of the directional, diurnal, seasonal...

  6. Rule Driven Multi-Objective Management (RDMOM) - An Alternative Form for Describing and Developing Effective Water Resources Management Strategies

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2011-12-01

    Economics provides a model for describing human behavior applied to the management of water resources, but that model assumes, among other things, that managers have a way of directly relating immediate actions to long-term economic outcomes. This is rarely the case in water resources problems, where uncertainty has significant impacts on the effectiveness of management strategies and where the management objectives are very difficult to render commensurable. The difficulty in using economics is even greater in multiparty disputes, where each party has a different relative value for each of the management objectives, and many of the management objectives are shared. A three-step approach to collaborative decision making can overcome these difficulties. The first step involves creating science-based performance measures and evaluation tools to estimate the effect of alternative management strategies on each of the non-commensurate objectives. The second step involves developing short-term surrogate operating objectives that implicitly deal with all of the aspects of the long-term uncertainty. Management that continually "optimizes" the short-term objectives subject to physical and other constraints that change through time can be characterized as Rule Driven Multi-Objective Management (RDMOM). RDMOM strategies are then tested in simulation models to provide the basis for evaluating performance measures. Participants in the collaborative process then engage in multiparty discussions that create new alternatives, and "barter" a deal. RDMOM does not assume that managers fully understand the link between current actions and long-term goals. Rather, it assumes that managers operate to achieve short-term surrogate objectives which they believe will achieve an appropriate balance of both short- and long-term incommensurable benefits. A reservoir rule curve is a simple, but often not particularly effective, example of the real-world implementation of RDMOM. Water managers find they can easily describe and explain their written and unwritten protocols using RDMOM, and that the use of short-term surrogates is both intellectually appealing and pragmatic. The identification of operating targets as short-term surrogates leads naturally to a critical discussion of long-term objectives, and to the development of performance measures for the long-term objectives. The transparency and practical feasibility of RDMOM-based strategies are often crucial to the success of collaborative efforts. Complex disputes in the Delaware and Susquehanna Basins, the Everglades and Lower East Coast South Florida, Southern Nevada, Washington DC and many others have been resolved using RDMOM strategies.

  7. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    NASA Astrophysics Data System (ADS)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for the reliability of wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, ignoring the time-dependent structure of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that accounts for the carry-over effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is presented to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark wind speed prediction methods are used to compare performance. The results show that the proposed model has the best performance over different time horizons.
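
    The essence of the proposed mapping strategy is that the regressor sees both the NWP forecast for the current slot and the measurement from the previous slot. A sketch with synthetic data and scikit-learn's SVR; the paper's per-bin models and wind farm data are not reproduced:

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      true = 8.0 + 2.0 * np.sin(np.arange(500) / 20.0)        # "actual" wind speed
      nwp  = true + rng.normal(0.0, 1.0, 500)                 # noisy NWP forecast

      X = np.column_stack([nwp[1:], true[:-1]])               # [NWP_t, HMD_{t-1}]
      y = true[1:]

      model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
      pred = model.predict(X[400:])
      print(np.sqrt(np.mean((pred - y[400:]) ** 2)),          # corrected RMSE
            np.sqrt(np.mean((nwp[401:] - y[400:]) ** 2)))     # raw NWP RMSE (larger)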

  8. Aging effect on step adjustments and stability control in visually perturbed gait initiation.

    PubMed

    Sun, Ruopeng; Cui, Chuyi; Shea, John B

    2017-10-01

    Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit original motor planning, select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustment (APA) during gait initiation were used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibited a significantly greater undershoot in foot placement to late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior direction revealed both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future study of screening deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. NGSCheckMate: software for validating sample identity in next-generation sequencing studies within and across data types.

    PubMed

    Lee, Sejoon; Lee, Soohyun; Ouellette, Scott; Park, Woong-Yang; Lee, Eunjung A; Park, Peter J

    2017-06-20

    In many next-generation sequencing (NGS) studies, multiple samples or data types are profiled for each individual. An important quality control (QC) step in these studies is to ensure that datasets from the same subject are properly paired. Given the heterogeneity of data types, file types and sequencing depths in a multi-dimensional study, a robust program that provides a standardized metric for genotype comparisons would be useful. Here, we describe NGSCheckMate, a user-friendly software package for verifying sample identities from FASTQ, BAM or VCF files. This tool uses a model-based method to compare allele read fractions at known single-nucleotide polymorphisms, considering the depth-dependent behavior of similarity metrics for identical and unrelated samples. Our evaluation shows that NGSCheckMate is effective for a variety of data types, including exome sequencing, whole-genome sequencing, RNA-seq, ChIP-seq, targeted sequencing and single-cell whole-genome sequencing, with a minimal requirement for sequencing depth (>0.5X). An alignment-free module can be run directly on FASTQ files for a quick initial check. We recommend using this software as a QC step in NGS studies. NGSCheckMate is available at https://github.com/parklab/NGSCheckMate. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
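
    The underlying idea lends itself to a compact illustration. The sketch below is conceptual only — it is not NGSCheckMate's code, and the depth-dependent cutoff is a made-up placeholder — but it shows why allele-fraction correlation at a shared SNP panel separates same-subject from unrelated pairs.

```python
# Conceptual sketch: same-subject datasets have correlated variant allele
# fractions (VAFs) at shared SNPs; the cutoff below is hypothetical.
import numpy as np

def same_subject(vaf_a, vaf_b, mean_depth):
    cutoff = 0.8 if mean_depth >= 5 else 0.6   # looser at noisy low depth
    return np.corrcoef(vaf_a, vaf_b)[0, 1] >= cutoff

rng = np.random.default_rng(1)
truth = rng.uniform(0, 1, 2000)                            # one subject's VAFs
sample1 = np.clip(truth + rng.normal(0, 0.08, 2000), 0, 1)
sample2 = np.clip(truth + rng.normal(0, 0.08, 2000), 0, 1)
unrelated = rng.uniform(0, 1, 2000)
print(same_subject(sample1, sample2, mean_depth=30))    # True: same subject
print(same_subject(sample1, unrelated, mean_depth=30))  # False: unrelated
```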

  10. Bidding-based autonomous process planning and scheduling

    NASA Astrophysics Data System (ADS)

    Gu, Peihua; Balasubramanian, Sivaram; Norrie, Douglas H.

    1995-08-01

    Improving productivity through computer integrated manufacturing systems (CIMS) and concurrent engineering requires that the islands of automation in an enterprise be completely integrated. The first step in this direction is to integrate design, process planning, and scheduling, which can be achieved through a bidding-based process planning approach. The product is represented in a STEP model with detailed design and administrative information, including design specifications, batch size, and due dates. Upon arrival at the manufacturing facility, the product is registered with the shop floor manager, which is essentially a coordinating agent. The shop floor manager broadcasts the product's requirements to the machines. The shop contains autonomous machines that have knowledge of their functionality, capabilities, tooling, and schedule. Each machine has its own process planner and responds to the product's request in its own way, consistent with its capabilities and capacity. When more than one machine offers certain process(es) for the same requirements, they enter into negotiation; based on processing time, due date, and cost, one of the machines wins the contract. The successful machine updates its schedule and advises the product to request raw material for processing. The concept was implemented using a multi-agent system, with task decomposition and planning achieved through contract nets. Examples are included to illustrate the approach.
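
    A minimal sketch of this bidding cycle (class and parameter names hypothetical, not from the paper): the product's requirements are broadcast, each machine bids only on processes it can perform, and the cheapest bid that meets the due date wins the contract.

```python
# Toy contract-net bidding: broadcast, bid, award to cheapest feasible.
from dataclasses import dataclass

@dataclass
class Bid:
    machine: str
    hours: float   # backlog plus processing time
    cost: float

class Machine:
    def __init__(self, name, capabilities, hourly_rate, backlog_hours):
        self.name, self.capabilities = name, capabilities
        self.rate, self.backlog = hourly_rate, backlog_hours

    def bid(self, operation, hours):
        if operation not in self.capabilities:
            return None                          # cannot offer this process
        return Bid(self.name, self.backlog + hours, self.rate * hours)

def award(machines, operation, hours, due_hours):
    bids = [b for m in machines if (b := m.bid(operation, hours))]
    feasible = [b for b in bids if b.hours <= due_hours]   # due-date check
    return min(feasible, key=lambda b: b.cost, default=None)

shop = [Machine("mill-1", {"milling", "drilling"}, 40.0, 12),
        Machine("mill-2", {"milling"}, 55.0, 2),
        Machine("lathe-1", {"turning"}, 35.0, 0)]
print(award(shop, "milling", hours=6, due_hours=10))   # mill-2: only feasible bid
```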

  11. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  12. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
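
    The third part admits a compact generic sketch. Below, the trained surrogate is replaced by the Ishigami test function as a stand-in metamodel, and first-order Sobol indices are estimated with the pick-freeze scheme (the Saltelli 2010 estimator); this illustrates the variance decomposition step, not the authors' implementation.

```python
# First-order Sobol indices on a stand-in metamodel via pick-freeze.
import numpy as np

def metamodel(x):                    # Ishigami function as surrogate stand-in
    a, b = 7.0, 0.1
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 \
        + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = metamodel(A), metamodel(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]               # freeze every input except x_i
    S_i = np.mean(fB * (metamodel(AB) - fA)) / var   # Saltelli 2010 estimator
    print(f"S_{i + 1} = {S_i:.3f}")  # analytic: 0.314, 0.442, 0.000
```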

  13. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions with quadratic and cubic polynomials for each individual segment, and using Legendre polynomials of age. Residual variances were grouped in four age classes. For the B-spline functions, direct additive genetic and permanent environmental effects were fitted with 2 to 4 segments; for the Legendre polynomials, orders of fit ranged from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious for describing the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models than for RRM at the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant across the selection generations. The genetic response to selection for BW at the evaluated ages was greater for RRM than for multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments provided the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in the genetic evaluation of breeding programs.
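
    One building block of such random regression models can be shown compactly: the Legendre covariate matrix over ages, built after standardising age to [-1, 1]. This illustrates only the regression basis; the pedigree-based (co)variance estimation itself is well beyond a sketch.

```python
# Legendre design matrix for ages, as used in RRM covariates.
import numpy as np

ages = np.array([1.0, 7.0, 14.0, 21.0, 28.0, 35.0])          # days of age
t = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1  # map to [-1, 1]
order = 3                                                    # cubic fit
Phi = np.polynomial.legendre.legvander(t, order)             # (n_ages, order+1)
print(np.round(Phi, 3))
```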

  14. Process Model of A Fusion Fuel Recovery System for a Direct Drive IFE Power Reactor

    NASA Astrophysics Data System (ADS)

    Natta, Saswathi; Aristova, Maria; Gentile, Charles

    2008-11-01

    A task has been initiated to develop a detailed representative model of the fuel recovery system (FRS) in the prospective direct drive inertial fusion energy (IFE) reactor. As part of the conceptual design phase of the project, a chemical process model has been developed in order to observe the interaction of system components. The process model is built in FEMLAB Multiphysics software with the corresponding chemical engineering module (CEM). Initially, the reactants, system structure, and processes are defined using known chemical species of the target chamber exhaust. Each step within the fuel recovery system is modeled compartmentally, and the compartments are then merged to form the closed-loop fuel recovery system. The output, which includes the physical properties and chemical content of the products, is analyzed after each step of the system to determine the most efficient and productive system parameters, serving to identify and mitigate possible bottlenecks. This modeling evaluation is instrumental in optimizing and closing the fusion fuel cycle in a direct drive IFE power reactor. The results of the modeling are presented in this paper.

  15. Deployment of a multi-link flexible structure

    NASA Astrophysics Data System (ADS)

    Na, Kyung-Su; Kim, Ji-Hwan

    2006-06-01

    Deployment of a multi-link beam structure undergoing locking is analyzed using Timoshenko beam theory. In the modeling of the system, the dynamic forces are assumed to be torques and restoring forces due to the torsion spring at each joint. Hamilton's principle is used to derive the equations of motion, and the finite element method is adopted to analyze the system. Newmark time integration and Newton-Raphson iteration are used to solve the non-linear equations of motion at each time step. Locking at the joints of the multi-link flexible structure is analyzed by the momentum balance method. Numerical results are compared with previous experimental data. The angles and angular velocities of each joint, and the tip displacement and velocity of each link, are investigated to study the motions of the links at each time step. To analyze the effect of thickness on the motion of the links, the angle and tip displacement of each link are compared for various slenderness ratios. Additionally, in order to investigate the effect of shear, the tip displacements of a Timoshenko beam are compared with those of an Euler-Bernoulli beam.
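
    The time-stepping scheme named above can be sketched for the linear case M*a + C*v + K*u = f(t) with the average-acceleration Newmark method (beta = 1/4, gamma = 1/2); the Newton-Raphson iterations that the paper wraps around it for the non-linear equations are omitted here.

```python
# Newmark time integration (average acceleration) for M*a + C*v + K*u = f.
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)           # initial accel.
    Keff = M / (beta * dt**2) + gamma / (beta * dt) * C + K  # effective stiffness
    for k in range(1, n_steps + 1):
        rhs = (f(k * dt)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

# Undamped unit oscillator: displacement should track cos(t).
M, C, K = np.eye(1), np.zeros((1, 1)), np.eye(1)
u, v = newmark(M, C, K, lambda t: np.zeros(1), np.array([1.0]), np.zeros(1),
               dt=0.01, n_steps=628)
print(u, v)   # close to [1.0], [0.0] after one period (t ~ 2*pi)
```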

  16. A novel framework for feature extraction in multi-sensor action potential sorting.

    PubMed

    Wu, Shun-Chi; Swindlehurst, A Lee; Nenadic, Zoran

    2015-09-30

    Extracellular recordings of multi-unit neural activity have become indispensable in neuroscience research. The analysis of the recordings begins with the detection of the action potentials (APs), followed by a classification step in which each AP is associated with a given neural source. A feature extraction step is required prior to classification in order to reduce the dimensionality of the data and the impact of noise, allowing source clustering algorithms to work more efficiently. In this paper, we propose a novel framework for multi-sensor AP feature extraction based on the so-called Matched Subspace Detector (MSD), which is shown to be a natural generalization of standard single-sensor algorithms. Clustering using both simulated data and real AP recordings taken in the locust antennal lobe demonstrates that the proposed approach yields discriminatory features and leads to promising results. Unlike existing methods, the proposed algorithm finds joint spatio-temporal feature vectors that match the dominant subspace observed in the two-dimensional data, without the need for a forward propagation model or AP templates. The proposed MSD approach thus provides more discriminatory features for unsupervised AP sorting applications. Copyright © 2015 Elsevier B.V. All rights reserved.
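
    A simplified illustration of the joint spatio-temporal idea follows (this is not the full MSD, and all data are synthetic): each multi-channel AP snippet is stacked into one vector, the dominant subspace is estimated by SVD across detected snippets, and the projection coefficients serve as low-dimensional features for clustering.

```python
# Subspace features from stacked spatio-temporal AP snippets (simplified).
import numpy as np

rng = np.random.default_rng(3)
n_aps, n_channels, n_samples = 200, 4, 30
template = np.outer(rng.normal(size=n_channels), np.hanning(n_samples))
snippets = (template[None] * rng.uniform(0.5, 1.5, (n_aps, 1, 1))
            + 0.1 * rng.normal(size=(n_aps, n_channels, n_samples)))

X = snippets.reshape(n_aps, -1)          # joint spatio-temporal vectors
X = X - X.mean(axis=0)                   # center across snippets
_, _, Vt = np.linalg.svd(X, full_matrices=False)
features = X @ Vt[:3].T                  # project onto dominant 3-D subspace
print(features.shape)                    # (200, 3) feature vectors
```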

  17. Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components

    NASA Technical Reports Server (NTRS)

    Reck, Theodore (Inventor); Perez, Jose Vicente Siles (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Jung-Kubiak, Cecile (Inventor); Mehdi, Imran (Inventor); Chattopadhyay, Goutam (Inventor); Lin, Robert H. (Inventor); Peralta, Alejandro (Inventor)

    2016-01-01

    A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixers and multipliers, as well as waveguides, hybrids, OMTs and twists, have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture that provides ultra-compact multi-pixel receiver front-ends in the THz range.

  18. A direct Arbitrary-Lagrangian-Eulerian ADER-WENO finite volume scheme on unstructured tetrahedral meshes for conservative and non-conservative hyperbolic systems in 3D

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael

    2014-10-01

    In this paper we present a new family of high order accurate Arbitrary-Lagrangian-Eulerian (ALE) one-step ADER-WENO finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations with stiff source terms on moving tetrahedral meshes in three space dimensions. A WENO reconstruction technique is used to achieve high order of accuracy in space, while an element-local space-time Discontinuous Galerkin finite element predictor on moving curved meshes is used to obtain a high order accurate one-step time discretization. Within the space-time predictor the physical element is mapped onto a reference element using a high order isoparametric approach, where the space-time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space-time nodes. Since our algorithm is cell-centered, the final mesh motion is computed by using a suitable node solver algorithm. A rezoning step as well as a flattener strategy are used in some of the test problems to avoid mesh tangling or excessive element deformations that may occur when the computation involves strong shocks or shear waves. The ALE algorithm presented in this article belongs to the so-called direct ALE methods because the final Lagrangian finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, with the rezoned geometry taken already into account during the computation of the fluxes. We apply our new high order unstructured ALE schemes to the 3D Euler equations of compressible gas dynamics, for which a set of classical numerical test problems has been solved and for which convergence rates up to sixth order of accuracy in space and time have been obtained. We furthermore consider the equations of classical ideal magnetohydrodynamics (MHD) as well as the non-conservative seven-equation Baer-Nunziato model of compressible multi-phase flows with stiff relaxation source terms.

  19. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    NASA Astrophysics Data System (ADS)

    Kou, Jisheng; Sun, Shuyu

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from the conventional sharp-interface two-phase flow model in that we use the capillary pressure directly, instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.

  20. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Jisheng; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from the conventional sharp-interface two-phase flow model in that we use the capillary pressure directly, instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
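
    The Young-Laplace relation and its Tolman correction mentioned in this abstract are easy to illustrate numerically: for a droplet of radius R, Young-Laplace gives p_c = 2*sigma/R, and the first-order Tolman correction replaces sigma by sigma*(1 - 2*delta/R). The sigma and delta values below are arbitrary placeholders, not the paper's.

```python
# Capillary pressure: Young-Laplace vs first-order Tolman correction.
import numpy as np

sigma, delta = 0.072, 2e-10          # surface tension (N/m); Tolman length (m)
for R in np.logspace(-9, -6, 4):     # droplet radii from 1 nm to 1 um
    p_yl = 2 * sigma / R
    p_tol = 2 * sigma * (1 - 2 * delta / R) / R
    print(f"R={R:.1e} m  Young-Laplace={p_yl:.3e} Pa  Tolman={p_tol:.3e} Pa")
```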

  1. Quantification of the multi-streaming effect in redshift space distortion

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Oh, Minji

    2017-05-01

    Both multi-streaming (random motion) and bulk motion cause the Finger-of-God (FoG) effect in redshift space distortion (RSD). We directly measure the multi-streaming effect in RSD from simulations, showing that it induces an additional, non-negligible FoG damping of the redshift space density power spectrum. We show that including the multi-streaming effect significantly improves the RSD modelling. We also provide a theoretical explanation of the measured effect based on the halo model, including a fitting formula with one to two free parameters. The improved understanding of the FoG effect helps break the fσ8-σv degeneracy in RSD cosmology, and has the potential to significantly improve cosmological constraints.
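
    The standard dispersion-model form into which such FoG damping enters can be sketched as follows; the toy power-law spectrum and all parameter values are placeholders, not the paper's measured quantities or its fitting formula.

```python
# Kaiser term with Gaussian FoG damping: P_s = (b + f*mu^2)^2 * P_lin * D_FoG.
import numpy as np

b, f, sigma_v = 1.5, 0.5, 3.0        # bias, growth rate, dispersion (Mpc/h)

def P_lin(k):
    return 1e4 * k**-1.5             # toy linear power spectrum

def P_redshift(k, mu):
    kaiser = (b + f * mu**2) ** 2
    fog = np.exp(-(k * mu * sigma_v) ** 2)   # Gaussian FoG damping
    return kaiser * P_lin(k) * fog

k = 0.2
for mu in (0.0, 0.5, 1.0):           # damping grows along the line of sight
    print(f"mu={mu}: P_s={P_redshift(k, mu):.1f}")
```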

  2. Performance analysis of MISO multi-hop FSO links over log-normal channels with fog and beam divergence attenuations

    NASA Astrophysics Data System (ADS)

    Abaza, Mohamed; Mesleh, Raed; Mansour, Ali; Aggoune, el-Hadi

    2015-01-01

    The performance analysis of a multi-hop decode-and-forward relaying free-space optical (FSO) communication system is presented in this paper. The considered FSO system uses intensity modulation and direct detection as the means of transmission and reception. Atmospheric turbulence is modeled as a log-normal channel, and different weather attenuation effects and geometric losses are taken into account. It is shown that multi-hop relaying is an efficient technique to mitigate such effects in FSO communication systems. A comparison with direct-link and multiple-input single-output (MISO) systems considering correlation effects at the transmitter is provided. Results show that MISO multi-hop FSO systems are superior to their counterparts over links exhibiting high attenuation. Monte Carlo simulation results are provided to validate the bit error rate (BER) analyses and conclusions.
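
    A Monte Carlo sketch of the end-to-end link is given below under stated assumptions: on-off keying with IM/DD, independent log-normal fading per hop, and decode-and-forward relaying, so the end-to-end bit is wrong whenever an odd number of hops err. The SNR and turbulence parameters are illustrative, not the paper's values.

```python
# Monte Carlo BER of a multi-hop DF FSO link over log-normal fading.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_bits, n_hops = 200_000, 3
snr0_db = 14.0                       # per-hop electrical SNR without fading
sigma_x = 0.3                        # log-amplitude standard deviation

errors = np.zeros(n_bits, dtype=int)
for _ in range(n_hops):
    # Irradiance h = exp(2X), X ~ N(-sigma_x^2, sigma_x^2), so E[h] = 1.
    h = np.exp(rng.normal(-2 * sigma_x**2, 2 * sigma_x, n_bits))
    snr = 10 ** (snr0_db / 10) * h**2
    p_bit = norm.sf(np.sqrt(snr))    # conditional OOK BER, Q(sqrt(SNR))
    errors += rng.random(n_bits) < p_bit
print("end-to-end BER ~", np.mean(errors % 2 == 1))   # odd error count flips bit
```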

  3. What experimental experience affects dogs' comprehension of human communicative actions?

    PubMed

    Hauser, Marc D; Comins, Jordan A; Pytka, Lisa M; Cahill, Donal P; Velez-Calderon, Sofia

    2011-01-01

    Studies of dogs report that individuals reliably respond to the goal-directed communicative actions (e.g., pointing) of human experimenters. All of these studies use some version of a multi-trial approach, thereby allowing for the possibility of rapid learning within an experimental session. The experiments reported here ask whether dogs can respond correctly to a communicative action based on only a single presentation, thereby eliminating the possibility of learning within the experimental context. We tested 173 dogs. For each dog reaching our test criteria, we used a single presentation of six different goal-directed actions within a session, asking whether the dog correctly followed each action to a target goal (a container with concealed food): (1) a distal hand point, (2) a step toward one container, (3) a hand point to one container followed by a step toward the other, (4) a step toward one container and a point to the other, (5) a distal foot point with the experimenter's hands free, and (6) a distal foot point with the experimenter's hands occupied. Given only a single presentation, dogs selected the correct container when the experimenter hand pointed, foot pointed with hands occupied, or stepped closer to the target container, but failed on the other actions, despite the same method being used. The fact that dogs correctly followed foot pointing with hands occupied, but not with hands free, suggests that they are sensitive to environmental constraints, and use this information to infer rational, goal-directed action. We discuss these results in light of the role of experience in recognizing communicative gestures, as well as the significance of coding criteria for studies of canine competence. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract only one region at a time, DMA allows several deformable models to be integrated to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected for different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or intra-structure inhomogeneities, giving excellent quantitative results.

  5. Implementation on a nonlinear concrete cracking algorithm in NASTRAN

    NASA Technical Reports Server (NTRS)

    Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.

    1976-01-01

    A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step, allowing restarts within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.

  6. Direct dynamic kinetic analysis and computer simulation of growth of Clostridium perfringens in cooked turkey during cooling

    USDA-ARS?s Scientific Manuscript database

    This research applied a new one-step methodology to directly construct a tertiary model for describing the growth of C. perfringens in cooked turkey meat under dynamically cooling conditions. The kinetic parameters of the growth models were determined by numerical analysis and optimization using mu...

  7. Comparison of 1-step and 2-step methods of fitting microbiological models.

    PubMed

    Jewell, Keith

    2012-11-15

    Previous conclusions that a 1-step fitting method gives more precise coefficients than the traditional 2-step method are confirmed by application to three different data sets. It is also shown that, in comparison to 2-step fits, the 1-step method gives better fits to the data (often substantially) with directly interpretable regression diagnostics and standard errors. The improvement is greatest at extremes of environmental conditions and it is shown that 1-step fits can indicate inappropriate functional forms when 2-step fits do not. 1-step fits are better at estimating primary parameters (e.g. lag, growth rate) as well as concentrations, and are much more data efficient, allowing the construction of more robust models on smaller data sets. The 1-step method can be straightforwardly applied to any data set for which the 2-step method can be used and additionally to some data sets where the 2-step method fails. A 2-step approach is appropriate for visual assessment in the early stages of model development, and may be a convenient way to generate starting values for a 1-step fit, but the 1-step approach should be used for any quantitative assessment. Copyright © 2012 Elsevier B.V. All rights reserved.
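
    A hedged sketch of the 1-step idea follows (generic, not the paper's models or data): a logistic primary growth model whose rate comes from a Ratkowsky square-root secondary model is fitted in a single regression to counts pooled over all temperatures, so primary and secondary parameters share one set of residuals.

```python
# 1-step fit: primary + secondary model estimated jointly on all data.
import numpy as np
from scipy.optimize import least_squares

def log_counts(params, t, T, y0, ymax):
    b, Tmin = params
    mu = (b * (T - Tmin)) ** 2                   # Ratkowsky secondary model
    return ymax - np.log10(1 + (10 ** (ymax - y0) - 1) * np.exp(-mu * t))

rng = np.random.default_rng(5)                   # synthetic counts, 3 temperatures
t = np.tile(np.linspace(0, 24, 9), 3)
T = np.repeat([10.0, 15.0, 20.0], 9)
y = log_counts([0.04, 2.0], t, T, y0=3.0, ymax=9.0) + rng.normal(0, 0.1, t.size)

fit = least_squares(lambda p: log_counts(p, t, T, 3.0, 9.0) - y, x0=[0.02, 0.0])
print("estimated b, Tmin:", fit.x)               # joint 1-step estimates
```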

  8. Contributions of dopamine-related genes and environmental factors to highly sensitive personality: a multi-step neuronal system-level approach.

    PubMed

    Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi

    2011-01-01

    Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environment factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphism's main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics and the lack of reproducible genetic effects observed currently from molecular genetic studies.

  9. Contributions of Dopamine-Related Genes and Environmental Factors to Highly Sensitive Personality: A Multi-Step Neuronal System-Level Approach

    PubMed Central

    Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi

    2011-01-01

    Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environment factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphism's main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics and the lack of reproducible genetic effects observed currently from molecular genetic studies. PMID:21765900
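
    The three statistical steps admit a generic sketch on synthetic data (additive genotype effects assumed; nothing here reproduces the study): a one-way ANOVA screen per polymorphism, multiple regression on the retained loci, and a permutation test of the model R^2.

```python
# ANOVA screen -> multiple regression -> permutation test (synthetic data).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(6)
n, n_snps = 480, 98
geno = rng.integers(0, 3, (n, n_snps))           # 0/1/2 genotype codes
pheno = geno[:, :5] @ rng.normal(0.4, 0.1, 5) + rng.normal(0, 1.5, n)

# Step 1: per-SNP ANOVA screen at p < 0.05.
keep = [j for j in range(n_snps)
        if f_oneway(*(pheno[geno[:, j] == g] for g in (0, 1, 2))).pvalue < 0.05]

# Step 2: multiple regression on the retained SNPs.
def r_squared(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2 = r_squared(pheno, geno[:, keep])

# Step 3: permutation test of the observed R^2.
perm = np.array([r_squared(rng.permutation(pheno), geno[:, keep])
                 for _ in range(500)])
print(f"kept {len(keep)} SNPs, R^2={r2:.3f}, perm p={np.mean(perm >= r2):.3f}")
```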

  10. Laser Scanning Holographic Lithography for Flexible 3D Fabrication of Multi-Scale Integrated Nano-structures and Optical Biosensors

    PubMed Central

    Yuan, Liang (Leon); Herman, Peter R.

    2016-01-01

    Three-dimensional (3D) periodic nanostructures underpin a promising research direction on the frontiers of nanoscience and technology to generate advanced materials for exploiting novel photonic crystal (PC) and nanofluidic functionalities. However, formation of uniform and defect-free 3D periodic structures over large areas that can further integrate into multifunctional devices has remained a major challenge. Here, we introduce a laser scanning holographic method for 3D exposure in thick photoresist that combines the unique advantages of large area 3D holographic interference lithography (HIL) with the flexible patterning of laser direct writing to form both micro- and nano-structures in a single exposure step. Phase mask interference patterns accumulated over multiple overlapping scans are shown to stitch seamlessly and form uniform 3D nanostructure with beam size scaled to small 200 μm diameter. In this way, laser scanning is presented as a facile means to embed 3D PC structure within microfluidic channels for integration into an optofluidic lab-on-chip, demonstrating a new laser HIL writing approach for creating multi-scale integrated microsystems. PMID:26922872

  11. Laser direct-write for fabrication of three-dimensional paper-based devices.

    PubMed

    He, P J W; Katis, I N; Eason, R W; Sones, C L

    2016-08-16

    We report the use of a laser-based direct-write (LDW) technique that allows the design and fabrication of three-dimensional (3D) structures within a paper substrate that enables implementation of multi-step analytical assays via a 3D protocol. The technique is based on laser-induced photo-polymerisation, and through adjustment of the laser writing parameters such as the laser power and scan speed we can control the depths of hydrophobic barriers that are formed within a substrate which, when carefully designed and integrated, produce 3D flow paths. So far, we have successfully used this depth-variable patterning protocol for stacking and sealing of multi-layer substrates, for assembly of backing layers for two-dimensional (2D) lateral flow devices and finally for fabrication of 3D devices. Since the 3D flow paths can also be formed via a single laser-writing process by controlling the patterning parameters, this is a distinct improvement over other methods that require multiple complicated and repetitive assembly procedures. This technique is therefore suitable for cheap, rapid and large-scale fabrication of 3D paper-based microfluidic devices.

  12. 3D Model of the Neal Hot Springs Geothermal Area

    DOE Data Explorer

    Faulds, James E.

    2013-12-31

    The Neal Hot Springs geothermal system lies in a left-step in a north-striking, west-dipping normal fault system, consisting of the Neal Fault to the south and the Sugarloaf Butte Fault to the north (Edwards, 2013). The Neal Hot Springs 3D geologic model consists of 104 faults and 13 stratigraphic units. The stratigraphy is sub-horizontal, dipping <10 degrees, with no predominant dip direction. Geothermal production is exclusively from the Neal Fault south of, and within, the step-over, while geothermal injection is into both the Neal Fault south of the step-over and faults within the step-over.

  13. Ab Initio calculation on magnetism of monatomic Fe nanowire on Au (111) surface

    NASA Astrophysics Data System (ADS)

    Yasui, Takashi; Nawate, Masahiko

    2010-01-01

    The magnetic anisotropy of a one-dimensional monatomic Fe wire on the Au (111) surface has been theoretically analyzed using the WIEN2k framework. The model simulates the experimentally observed ferromagnetic Fe monatomic wire self-organized along the terrace edge of the Au (788) plane, which exhibits magnetization perpendicular to both the wire and the Au plane. For the model consisting of a one-dimensional Fe wire placed on the Au (111) plane at the Au lattice sites, the calculation yields no significant anisotropy. On the other hand, the model in which the Fe wire is formed along an Au terrace-like step indicates an easy direction along the wire, which differs from the direction observed experimentally. When disorder is introduced in the Fe wire array, the easy direction changes: for the model in which every other Fe atom is slightly closer to the Au step (approx. 0.0091 nm), the easy direction turns out to be perpendicular to the wire and parallel to the Au plane, that is, along the dislocation direction. The disorder in the Fe wire thus seems to play a significant role in the anisotropy.

  14. Modeling Woven Polymer Matrix Composites with MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M. (Technical Monitor)

    2000-01-01

    NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) is used to predict the elastic properties of plain weave polymer matrix composites (PMCs). The traditional one-step three-dimensional homogenization procedure that has been used in conjunction with MAC/GMC for modeling woven composites in the past is inaccurate due to the lack of shear coupling inherent to the model. However, by performing a two-step homogenization procedure in which the woven composite repeating unit cell is homogenized independently in the through-thickness direction prior to homogenization in the plane of the weave, MAC/GMC can now accurately model woven PMCs. This two-step procedure is outlined and implemented, and predictions are compared with results from the traditional one-step approach and with other models and experiments from the literature. Full coupling of this two-step technique with MAC/GMC will result in a widely applicable, efficient, and accurate tool for the design and analysis of woven composite materials and structures.

  15. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  16. lumpR 2.0.0: an R package facilitating landscape discretisation for hillslope-based hydrological models

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2017-08-01

    The characteristics of a landscape pose essential factors for hydrological processes; an adequate representation of a catchment's landscape in hydrological models is therefore vital. However, many such models exist, differing, amongst other things, in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model-specific, commercial, or dependent on commercial back-end software, and allow only limited workflow automation or none at all. Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation directed at large-scale application via a hierarchical multi-scale approach. The package addresses existing limitations as it is free and open source, easily extendible to other hydrological models, and its workflow can be fully automated. Moreover, it is user-friendly, as the direct coupling to a GIS allows immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, fully automatic operation also allows extensive analysis of aspects related to landscape discretisation. In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters. However, the parameters determining the sizes of subbasins and hillslopes proved more important than the others, which include the number of representative hillslopes, the number of attributes employed for the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.

  17. 76 FR 74667 - Airworthiness Directives; The Boeing Company Model 737-200, -200C, -300, -400, and -500 Series...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... at the chem-milled steps, which could result in sudden fracture and failure of the fuselage skin panels, and ...

  18. Managing Multi-center Flow Cytometry Data for Immune Monitoring

    PubMed Central

    White, Scott; Laske, Karoline; Welters, Marij JP; Bidmon, Nicole; van der Burg, Sjoerd H; Britten, Cedrik M; Enzor, Jennifer; Staats, Janet; Weinhold, Kent J; Gouttefangeas, Cécile; Chan, Cliburn

    2014-01-01

    With the recent results of promising cancer vaccines and immunotherapy, immune monitoring has become increasingly relevant for measuring treatment-induced effects on T cells, and an essential tool for shedding light on the mechanisms responsible for a successful treatment. Flow cytometry is the canonical multi-parameter assay for the fine characterization of single cells in solution, and is ubiquitously used in pre-clinical tumor immunology and in cancer immunotherapy trials. Current state-of-the-art polychromatic flow cytometry involves multi-step, multi-reagent assays followed by sample acquisition on sophisticated instruments capable of capturing up to 20 parameters per cell at a rate of tens of thousands of cells per second. Given the complexity of flow cytometry assays, reproducibility is a major concern, especially for multi-center studies. A promising approach for improving reproducibility is the use of automated analysis borrowing from statistics, machine learning and information visualization, as these methods directly address the subjectivity, operator-dependence, labor-intensiveness and low fidelity of manual analysis. However, it is quite time-consuming to investigate and test new automated analysis techniques on large data sets without some centralized information management system. For large-scale automated analysis to be practical, the presence of consistent and high-quality data linked to the raw FCS files is indispensable. In particular, the use of machine-readable standard vocabularies to characterize channel metadata is essential when constructing analytic pipelines to avoid errors in the processing, analysis and interpretation of results. For automation, this high-quality metadata needs to be programmatically accessible, implying the need for a consistent Application Programming Interface (API). In this manuscript, we propose that upfront time spent normalizing flow cytometry data to conform to carefully designed data models enables automated analysis, potentially saving time in the long run. The ReFlow informatics framework was developed to address these data management challenges. PMID:26085786

  19. Models of evaluation of public joint-stock property management

    NASA Astrophysics Data System (ADS)

    Yakupova, N. M.; Levachkova, S.; Absalyamova, S. G.; Kvon, G.

    2017-12-01

    The paper deals with models for evaluating the performance of both the management company and the individual subsidiaries on the basis of a combination of element-wise, multi-parameter and target approaches. Because indicators of financial and economic activity are multi-dimensional and multi-directional, the article shows that the degree of achievement of objectives should be assessed with a multivariate ordinal model: a set of indicators ordered by growth rate, such that maintaining this order over a long time interval ensures the effective functioning of the enterprise in the long term. It is shown that these models can be regarded as tools for monitoring the implementation of strategies and for justifying the effectiveness of management decisions.

  20. Computational modeling of the effect of external electron injection into a direct-current microdischarge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panneer Chelvam, Prem Kumar; Raja, Laxminarayan L.

    2015-12-28

    Electron emission from the electrode surface plays an important role in determining the structure of a direct-current microdischarge. Here we have developed a computational model of a direct-current microdischarge to study the effect of external electron injection from the cathode surface into the discharge to manipulate its properties. The model provides a self-consistent, multi-species, multi-temperature fluid representation of the plasma. A microdischarge with a metal-insulator-metal configuration is chosen for this study. The effect of external electron injection on the structure and properties of the microdischarge is described. The transient behavior of the microdischarge during the electron injection is examined. The nonlinearities in the dynamics of the plasma result in a large increase of conduction current after active electron injection. For the conditions simulated, a switching time of ∼100 ns from a low-current to high-current discharge state is realized.
